id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---|
2310.01283 | The influence of coordinated behavior on toxicity | In the intricate landscape of social media, genuine content dissemination may
be altered by a number of threats. Coordinated Behavior (CB), defined as
orchestrated efforts by entities to deceive or mislead users about their
identity and intentions, emerges as a tactic to exploit or manipulate online
discourse. This study delves into the relationship between CB and toxic
conversation on X (formerly known as Twitter). Using a dataset of 11 million
tweets from 1 million users preceding the 2019 UK general election, we show
that users displaying CB typically disseminate less harmful content,
irrespective of political affiliation. However, distinct toxicity patterns
emerge among different coordinated cohorts. Compared to their non-CB
counterparts, CB participants show marginally higher toxicity levels only when
considering their original posts. We further show the effects of CB-driven
toxic content on non-CB users, gauging its impact based on political leanings.
Our findings suggest that CB only has a limited impact on the toxicity of
digital discourse. | Edoardo Loru, Matteo Cinelli, Maurizio Tesconi, Walter Quattrociocchi | 2023-10-02T15:35:04Z | http://arxiv.org/abs/2310.01283v2 | # The influence of coordinated behavior on toxicity
###### Abstract
In the intricate landscape of social media, genuine content dissemination may be altered by a number of threats. Coordinated Behavior (CB), defined as orchestrated efforts by entities to deceive or mislead users about their identity and intentions, emerges as a tactic to exploit or manipulate online discourse. This study delves into the relationship between CB and toxic conversation on Twitter. Using a dataset of 11 million tweets from 1 million users preceding the 2019 UK General Elections, we show that users displaying CB typically disseminate less harmful content, irrespective of political affiliation. However, distinct toxicity patterns emerge among different CB cohorts. Compared to their non-CB counterparts, CB participants show marginally elevated toxicity levels only when considering their original posts. We further show the effects of CB-driven toxic content on non-CB users, gauging its impact based on political leanings. Our findings suggest a nuanced but statistically significant influence of CB on digital discourse.
## 1 Introduction
Social media are nowadays one of the main arenas for public debate, where users get their information and interact with peers under the potential influence of feed algorithms that prioritize engagement with like-minded content [1, 2, 3]. According to recent studies, such systems can challenge democracy in various ways [4, 5, 6]. Problems include the rapid spread of false information [7, 8, 9, 10, 11], growing division among groups [12, 13, 14, 15, 16], and harmful behaviors online [17, 18, 19]. Despite efforts to address these issues, solutions remain elusive [5, 6].
Further complicating this ecosystem is the phenomenon of Coordinated Behavior (CB), which can be defined as an unexpected, suspicious, or exceptional similarity among users of a group [20]. Social media campaigns, such as online activism, protests, and disinformation campaigns [21, 22, 23], generally involve participants coordinating their actions to disseminate content widely. Initially, scientific research focused on the benefits of coordination for social movements. However, it has become evident that, while benign actors such as activists use similar techniques, malicious actors engage in political astroturfing [24] and the dissemination of inappropriate content. Coordinated behavior on social media can have negative consequences, including distorting public opinion and contributing to the polarization of society. Recognizing these problems, researchers and practitioners are working on strategies to identify, characterize, and mitigate coordinated behavior [25, 26, 27]. In particular, the characterization of coordinated groups is a crucial aspect discussed in the existing literature [25]. This can be done at different levels of depth but remains essential due to the absence of ground-truth data for detection tasks. To assess the harm of coordinated behavior, established methods in the literature can be used, primarily the analysis of content shared by coordinated users, which includes the identification of fake news [28, 29] and the detection of hate speech and toxicity [18, 30].
In the context of the rapidly evolving digital landscape, the 2019 UK General Elections provide a pertinent setting to explore the dynamics of online behavior. Our study aims to disentangle the complex relationship between CB and the prevalence of toxic content on the Twitter platform. For this purpose, we analyze a dataset encompassing 11 million tweets from a diverse pool of 1 million users. From our analysis, a salient observation emerges: users demonstrating high coordination in online activities tend to disseminate less toxic content. This propensity holds regardless of their political leaning, suggesting that coordination might not necessarily be synonymous with malicious intent or negative discourse. However, when we delve deeper into the subsets of coordinated users, nuanced toxicity patterns begin to materialize, indicating a varied landscape of content sharing
even within these coordinated groups. In more detail, we observe that the extensive retweeting activity of coordinated users plays a role in evaluating their toxicity levels. Both coordinated and non-coordinated users tend to post original content with a higher toxicity level than the content they disseminate through retweets, with the former group being even more toxic than the latter. Beyond merely observing these trends, our study delves into the consequential effects of content stemming from CB efforts. Specifically, we explore how interacting with toxic content, particularly when associated with CB, affects the behaviors of non-coordinated users. A key aspect of this examination is understanding the potential modulation of reactions by political orientations. Does a user's political inclination amplify or attenuate their reaction to CB-driven toxicity? After carefully considering these dynamics, our results point towards a subtle yet undeniably present influence of CB on online conversations and narratives. While the impact might not always be overtly visible in direct content comparisons, the undercurrents shaping digital discourse reveal a significant role for coordinated behavior.
## 2 Materials and Methods
### Data
In this work, we use a publicly available1, large collection of tweets gathered in the run-up to the 2019 UK general election [20], spanning from 12 November 2019 to 12 December 2019 (election day). Within this time frame, all tweets that featured at least one of the predefined election-related hashtags in Table 1 were collected; some of the hashtags have a clear political alignment (Labour or Conservative), while the rest only refer to the election itself and can be considered neutral. Additionally, the dataset includes all tweets published by the official accounts of the two parties and their leaders and all interactions they received (i.e., retweets and replies), as summarized in Table 2. The final dataset combines these two collection processes, resulting in a set of 11,264,280 tweets posted by 1,179,659 distinct users.
Footnote 1: [https://doi.org/10.5281/zenodo.4647893](https://doi.org/10.5281/zenodo.4647893)
**Toxicity.** To estimate the toxicity conveyed by a message, we leveraged the publicly available Perspective API [31], which is an established standard for toxic speech detection. We will thus follow the definition of "toxic content" it is based upon, which is "a rude, disrespectful, or unreasonable comment that is likely to make people leave a discussion". To achieve more robust results, we standardized all tweets in our dataset by enforcing UTF-8 encoding, converting them to lower-case, and stripping them of all hashtags, URLs, and emojis. We then input them to the Perspective API and assign to each the toxicity score output by the model, a value in the range \([0,1]\) that represents "the likelihood that someone will perceive the text as toxic" [31].
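As an illustration, the preprocessing and scoring step can be sketched as follows; the endpoint and response fields follow the public Perspective API documentation, while the cleaning rules (regular expressions and ASCII-based emoji stripping) are simplified assumptions rather than our exact pipeline.

```python
import re
import requests

API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def clean_tweet(text: str) -> str:
    # Enforce UTF-8, lower-case, and strip hashtags, URLs, and (crudely) emojis.
    text = text.encode("utf-8", errors="ignore").decode("utf-8").lower()
    text = re.sub(r"https?://\S+", " ", text)               # URLs
    text = re.sub(r"#\w+", " ", text)                       # hashtags
    text = text.encode("ascii", errors="ignore").decode()   # simplistic emoji removal
    return re.sub(r"\s+", " ", text).strip()

def toxicity_score(text: str, api_key: str) -> float:
    """Return Perspective's TOXICITY summary score in [0, 1]."""
    payload = {
        "comment": {"text": clean_tweet(text)},
        "requestedAttributes": {"TOXICITY": {}},
        "languages": ["en"],
    }
    resp = requests.post(API_URL, params={"key": api_key}, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
```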
Figure 1: Empirical Cumulative Distribution Function (ECDF) of the number of tweets posted by superspreaders compared to the rest of the users, taking into account original tweets (left panel) and retweets (right panel).
**Coordinated communities.** By applying the state-of-the-art method proposed in [20], we subsequently identified the coordinated users within our dataset, assigning each user a score between 0 and 1 that measures the extent of coordination exhibited by that user. Firstly, we selected the top 1% retweeters in the dataset, which from this point onward we will refer to as _superspreaders_, resulting in a subset of 10,782 users. Despite being characterized by extremely prolific retweeting activity, in Fig. 1 we show that superspreaders also tend to produce more original tweets than other users.
Secondly, we computed the TF-IDF vector of the IDs of the tweets retweeted by each superspreader, thus obtaining for each a retweet-based feature vector. This weighting allows us to place more importance on the retweeting of unpopular tweets, which is a telltale sign of suspicious behavior. By computing the pairwise cosine similarities between user vectors, we thus obtained a weighted undirected similarity network, whose nodes are the superspreaders we identified earlier and whose edge weights represent their similarity in retweet activity. We then extracted the multiscale backbone [32] of the similarity network so as to only retain statistically relevant connections, resulting in a network containing 276,775 edges, and performed community detection with the Louvain clustering algorithm [33]. On this dataset, Nizzoli _et al._ (2021) [20] detected seven clusters of coordinated users, which they subsequently characterized politically by means of their hashtag usage. In this work, we will focus on providing a toxicity-based characterization for the three largest communities: the LAB and CON clusters, populated by users supporting the Labour and Conservative parties, respectively, and the TVT cluster, a community promoting the Liberal Democrats, tactical voting, anti-Brexit, and anti-Tory campaigns. In addition to being the largest, we argue that these communities are also the most representative of the dataset, and thus the most valuable for investigation: the first two have distinctly politically slanted narratives, while the latter can serve as a benchmark to validate the interplay between political alignment, toxicity, and coordinated behavior. Finally, we applied the algorithm proposed in [20] to assign each user a coordination score in the range \([0,1]\).
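The sketch below illustrates this pipeline (TF-IDF over retweeted tweet IDs, cosine similarity, multiscale backbone extraction, and Louvain clustering) with scikit-learn and NetworkX; the disparity-filter level `alpha`, the helper names, and the dense similarity construction are illustrative assumptions and do not reproduce the exact settings of [20].

```python
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def disparity_backbone(G: nx.Graph, alpha: float) -> nx.Graph:
    """Multiscale backbone (Serrano et al. [32]): keep edges whose weight is
    statistically significant for at least one endpoint at level alpha."""
    keep = []
    for u in G:
        k = G.degree(u)
        s = G.degree(u, weight="weight")
        if k <= 1 or s == 0:
            continue
        for v in G[u]:
            p = G[u][v]["weight"] / s
            if (1.0 - p) ** (k - 1) < alpha:
                keep.append((u, v, G[u][v]))
    B = nx.Graph()
    B.add_nodes_from(G)
    B.add_edges_from(keep)
    return B

def coordinated_communities(retweets_per_user: dict, alpha: float = 0.05):
    """retweets_per_user: {user_id: [retweeted tweet IDs]} for the top-1% retweeters."""
    users = list(retweets_per_user)
    docs = [" ".join(map(str, retweets_per_user[u])) for u in users]
    X = TfidfVectorizer(token_pattern=r"\S+").fit_transform(docs)   # retweet-ID TF-IDF vectors
    S = cosine_similarity(X)                                        # dense for brevity only

    G = nx.Graph()
    for i, u in enumerate(users):
        for j in range(i + 1, len(users)):
            if S[i, j] > 0:
                G.add_edge(u, users[j], weight=float(S[i, j]))

    B = disparity_backbone(G, alpha)
    communities = nx.community.louvain_communities(B, weight="weight", seed=0)
    return B, communities
```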
For the purposes of this study, we label as _coordinated_ all superspreaders having a coordination score above the median of the coordination score empirical distribution, which is approximately equal to 0.8, resulting in a set of 5,450 coordinated accounts. We consider the set of remaining users - including all other superspreaders - to be _non-coordinated_.
### Political leaning inference
We aim to investigate the role of political leaning and its relationship with coordination and toxicity. The alignment of a user on the political spectrum can be estimated by studying hashtag usage [34, 35]. Specifically, we built a tweet-hashtag bipartite network starting from the sets of all tweets in our dataset and the hashtags in them. Then, we projected this network onto a hashtag co-occurrence network, whose nodes represent hashtags and an edge between two hashtags indicates the two have appeared at least once in the same tweet; the weight of the edge is a positive integer representing the number of such co-occurrences. The co-occurrence network resulting from this projection contains a total of 100,461 nodes and 822,420 edges. This network can then be used to infer the leaning of each hashtag by applying a label propagation algorithm, using the set of hashtags we defined for the data collection step as an initial seed of known polarity in a way similar to [9]. The algorithm builds upon the multiscale backbone extraction method [32] to identify relevant connections (i.e., co-occurrences) among the network's hashtags. All nodes are initially assigned an "undefined" political leaning score, except for the initial seed of hashtags of known polarity used for data collection (Table 1). A single iteration \(i\) of the algorithm consists of two main operations: extraction of the backbone of the co-occurrence network with a disparity filter \(\alpha_{i}\); and simultaneous update of all hashtags with undefined leaning at step \(i\). The leaning assigned to a hashtag is equal to the average of the leaning of its neighbors, weighted by the number of co-occurrences with each. In this computation, all neighbors that weren't already assigned a leaning in a previous step are temporarily assigned a leaning of 0. At step \(i+1\), a larger disparity filter \(\alpha_{i+1}>\alpha_{i}\) is used to extract the backbone; this corresponds to a softer filtering that allows more nodes to be part of the extracted network. The newly added hashtags and those that haven't been labeled in the previous steps will thus be assigned a political leaning score. The algorithm stops when all hashtags have been assigned a score or when a disparity filter equal to 1 is employed, meaning the update is performed on the entire network. In the latter case, all hashtags that still haven't been updated at the end of the algorithm will be assumed to be neutral and assigned a political leaning score of zero. The length of the sequence of disparity filters will have an impact on the final result of the label propagation, as it's strongly dependent on the network it is applied on and on the nodes used as the initial seed: defining an excessively short sequence may lead to many
neutral hashtags, while a long one - although preferable - can become computationally expensive if the network involved is large. For our purposes, we have found choosing a sequence of values that scale logarithmically and span across several orders of magnitude to be the optimal choice, in contrast to setting a fixed increment to be added at each iteration.
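A simplified sketch of this propagation is given below; it reuses the `disparity_backbone` helper sketched earlier, assumes a logarithmically spaced sequence of filters, and omits implementation details (e.g., the exact schedule and convergence checks) of our actual procedure.

```python
import numpy as np
import networkx as nx

def propagate_leaning(G: nx.Graph, seeds: dict, alphas=np.logspace(-4, 0, 20)):
    """G: hashtag co-occurrence network (edge attr 'weight' = #co-occurrences).
    seeds: {hashtag: leaning in {-1, 0, +1}} from the collection hashtags (Table 1)."""
    leaning = dict(seeds)                        # known labels; all others start undefined
    for alpha in alphas:                         # progressively softer disparity filters
        B = disparity_backbone(G, alpha)         # helper from the earlier sketch
        updates = {}
        for h in B:
            if h in leaning:
                continue
            num = den = 0.0
            for nb in B[h]:
                w = B[h][nb]["weight"]
                num += w * leaning.get(nb, 0.0)  # undefined neighbours count as 0
                den += w
            if den > 0:
                updates[h] = num / den
        leaning.update(updates)                  # simultaneous update at this step
    # anything still unlabelled at the end is treated as neutral
    return {h: leaning.get(h, 0.0) for h in G}
```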
Knowing the polarity of hashtags allows us to get an estimate of the political leaning of each tweet, which we define as equal to that of the hashtag in it with the highest score in absolute value (i.e., most polarized); this avoids the leaning of polarized hashtags being averaged out in tweets where they are used in conjunction with many politically neutral hashtags. Finally, we estimate the political leaning of a user as the average political leaning score of its tweets and retweets.
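A minimal sketch of these two assignments follows; treating tweets without any labeled hashtag as neutral is an assumption made here for completeness.

```python
import numpy as np

def tweet_leaning(hashtags, leaning):
    """Leaning of a tweet = leaning of its most polarized hashtag (largest |score|)."""
    scores = [leaning.get(h, 0.0) for h in hashtags]
    return max(scores, key=abs) if scores else 0.0   # no hashtags -> neutral (assumption)

def user_leaning(tweets, leaning):
    """Average leaning over a user's tweets and retweets; each tweet is a list of hashtags."""
    return float(np.mean([tweet_leaning(h, leaning) for h in tweets])) if tweets else 0.0
```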
## 3 Results and Discussion
### Toxicity across coordinated communities
To analyze variations in toxicity across coordinated communities, each user was assigned a toxicity score. This score is determined by averaging the toxicity of a user's top 10% most toxic original tweets and retweets. By focusing on this metric, we aimed to gauge the peak toxicity a user can manifest and disseminate [36, 18] rather than their standard activity levels. Fig. 2(a) displays the joint Probability Density Function (PDF) of the coordination scores of superspreaders from the three largest clusters within the similarity network against their toxicity (as defined previously). A discernible trend emerges: strongly coordinated users tend to display below-average toxicity. This implies a propensity among coordinated users to disseminate content with low toxicity levels. This pattern is particularly pronounced within the Labour and Conservative clusters, representing the most distinctly politically aligned communities among superspreaders. Furthermore, Fig. 2(a) highlights the coordination score threshold employed to classify a superspreader as 'coordinated'; subsequent analyses will center on this specific user subset.
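For reference, the user toxicity metric can be computed as in the following sketch; the rounding rule for the top-10% cutoff is an implementation detail assumed here.

```python
import numpy as np

def user_toxicity(scores, top_frac=0.10):
    """Average of a user's top 10% most toxic tweets/retweets (Sec. 3.1)."""
    scores = np.sort(np.asarray(scores, dtype=float))[::-1]   # most toxic first
    k = max(1, int(np.ceil(top_frac * len(scores))))          # cutoff rule assumed
    return float(scores[:k].mean())
```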
Figure 2: Joint distributions of different metrics for the superspreaders in the three largest communities of the similarity network. Color intensity indicates the number of users in a bin, with red regions highlighting peaks. (a) Coordination score and toxicity of users, with the dotted line (median coordination) indicating the threshold used to label a superspreader as “coordinated”; (b) toxicity of users and weighted average of their neighborhood’s toxicity; (c) cluster-normalized (least to most extreme) political leaning of users and their toxicity; (d) political leaning of users and weighted average of their neighborhood’s leaning.

Fig. 2(b) depicts the joint PDF of each coordinated user's toxicity against the average toxicity of their neighboring users, where each neighbor is weighted by its similarity (i.e., edge weight) to the user. The displayed PDF exposes observable toxicity-based homophily in the CON cluster and, to a lesser extent, in the TVT cluster, suggesting that users with comparable retweet behaviors within these clusters also exhibit similar toxicity levels. To confirm this intuition, we can consider the subgraph corresponding to each cluster and estimate its assortativity [37] with respect to the toxicity of the users within, resulting in a score of 0.54 for the CON cluster, 0.30 for the TVT cluster, and 0.06 for the LAB cluster. Shuffling the toxicity scores among users of the same cluster allows us to validate these estimates further [38]; after 10,000 random shuffles, we obtain an average assortativity coefficient approximately equal to zero and a Z-score \(\gg 1\) for the observed coefficients of the CON and the TVT clusters.
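The sketch below illustrates this validation: a numeric (toxicity) assortativity over a cluster subgraph and a Z-score against within-cluster shuffles. Treating the backbone subgraph as unweighted is a simplifying assumption of this sketch.

```python
import numpy as np
import networkx as nx

def toxicity_assortativity(B: nx.Graph, tox: dict) -> float:
    """Pearson correlation of toxicity across edge endpoints (numeric assortativity)."""
    x, y = zip(*[(tox[u], tox[v]) for u, v in B.edges])
    xs = np.concatenate([x, y])          # undirected graph: count each edge in both directions
    ys = np.concatenate([y, x])
    return float(np.corrcoef(xs, ys)[0, 1])

def shuffle_zscore(B: nx.Graph, tox: dict, n_shuffles: int = 10_000, seed: int = 0) -> float:
    """Z-score of the observed assortativity against toxicity shuffles within the cluster."""
    observed = toxicity_assortativity(B, tox)
    rng = np.random.default_rng(seed)
    nodes, values = list(B), np.array([tox[n] for n in B])
    null = np.empty(n_shuffles)
    for i in range(n_shuffles):
        rng.shuffle(values)
        null[i] = toxicity_assortativity(B, dict(zip(nodes, values)))
    return (observed - null.mean()) / null.std()
```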
In Fig. 2(c), we present the joint distribution of user toxicity juxtaposed with a normalized political leaning score. Interestingly, the distribution pertaining to the coordinated users in the CON cluster suggests that the toxicity they express and their political alignment are negatively correlated; in fact, for the two metrics we obtain a Pearson's correlation coefficient of \(\rho=-0.44\), statistically validated with a \(t\)-test (\(p<0.001\)) against the hypothesis of no correlation. For additional context, Fig. 2(d) illustrates the relationship between the political leaning of coordinated users and their neighbors. The distributions for the LAB and TVT clusters are mostly centered around a single peak; this is especially evident in the latter, where users predominantly employ non-polarized hashtags and thus form a distinct peak around zero. On the other hand, the distribution of the CON cluster is characterized by two separate peaks, one close to 0.5 - similarly to the LAB cluster, although with opposite alignment - and the other close to zero. This result suggests that the CON cluster might effectively house at least two sub-communities, as denoted by the two observed peaks. The peak around 0.5 may comprise coordinated users sharing tweets with a distinct political connotation, which is determined by hashtags often co-occurring with the conservative hashtags of polarity (+1) used for data collection (Table 1). On the other hand, the coordinated users distributed around zero may still be sharing tweets with a clear conservative perspective; however, their focus might be on topics characterized by hashtags that are not tightly linked to our predefined set of conservative hashtags. This observed within-cluster bi-modality is further strengthened by our previous analysis on the relationship between polarity and toxicity: the users in these potential sub-communities not only share tweets with different political content, as suggested by their hashtag usage, but also with different linguistic content, as suggested by the toxicity they express. Specifically, users sharing more distinctly politically aligned tweets do so with seemingly non-toxic language. This result further highlights the nuanced behavior of coordinated users with regard to the topics they promote and how they promote them from a linguistic viewpoint.
### Toxicity in coordinated and non-coordinated users
In our subsequent analysis, we sought to measure the difference in toxicity exhibited by coordinated and non-coordinated users, applying to both the "user toxicity" definition we presented in Section 3.1. For this comparison, we only considered users with a minimum of 5 tweets or retweets, as including all users would generate a user toxicity distribution with a heavy positive skew for both groups, thus making the comparison less valuable for an assessment of the overall behavioral differences. Fig. 3(a) shows that the toxicity expressed by coordinated users remains relatively unchanged across different activity levels, suggesting that the more active coordinated users don't share content that is any more or less toxic. In contrast, increased activity in non-coordinated users appears to correlate with reduced toxic behavior.

Figure 3: Comparison between coordinated and non-coordinated users in terms of expressed toxicity, defined as the average of the top 10% most toxic original tweets or retweets. (a) Tweeting activity and user toxicity smoothed via a LOESS curve (the shaded region indicates the corresponding 95% CI), with the observed user toxicity distribution for both groups in miniature; (b) bootstrap distribution of the average user toxicity; (c) bootstrap distribution of the average user toxicity obtained by ignoring retweets.
Examining the user toxicity distributions for both user groups in Fig. 3(a) further highlights this difference: coordinated users display a distribution that is more sharply concentrated around its mean (\(\overline{x}=0.43,\hat{\sigma}=0.12\)), while non-coordinated users are characterized by larger fluctuations (\(\overline{x}=0.46,\hat{\sigma}=0.17\)). In addition, our results provide an indication that coordinated users manifest on average lower toxicity than their non-coordinated counterparts, which we have statistically validated (\(p<0.001\)) by means of the Anderson-Darling \(k\)-sample test with 10,000 simulations. We can quantify this discrepancy in average toxicity via bootstrap resampling to account for the inherent difference in sample size between the two groups. In Fig. 3(b), we report the distributions resulting from 50,000 bootstrap replicates, which confirm our previous observation that coordinated users are on average less toxic than the non-coordinated ones; for the former, we obtain \(\hat{\mu}=0.4269\) (95% CI: \([0.4238,0.4300]\)), while for the latter \(\hat{\mu}=0.46129\) (95% CI: \([0.45947,0.46308]\)). We argue that this result follows from the difference in tweeting activity of the two groups. In fact, since we have defined coordinated users as a subset of the most prolific retweeters in the dataset, the fact that their activity mostly consists of sharing other accounts' content suggests that coordinated behaviors may favor promoting content that is less toxic than average. To assess whether this result effectively stems from the difference in tweeting patterns, we repeated the same analysis upon exclusion of retweets, thus solely focusing on original tweets: out of the 5,450 coordinated users we initially identified, this filtering reduced the set of coordinated users to 4,515. The densities obtained via bootstrap reported in Fig. 3(c) indicate that extensive retweeting activity does play a role, and also yield an interesting observation: both coordinated and non-coordinated users tend to post original content with a higher toxicity level than the content they disseminate through retweets. In fact, we measure \(\hat{\mu}=0.5161\) (95% CI: \([0.5040,0.5281]\)) for coordinated users, and \(\hat{\mu}=0.4821\) (95% CI: \([0.4759,0.4883]\)) for the non-coordinated. In addition, our analysis highlights that coordinated users produce original tweets that are distinctly more toxic than those they spread via retweeting, whereas non-coordinated users tend to produce and retweet content of similar toxicity. Indeed, unlike our previous analysis that included retweets, the average toxicity of the coordinated users is now higher than that estimated for the non-coordinated. This suggests that coordinated behaviors aiming to maximize exposure might be strategically orchestrated to disseminate toxic content while remaining wary of triggering platform moderation mechanisms.
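A minimal sketch of the bootstrap estimate follows; the percentile construction of the 95% CI is an assumption, as other bootstrap interval variants would serve equally well.

```python
import numpy as np

def bootstrap_mean_ci(user_tox, n_boot: int = 50_000, seed: int = 0):
    """Bootstrap distribution of the average user toxicity with a 95% percentile CI."""
    user_tox = np.asarray(user_tox, dtype=float)
    rng = np.random.default_rng(seed)
    n = len(user_tox)
    means = np.array([rng.choice(user_tox, size=n, replace=True).mean()
                      for _ in range(n_boot)])
    lo, hi = np.percentile(means, [2.5, 97.5])
    return means.mean(), (lo, hi)
```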
### Influence of coordination on toxicity
We further examine whether coordinated activity directly affects the toxic behaviors of non-coordinated users. To achieve this, we classify the tweets each non-coordinated user has been involved in into two main categories: 'productions' (i.e., the original tweets the user has written) and 'interactions' (i.e., the tweets the user has retweeted or replied to). This classification allows us to encode user activity within the context of the electoral debate as a sequence of productions and interactions, which we can study from a toxicity viewpoint. As we aim to investigate whether interacting with toxic content influences the toxicity of the following productions, for each user we discard the first sequence of tweets if they are original productions and the last sequence if they are interactions. In addition, to assess the role of CB, we only focus on sequences of interactions that involve only coordinated or non-coordinated users. Finally, we assign to each sequence of productions a toxicity score equal to the average toxicity of its tweets. In Fig. 4(a), we report the density of the average toxicity produced by non-coordinated users, obtained via bootstrap resampling, upon interacting with both groups of users. The resulting estimates suggest that non-coordinated users who have exclusively interacted with coordinated behavior tend to manifest slightly higher toxicity levels than those who haven't at all; we obtain \(\hat{\mu}=0.20274\) (95% CI: \([0.20124,0.20425]\)) for the former, and \(\hat{\mu}=0.18918\) (95% CI: \([0.18794,0.19042]\)) for the latter.
Fig. 4(b) displays how factoring in the toxicity of the content a user has interacted with affects the previous outcome. In this analysis, we consider a sequence of interactions to be 'toxic' if all the tweets that constitute it have a toxicity score above 0.6; this is the threshold suggested by the Perspective API to classify a tweet as toxic, as it indicates with reasonable confidence that more than half the readers would classify that message as such. In contrast, a sequence is considered 'non-toxic' if all of its tweets have a toxicity score below 0.6. Finally, we discard all sequences
with a mix of toxic and non-toxic tweets. Our results indicate that the level of toxicity a user expresses upon interacting with toxic content doesn't significantly change when the author of that content is a coordinated user (the two samples originate from the same population with \(p=0.093\), Anderson-Darling \(k\)-sample test). In fact, the two bootstrap estimates overlap: \(\hat{\mu}=0.2374\) (95% CI: \([0.2293,0.2455]\)) following interactions with coordinated users, and \(\hat{\mu}=0.2486\) (95% CI: \([0.2409,0.2561]\)) with non-coordinated users. On the other hand, we measure a statistically significant difference (\(p<0.001\)) in the case of interactions with non-toxic tweets; however, the increase is only minimal: \(\hat{\mu}=0.20127\) (95% CI: \([0.19970,0.20279]\)) following interactions with coordinated users, and \(\hat{\mu}=0.18694\) (95% CI: \([0.18570,0.18819]\)) with non-coordinated users. This suggests that the observed increase in produced toxicity should likely be attributed to the nature of the users' preferences and of the tweets shared by coordinated users. In fact, as coordinated efforts are typically orchestrated to disseminate and boost specific agendas, the tweets they spread might be more polarizing or controversial in character rather than inherently toxic. To investigate this further, we consider the political alignment of the content a user has interacted with and compare it to the user's own alignment: if the average leaning of the interactions has an opposite sign to that of the user, we consider that sequence of interactions to have 'opposite leaning', otherwise 'same leaning'. To avoid ties, we exclude from this analysis all users with a political leaning score equal to 0. Surprisingly, Fig. 4(b) shows that users interacting with content of opposite political alignment exhibit only marginally higher toxicity levels than those interacting with politically aligned content. In addition, these findings point towards coordinated behavior having only a minor role in affecting the toxicity of tweets produced by non-coordinated users: both the toxicity level and the political content of the tweets being shared seemingly have a larger impact than the coordination degree of their author. In fact, our measurements suggest that the 'volume' of tweets, which is characteristic of coordinated behaviors, has a less pronounced effect on the toxic behavior of non-coordinated users than the political message they convey and how they convey it from a toxicity viewpoint.

Figure 4: Average toxicity of tweets produced by non-coordinated users, obtained with bootstrap resampling. (a) Distributions of the average toxicity produced following exclusive interactions with non-coordinated users or coordinated users; (b) estimates with their 95% CI obtained by factoring in the toxicity of the interactions, using a score of 0.6 as the threshold to label a tweet as ‘toxic’, and their political leaning.

Figure 5: Hourly average toxicity of the tweets produced by non-coordinated users compared with that of the tweets by coordinated users they have interacted with, overlaid with LOESS curves (the shaded regions indicate their respective 95% CI) for easier visualization. The size and visibility of a point are proportional to the number of tweets observed within the corresponding hour. On the right side, empirical distributions of the two metrics across the entire time period.
An alternative approach we can employ to explore how CB might potentially influence toxic production is to study the phenomenon from a strictly temporal perspective. In this regard, we construct two time series spanning the entire time frame of our data: one is built by sequencing the hourly average toxicity of original tweets produced by non-coordinated users, while the other is built by averaging the toxicity of the tweets they have interacted with that have been shared by coordinated users. Fig. 5 shows that while the former appears stable and concentrated around a small range of values, the latter is characterized by evident oscillations. This hints at coordinated users adopting a noticeably more toxic and non-stationary tone around specific events in the campaign (such as TV debates between political leaders) and a less toxic one in others. As we have suggested in Section 3.2, a possible explanation for this behavior might reside in CB being specifically employed to inject inflammatory content or trigger toxic responses while simultaneously avoiding the platform moderation mechanisms.
Additionally, from Fig. 5 we can observe that the two time series nearly mirror each other in proximity to election day, which might serve as compelling evidence of one influencing the other. To verify this, we can quantitatively measure the information flow between the two by applying the transfer entropy method [39], which is based on the concept of entropy. Unlike other methods such as Granger causality [40], transfer entropy can capture not only non-linear dependencies between two time series but also the dominant direction of the flow, by estimating the net gain in information about the future observations of one derived from the past observations of the other. As entropy requires discrete data, we discretize the average toxicity scores of the two time series and assign them to 4 bins with bounds equal to the quantiles at \(\mathbf{p}=(0.05,0.5,0.95)\) of their respective empirical distributions. The selection of these quantiles is made to emphasize the tails of the distributions, as they include the least and most toxic observations; to this end, we compute the Rényi transfer entropy with a weighting parameter of \(q=0.5\), which puts more weight on the tails when calculating transfer entropy [41]. Our findings indicate that the toxicity shared via CB affects the toxic tendencies of non-coordinated users and that the information flow from the former to the latter is statistically relevant (\(p=0.034\)), as tested using the method proposed by Dimpfl and Peter (2013) [42] and implemented by Behrendt _et al._ (2019) [41], whereas that in the opposite direction is not (\(p=0.78\)). This is an additional indication of CB's role, suggesting that even passive interactions with coordinated activities can significantly shape the toxicity levels expressed by non-coordinated users.
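To illustrate the directionality analysis, the sketch below discretizes the two hourly series with the quantile bins described above and computes a plug-in Shannon transfer entropy with history length 1; it is a simplified stand-in for the Rényi transfer entropy estimator and the significance test of [41, 42] used in our analysis, not a reimplementation of them. The series names in the usage comment are placeholders.

```python
import numpy as np

def discretize(x, probs=(0.05, 0.5, 0.95)):
    """Assign each observation to one of 4 bins bounded by the given quantiles."""
    x = np.asarray(x, dtype=float)
    return np.digitize(x, np.quantile(x, probs))

def transfer_entropy(source, target) -> float:
    """Plug-in Shannon transfer entropy (history length 1), source -> target, in bits."""
    s, t = discretize(source), discretize(target)
    trip = np.stack([t[1:], t[:-1], s[:-1]], axis=1)        # (x_{k+1}, x_k, y_k)
    n = len(trip)
    p_xyz = {}
    for row in map(tuple, trip):
        p_xyz[row] = p_xyz.get(row, 0.0) + 1.0 / n
    def marg(keys):
        out = {}
        for k, p in p_xyz.items():
            kk = tuple(k[i] for i in keys)
            out[kk] = out.get(kk, 0.0) + p
        return out
    p_xy, p_xx, p_x = marg([1, 2]), marg([0, 1]), marg([1])
    te = 0.0
    for (x1, x0, y0), p in p_xyz.items():
        te += p * np.log2(p * p_x[(x0,)] / (p_xy[(x0, y0)] * p_xx[(x1, x0)]))
    return te

# te_cb_to_noncb = transfer_entropy(cb_hourly_tox, noncb_hourly_tox)   # dominant direction
# te_noncb_to_cb = transfer_entropy(noncb_hourly_tox, cb_hourly_tox)
```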
Our findings expose a marginal yet measurable shift in the production of toxic content when non-coordinated users engage with CB. However, this increase is likely attributable to the nature of the content being shared, suggesting that the 'character' of the message might be more influential than the extent of coordination of the efforts that channeled it. Additionally, our results are somewhat counterintuitive with respect to common assumptions regarding political polarization.
## 4 Conclusions
In the context of the 2019 UK General Elections, we systematically analyze the interplay between Coordinated Behavior (CB) and content toxicity on Twitter. Drawing from a dataset of 11 million tweets from 1 million unique users, we aim to better understand the subtleties of this relationship on a significant digital communication platform. Our findings underscore that users with higher coordination scores predominantly circulate content with diminished toxicity levels. This trend is pronounced in politically affiliated clusters, indicating that coordination, while strategic, does
not necessarily result in promoting harmful content. The primary aim of these coordinated users is amplification and influence rather than explicit toxicity dissemination. Clear patterns emerge when analyzing coordinated versus non-coordinated users: both groups exhibit comparable average toxicity levels. However, coordinated users maintain consistent toxicity levels regardless of their activity volume. Conversely, increased activity among non-coordinated users typically aligns with decreased toxic behavior, suggesting a more intentional toxicity regulation by coordinated entities.
In examining the non-coordinated user group, we find that exposure to CB subtly heightens their toxic behavior. Notably, this rise is not solely driven by the volume of toxic exposure but is significantly influenced by the character of the content. Political alignment, while a factor in toxicity, does not show a direct linear relation with toxicity intensity. From a temporal perspective, using Rényi transfer entropy reveals the immediacy of CB's impact on non-coordinated user toxicity. A discernible flow from exposure to production suggests CB's subtle influence in shaping regular user behaviors.
In conclusion, our study highlights the nuanced manner in which CB functions online. While coordination does not directly spread toxicity, its influence on user behaviors is evident. The interplay between content nature and its dissemination strategy is crucial. Future research should investigate the motivations behind coordinated activities, the socio-political environments nurturing them, and their broader effects on digital conversations.
## Tables
| Hashtag | Leaning | Users | Tweets |
| --- | --- | --- | --- |
| #GE2019 | N (0) | 436,356 | 2,640,966 |
| #GeneralElection19 | N (0) | 104,616 | 274,095 |
| #GeneralElection2019 | N (0) | 240,712 | 783,805 |
| #VoteLabour | L (-1) | 201,774 | 917,936 |
| #VoteLabour2019 | L (-1) | 55,703 | 265,899 |
| #ForTheMany | L (-1) | 17,859 | 35,621 |
| #ForTheManyNotTheFew | L (-1) | 22,966 | 40,116 |
| #ChangeIsComing | L (-1) | 8,170 | 13,381 |
| #RealChange | L (-1) | 78,285 | 274,254 |
| #VoteConservative | C (+1) | 52,642 | 238,647 |
| #VoteConservative2019 | C (+1) | 13,513 | 34,195 |
| #BackBoris | C (+1) | 36,725 | 157,434 |
| #GetBrexitDone | C (+1) | 46,429 | 168,911 |
| Total | | 668,312 | 4,983,499 |

Table 1: Data collected via hashtags [20]. Neutral (N) hashtags have been assigned a political leaning score of \(0\), whereas hashtags linked to the Labour (L) and Conservative (C) parties have been assigned a score of \(-1\) and \(+1\), respectively.
| Account | Leaning | Tweets (production) | Retweets (interactions) | Replies (interactions) |
| --- | --- | --- | --- | --- |
| @jeremycorbyn | L (-1) | 788 | 1,759,823 | 414,158 |
| @UKLabour | L (-1) | 1,002 | 325,219 | 79,932 |
| @BorisJohnson | C (+1) | 454 | 284,544 | 382,237 |
| @Conservatives | C (+1) | 1,398 | 151,913 | 169,736 |
| Total | | 3,642 | 2,521,499 | 1,046,063 |

Table 2: Data collected via accounts [20]. The two accounts linked to the Labour party (L) have been assigned a political leaning score of \(-1\), whereas the two linked to the Conservative party (C) a score of \(+1\). |
2301.11909 | Quantized Deep Path-following Control on a Microcontroller | Model predictive Path-Following Control (MPFC) is a viable option for motion
systems in many application domains. However, despite considerable progress on
tailored numerical methods for predictive control, the real-time implementation
of predictive control and MPFC on small-scale autonomous platforms with
low-cost embedded hardware remains challenging. While usual stabilizing MPC
formulations lead to static feedback laws, the MPFC feedback turns out to be
dynamic as the path parameter acts as an internal controller variable. In this
paper, we leverage deep learning to implement predictive path-following control
on microcontrollers. We show that deep neural networks can approximate the
dynamic MPFC feedback law accurately. Moreover, we illustrate and tackle the
challenges that arise if the target platform employs limited precision
arithmetic. Specifically, we draw upon a post-stabilization with an additional
feedback law to attenuate undesired quantization effects. Simulation examples
underpin the efficacy of the proposed approach. | Pablo Zometa, Timm Faulwasser | 2023-01-27T18:48:54Z | http://arxiv.org/abs/2301.11909v1 | # Quantized Deep Path-following Control on a Microcontroller
###### Abstract
Model predictive Path-Following Control (MPFC) is a viable option for motion systems in many application domains. However, despite considerable progress on tailored numerical methods for predictive control, the real-time implementation of predictive control and MPFC on small-scale autonomous platforms with low-cost embedded hardware remains challenging. While usual stabilizing MPC formulations lead to static feedback laws, the MPFC feedback turns out to be dynamic as the path parameter acts as an internal controller variable. In this paper, we leverage deep learning to implement predictive path-following control on microcontrollers. We show that deep neural networks can approximate the dynamic MPFC feedback law accurately. Moreover, we illustrate and tackle the challenges that arise if the target platform employs limited precision arithmetic. Specifically, we draw upon a post-stabilization with an additional feedback law to attenuate undesired quantization effects. Simulation examples underpin the efficacy of the proposed approach.
## I Introduction
Nonlinear Model Predictive Control (NMPC) is a control method that can handle nonlinear system dynamics as well as input and state constraints. In its base variant, NMPC for setpoint stabilization yields a static feedback law. Another variant is Model predictive Path-Following Control (MPFC), which has been successfully applied to motion control of robots to precisely follow a geometric reference path [1, 2, 3]. In MPFC, the considered reference is a geometric path, and the timing along the path is computed at the run-time of the controller. Hence, and in contrast to NMPC for setpoint stabilization, MPFC is a dynamic feedback strategy, as the reference position is an internal controller memory [4].
An often cited disadvantage of NMPC is its high computational cost, which significantly limits its use in low-cost computing hardware like MicroController Units (MCU). The Optimization Engine (OpEn) [5] and acados [6], two popular state-of-the-art NMPC solvers, can efficiently run on embedded hardware like a Raspberry Pi (a single-board computer). However, at the time of this writing, none of them can run out of the box on 32-bit MCUs.
To overcome the high computational demands of NMPC, the use of deep neural networks as a way to quickly find an approximate solution to the NMPC problem has been proposed [7, 8, 9]. In particular, [8] explores a robust multi-stage NMPC on an MCU using a Deep Neural Network (DNN) with single-precision floating-point arithmetic during network inference.
Moreover, to further increase the efficiency of DNNs, the use of quantization--i.e., storing the network parameters using fixed-point representation instead of floating point--has been explored [10]. Compared to a regular DNN, a quantized DNN executes much faster, requires less memory, and is more energy efficient--there is the downside of some loss of numerical accuracy [10].
The present paper investigates the use of quantized deep neural networks for model predictive path-following control of mobile robots. Our main contribution is two-fold: first, we propose a way to generate the training set that takes into account the path to be followed, and second, we extend the DNN with a simple controller to make up for errors introduced by the quantized DNN approximation.
Using the proposed approach with hardware-in-the-loop simulations running on an MCU, we show that a quantized deep neural network requiring less than \(5\) kB of storage memory achieves a good path following performance while being several orders of magnitude faster than OpEn.
The remainder of the paper is organized as follows: Section II recalls MPFC applied to a mobile robot. Section III discusses quantized DNNs. Section IV introduces an approach to efficiently approximate the MPFC problem using quantized DNN, followed by the results (Section V) and conclusions (Section VI).
## II Path following control of a mobile robot
This section summarizes the main idea of MPFC according to [11], and its application to differential drive robots [3].
### _System Description_
Fig. 1: Left: differential drive robot and its coordinate systems. Right: the path at scale, an ellipse. The robot’s left and right wheels are marked \(L\) and \(R\), respectively.

Fig. 1 shows a schematic of a differential drive robot. The global (inertial) frame is defined by the axes \(XY\), whereas the local frame attached to the robot is defined by the axes \(\hat{X}\hat{Y}\). The position of the robot in the global frame is represented by the Cartesian coordinates of point \(q\) (the origin of the local frame). The robot's pose \(\xi\) in the inertial frame is represented by its Cartesian position \(q=[q_{x}~{}q_{y}]^{\intercal}\) and orientation \(\varphi\), that is \(\xi=[q_{x}~{}q_{y}~{}\varphi]^{\intercal}\). We represent the robot dynamics as the rate of change of the pose in terms of the robot's forward speed \(s\), and its angular velocity \(\omega\):
\[\dot{\xi}=f(\xi,u)=\begin{bmatrix}s\cos(\varphi)\\ s\sin(\varphi)\\ \omega\end{bmatrix},~{}~{}~{}\xi(0)=\xi_{0}, \tag{1}\]
with \(\xi\in\mathcal{X}\subseteq\mathbb{R}^{3}\), and \(u=[s~{}~{}\omega]^{\intercal}\in\mathcal{PC}(\mathcal{U})\subset\mathbb{R}^{2}\). We use \(\mathcal{PC}(\mathcal{U})\) to denote that the inputs are piece-wise continuous and take values from a compact set \(\mathcal{U}\).
### _The State-Space Path-Following Problem_
We recall the path-following problem in the state space of the robot model (1) as introduced by [11]. The path-following problem aims at making the system (1) follow a geometric reference without explicit timing requirements, i.e., _when to be where_ on the path is not specified. The reference is given by
\[\mathcal{P}=\{\xi\in\mathbb{R}^{3}~{}|~{}\exists~{}\theta\in\mathbb{R}\mapsto \xi=p(\theta)\}.\]
The variable \(\theta(t)\in\mathbb{R}\) is the path parameter, and \(p(\theta(t))\in\mathbb{R}^{3}\) is a parameterization of \(\mathcal{P}\). Note that although \(\theta\) is dependent on time, its time evolution \(t\mapsto\theta\) is not specified. Thus, the control inputs \(u\in\mathcal{PC}(\mathcal{U})\) and the timing \(\theta:\mathbb{R}^{+}_{0}\rightarrow\mathbb{R}^{+}_{0}\) are chosen such that they follow the path as closely as possible.
**Problem 1**: _(State-space path following with speed assignment)_
1. Convergence to the path: the robot's state \(\xi\) converges to the path \(\mathcal{P}\) such that \[\lim_{t\rightarrow\infty}\|\xi(t)-p(\theta)\|=0.\]
2. Constraint satisfaction: the constraints on the states \(\xi\in\mathcal{X}\) and inputs \(u\in\mathcal{U}\) are satisfied at all times.
3. Velocity convergence: the path velocity \(\dot{\theta}\) converges to a predefined profile such that \[\lim_{t\rightarrow\infty}\|\dot{\theta}(t)-v_{r}(t)\|=0.\]
Here we consider path parametrizations of the form
\[p(\theta)=[p_{x}(\theta)~{}~{}p_{y}(\theta)~{}~{}p_{\varphi}( \theta)]^{\intercal}, \tag{2}\] \[p_{\varphi}(\theta)=\arctan\left(\frac{p^{\prime}_{y}}{p^{\prime }_{x}}\right),~{}p^{\prime}_{x}=\frac{\partial p_{x}}{\partial\theta},~{}p^{ \prime}_{y}=\frac{\partial p_{y}}{\partial\theta},\]
where \(p_{x}(\theta)\) and \(p_{y}(\theta)\) are at least twice continuously differentiable (see [3]). We denote \(p_{xy}=[p_{x}~{}p_{y}]^{\intercal}\) as the vector of Cartesian coordinates of the path.
The path parameter \(\theta\) is considered a virtual state, which is controlled by the virtual input \(v\). Here the dynamics of \(\theta\) are chosen as a single integrator:
\[\dot{\theta}=v,~{}~{}\theta(0)=\theta_{0},\]
where \(v\in\mathcal{PC}(\mathcal{V})\), \(\mathcal{V}\doteq[0,\bar{v}]\), and \(\bar{v}\in\mathbb{R}\).
The path following problem is formulated using the augmented system
\[\dot{z}=f(z,w)=\begin{bmatrix}\dot{q}_{x}\\ \dot{q}_{y}\\ \dot{\varphi}\\ \dot{\theta}\end{bmatrix}=\begin{bmatrix}s\cos(\varphi)\\ s\sin(\varphi)\\ \omega\\ v\end{bmatrix},\]
with the augmented state vector \(z=[\xi^{\intercal}~{}\theta]^{\intercal}=[q_{x}~{}q_{y}~{}\varphi~{}\theta]^{ \intercal}\in\mathcal{Z}=\mathcal{X}\times\mathbb{R}^{+}_{0}\) and the augmented input vector \(w=[u^{\intercal}~{}v]^{\intercal}=[s~{}\omega~{}v]^{\intercal}\in\mathcal{PC}( \mathcal{U}\times\mathcal{V})\subset\mathbb{R}^{3}\).
System (1) is _differentially flat_, and \([q_{x}~{}q_{y}]^{\intercal}\) is one of its flat outputs [12]. Therefore there is an input \(u_{r}=[s_{r}~{}\omega_{r}]\) which guarantees that path (2) is followed by the system.
The vector \(u_{r}\) is used as a reference for the input vectors and can be built by observing that the first two equations of system (1) satisfy \(s^{2}=\dot{q}_{x}^{2}+\dot{q}_{y}^{2}\), and thus:
\[\begin{split} s_{r}(\theta,v)&=\sqrt{\left(\frac{ \mathsf{d}p_{x}(\theta(t))}{\mathsf{d}t}\right)^{2}+\left(\frac{\mathsf{d}p_{y }(\theta(t))}{\mathsf{d}t}\right)^{2}}\\ &=v\sqrt{\left(p^{\prime}_{x}\right)^{2}+\left(p^{\prime}_{y} \right)^{2}}.\end{split} \tag{3}\]
Furthermore, from the last equation of system (1) we have \(\omega=\dot{\varphi}\), which yields
\[\begin{split}\omega_{r}(\theta,v)&=\frac{\mathsf{d}p_{\varphi}(\theta(t))}{\mathsf{d}t}\\ &=v\left(\left(p^{\prime}_{x}\right)^{2}+\left(p^{\prime}_{y}\right)^{2}\right)^{-1}\left(p^{\prime}_{x}p^{\prime\prime}_{y}-p^{\prime}_{y}p^{\prime\prime}_{x}\right),\\ &\text{with}~{}~{}p^{\prime\prime}_{x}=\frac{\partial^{2}p_{x}}{\partial\theta^{2}},~{}~{}\text{and}~{}~{}p^{\prime\prime}_{y}=\frac{\partial^{2}p_{y}}{\partial\theta^{2}}.\end{split} \tag{4}\]
Further details on the derivation can be found in [13, 3].
### _Model Predictive Path Following Control (MPFC)_
This section is based on the state-space MPFC scheme proposed in [11]. For paths defined in output spaces, we refer to [4, 2].
The sampling period is \(\delta>0\), and the prediction horizon is \(T=N\delta\), with \(N\in\mathbb{N}\). The extended state at the current sampling time \(t_{k}=k\delta\) is denoted \(z_{k}=\begin{bmatrix}\xi(t_{k})&\theta(t_{k})\end{bmatrix}\) and the extended control input is \(w=\begin{bmatrix}u&v\end{bmatrix}\). We consider the stage cost
\[\ell(z,w)=\left\|\begin{matrix}\xi-p(\theta)\\ \theta\end{matrix}\right\|_{Q}^{2}+\left\|\begin{matrix}u-u_{r}(\theta,v)\\ v-v_{r}\end{matrix}\right\|_{R}^{2},\]
with \(Q=Q^{\intercal}\succeq 0\) and \(R=R^{\intercal}\succ 0\), i.e., symmetric positive (semi)definite diagonal matrices. The Optimal Control Problem (OCP) to be solved repeatedly at each sampling instant \(t_{k}\) and using \(z_{k}\) as parametric data reads
\[\begin{split}\mathbf{w}^{*}=\underset{w\in\mathcal{PC}( \mathcal{W})}{\text{arg min}}&\int_{0}^{T}\ell(z(\tau),w(\tau))\mathsf{d}\tau\\ \text{subject to}&\dot{z}(\tau)=f(z(\tau),w(\tau)),~{}~{}~{}z(0)=z_{k}, \\ & z(\tau)\in\mathcal{Z},~{}w(\tau)\in\mathcal{W}.\end{split} \tag{5}\]
Although this OCP is formulated in continuous time, our MPFC _implementation_ is done in discrete time with
\(\{w_{0}^{*},\ldots,w_{N-1}^{*}\}\in\mathcal{W}^{N}\) a sequence of \(N\) input vectors. Typically, in MPC we only apply to the controlled system the first vector \(w=w_{0}^{*}\) in the sequence \(\textbf{w}^{*}\). The MPFC feedback controller based on (5) can be expressed as the function
\[w=\begin{bmatrix}u\\ v\end{bmatrix}=\mathbb{M}(z). \tag{6}\]
Observe that \(w\) entails the robot command \(u\) and the virtual control \(v\), which controls the evolution of the path parameter \(\theta\), cf. (II-B). Hence only \(u\) is applied to the robot.
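To make the receding-horizon structure concrete, the following is a minimal CasADi sketch of a direct multiple-shooting transcription of OCP (5) with an RK4 discretization, written for an elliptic example path (cf. Fig. 1). The horizon, weights, input bounds, speed assignment, and solver choice are illustrative assumptions and do not reflect the configuration used in this work; in our approach, this optimization is precisely what the QDNN is trained to replace.

```python
# A minimal sketch of OCP (5) via direct multiple shooting with CasADi and RK4.
import casadi as ca
import numpy as np

N, dt = 20, 0.1                                 # horizon and sampling period (assumed)
a_ax, b_ax = 2.0, 1.0                           # semi-axes of an example elliptic path (assumed)
v_ref, v_bar = 0.5, 1.0                         # speed assignment v_r and bound on v (assumed)
Q = ca.diag(ca.DM([10.0, 10.0, 1.0, 0.0]))      # weight on [xi - p(theta); theta] (assumed)
R = ca.diag(ca.DM([1.0, 1.0, 1.0]))             # weight on [u - u_r; v - v_r] (assumed)

def f(z, w):                                    # augmented dynamics, z = [qx, qy, phi, theta]
    return ca.vertcat(w[0] * ca.cos(z[2]), w[0] * ca.sin(z[2]), w[1], w[2])

def rk4(z, w):                                  # one RK4 integration step of length dt
    k1 = f(z, w); k2 = f(z + dt / 2 * k1, w)
    k3 = f(z + dt / 2 * k2, w); k4 = f(z + dt * k3, w)
    return z + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def path(th):                                   # p(theta) and path derivatives, eq. (2)
    dpx, dpy = -a_ax * ca.sin(th), b_ax * ca.cos(th)
    ddpx, ddpy = -a_ax * ca.cos(th), -b_ax * ca.sin(th)
    p = ca.vertcat(a_ax * ca.cos(th), b_ax * ca.sin(th), ca.atan2(dpy, dpx))
    return p, (dpx, dpy, ddpx, ddpy)

def references(th, v):                          # u_r = [s_r, omega_r], eqs. (3)-(4)
    _, (dpx, dpy, ddpx, ddpy) = path(th)
    n2 = dpx**2 + dpy**2
    return v * ca.sqrt(n2), v * (dpx * ddpy - dpy * ddpx) / n2

opti = ca.Opti()
Z = opti.variable(4, N + 1)                     # predicted augmented states
W = opti.variable(3, N)                         # predicted inputs w = [s, omega, v]
z0 = opti.parameter(4)                          # current augmented state z_k

J = 0
for k in range(N):
    p_k, _ = path(Z[3, k])
    s_r, w_r = references(Z[3, k], W[2, k])
    e = ca.vertcat(Z[:3, k] - p_k, Z[3, k])
    du = ca.vertcat(W[0, k] - s_r, W[1, k] - w_r, W[2, k] - v_ref)
    J += dt * (ca.mtimes([e.T, Q, e]) + ca.mtimes([du.T, R, du]))
    opti.subject_to(Z[:, k + 1] == rk4(Z[:, k], W[:, k]))
    opti.subject_to(opti.bounded([-1.0, -2.0, 0.0], W[:, k], [1.0, 2.0, v_bar]))

opti.subject_to(Z[:, 0] == z0)
opti.subject_to(Z[3, :] >= 0)                   # theta stays non-negative
opti.minimize(J)
opti.solver("ipopt")

opti.set_value(z0, [a_ax, 0.0, np.pi / 2, 0.0]) # start on the path at theta = 0
w0 = opti.solve().value(W[:, 0])                # apply only the first input, cf. (6)
```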
## III Feedforward Neural Networks
Next, we recall the basics of how a function can be approximated by feedforward neural networks, the advantages of using deep architectures, and how to quantize them.
### _Deep Neural Networks_
The use of feedforward Neural Networks (NN) is motivated by their universal function approximation properties [14]. In particular, we are interested in approximating the MPFC feedback (6). Our goal is to train an NN that approximates \(\mathbb{M}(z)\) by defining the mapping \(w^{D}=\mathbb{D}(z;\Theta)\), where \(\Theta\) represents a set of \(N_{\Theta}\) unknown parameters, which are learned during _training_. Once we have a trained network, we can use \(\mathbb{D}(z;\Theta)\) to _infer_ the values of \(w^{D}\approx\mathbb{M}(z)\).
To train our network, we rely on a training data set
\[\mathcal{T}=\{\nu^{1},\nu^{2},\ldots,\nu^{N_{T}}\},\text{ with }\nu^{j}= \begin{bmatrix}z^{j}\\ \mathbb{M}(z^{j})\end{bmatrix}\in\mathbb{R}^{7},\]
\(j\in\mathcal{J}=\{1,\ldots,N_{T}\}\), and \(N_{T}\) is large enough. The training algorithm aims to find the values of \(\Theta\) that make \(\mathbb{D}(z^{j};\Theta)\approx\mathbb{M}(z^{j}),\forall\,j\in\mathcal{J}\) using some statistical measure like the Mean Squared Error (MSE). It is common to use a gradient-based optimization algorithm during training to minimize the MSE. The trained network is said to _generalize_ well if \(\mathbb{D}(z;\Theta)\) is still a good approximation of \(\mathbb{M}(z)\) for values of \(z\) not seen during training, in particular those relevant to the application.
In general, an NN consists of \(H+2\) layers: one input layer, one output layer, and \(H\geq 1\) hidden layers. Each layer \(k\) consists of \(n_{k}\) units called neurons. Commonly, if there are only one or two hidden layers, the network is referred to as _shallow_, otherwise, it is called a _Deep_ Neural Network (DNN). The advantage of a DNN, compared to a shallow network, is that it can approximate a function like (6) with similar accuracy but with fewer parameters \(N_{\Theta}\) as fewer neurons (and hence parameters) are considered per layer. We refer to [15] for details.
Starting with the input \(z=h^{0}\) as the first layer, the output of layer \(k=1,2,\ldots,H+1\) is
\[h^{k}=\beta(b^{k}+W^{k}h^{k-1}), \tag{7}\]
with \(b^{k}\in\mathbb{R}^{n_{k}}\) a vector called _bias_ and \(W^{k}\in\mathbb{R}^{n_{k}\times n_{k-1}}\) a matrix called _weights_, and the function \(\beta(\cdot)\) is a saturating _activation_ function. The last layer is the output layer \(w^{D}=h^{H+1}\). Note that \(\Theta=\{b^{1},W^{1},\ldots,b^{H+1},W^{H+1}\}\), and the number of parameters of the network is given by:
\[N_{\Theta}=\sum_{k=1}^{H+1}n_{k}(1+n_{k-1}).\]
For example, a network with \(1\) hidden layer would be described as \(w^{D}=\mathbb{D}(z;\Theta)=\beta(b^{2}+W^{2}\beta(b^{1}+W^{1}z))\).
A frequently used activation function is the Rectifying Linear Unit (ReLU) ([15, p. 171]), defined as \(\beta(h)=\max(\textbf{0},h)\), where \(\max(\cdot)\) is computed element-wise. Other common activation functions include the tangent hyperbolic and the sigmoid function.
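As an illustration, the recursion (7) can be written in a few lines of NumPy; the layer sizes below (a \(4\to 8\to 8\to 3\) network mapping \(z\) to \(w^{D}\)) and the random parameters are placeholders for a trained \(\Theta\).

```python
import numpy as np

def relu(h):
    return np.maximum(0.0, h)                      # element-wise max(0, h)

def dnn_forward(z, params, activation=relu):
    """Evaluate eq. (7): h^k = beta(b^k + W^k h^{k-1}), with h^0 = z and w^D = h^{H+1}.
    params is Theta as a list [(W^1, b^1), ..., (W^{H+1}, b^{H+1})]; as in (7), the
    activation is applied at every layer, including the output layer."""
    h = z
    for W, b in params:
        h = activation(b + W @ h)
    return h

# Example: random placeholder parameters for a 4 -> 8 -> 8 -> 3 network approximating (6).
rng = np.random.default_rng(0)
theta = [(rng.standard_normal((8, 4)), np.zeros(8)),
         (rng.standard_normal((8, 8)), np.zeros(8)),
         (rng.standard_normal((3, 8)), np.zeros(3))]
w_D = dnn_forward(np.zeros(4), theta)
```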
### _Network Training_
In practice, to find the set of parameters \(\Theta\) that make \(\mathbb{D}(\cdot)\) approximate \(\mathbb{M}(\cdot)\) sufficiently well, the higher-level set of so-called _hyper-parameters_ needs to be determined first. Common hyper-parameters include the network architecture (\(H\), \(n_{k}\), \(\beta\)), and the gradient-based optimization algorithm parameters (e.g., the step size, also called the _learning rate_) to name just a few. A suitable combination of hyper-parameters is typically determined experimentally [16].
It is helpful to normalize the training set to improve the numerical properties of the network. Here, we represent the training set as a matrix \(\mathcal{T}\in\mathbb{R}^{7\times N_{T}}\) for simplicity in notation. For each column \(j\), and row \(i\) of \(\mathcal{T}\) we have:
\[\bar{\nu}_{i}^{j}=N(\nu_{i}^{j};\mu_{i},\sigma_{i})=\frac{\nu_{i}^{j}-\mu_{i}}{ \sigma_{i}},\]
where \(\mu_{i}\) is the mean and \(\sigma_{i}\) is the standard deviation of row \(i\). Note that \(\nu^{j}\) represents column \(j\) of \(\mathcal{T}\). After applying this transformation, we obtain a normalized data set \(\mathcal{T}_{N}\) that has each row \(i\) with \(\bar{\mu}_{i}=0\) and \(\bar{\sigma}_{i}=1\). To recover the original set \(\mathcal{T}\), we apply the inverse transformation:
\[\nu_{i}^{j}=N^{-1}(\bar{\nu}_{i}^{j};\mu_{i},\sigma_{i})=\bar{\nu}_{i}^{j}\sigma_{i}+\mu_{i}.\]
These operations must be applied to the extended robot state \(z=\begin{bmatrix}\xi&\theta\end{bmatrix}\in\mathbb{R}^{4}\) and the extended input vector \(w^{D}=\begin{bmatrix}u&v\end{bmatrix}\in\mathbb{R}^{3}\) during inference. That is \(\bar{z}_{i}=N(z_{i};\mu_{i},\sigma_{i})\), for \(i=1,2,3,4\), and \(w_{i}^{D}=N^{-1}(\bar{w}_{i};\mu_{i+4},\sigma_{i+4})\), for \(i=1,2,3\) (refer to Fig. 3(a)).
### _Quantized DNN (QDNN)_
Quantization refers to storing the parameters of the network (weights and biases) as integer values. The main advantages are reduced memory required to store the \(N_{\Theta}\) parameters, faster execution, and higher energy efficiency during inference. The main disadvantage is the loss of accuracy in the inference [10].
It is common to use an \(8\)-bit integer representation (i8) to store the parameters set \(\Theta\). The network is trained first using floating point numbers often with single precision (32 bits). After the training is completed, the parameters \(\Theta\) are _quantized_ to an i8 approximation. There are different quantization methods [10]. Here we have used a uniform asymmetric quantization. That means that during inference,
the _normalized_ inputs \(\bar{z}\) to the network must be transformed from a floating-point representation to an integer one using
\[\hat{z}=Q(\bar{z};\tilde{a},\hat{b})=\mathrm{i}8(\tilde{a}\bar{z})+\hat{b}, \tag{8}\]
where \(\tilde{a}\) is a floating point scaling, \(\hat{b}\) is an integer offset, and \(\mathrm{i}8\) refers to a mapping from floating point to 8-bit integer representation. Similarly, the output of the network \(\hat{w}\) must be transformed from an 8-bit integer to a floating-point normalized output \(\bar{w}\), i.e., it must be _dequantized_ using
\[\bar{w}=Q^{-1}(\hat{w};\tilde{c},\hat{d})=\mathrm{f}32(\hat{w}-\hat{d})\tilde {c}, \tag{9}\]
where \(\tilde{c}\) is a floating point scaling, \(\hat{d}\) is an integer offset, and \(\mathrm{f}32\) refers to a mapping from an 8-bit integer to a single-precision floating-point representation. The scaling and offset parameters are determined during the quantization of \(\Theta\). Fig. 4a depicts how the robot state \(z\) (input to the network) and input vector \(w^{D}\) (output of the network) are numerically transformed.
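The following sketch of ours mirrors (8) and (9); the scaling and offset values in the example are made up, since in practice they are produced when \(\Theta\) is quantized.

```python
import numpy as np

def quantize(z_bar, a, b):
    # (8): z_hat = i8(a * z_bar) + b, with float scale a and integer offset b.
    q = np.rint(a * z_bar).astype(np.int32) + b
    return np.clip(q, -128, 127).astype(np.int8)   # saturate to the 8-bit range

def dequantize(w_hat, c, d):
    # (9): w_bar = f32(w_hat - d) * c, with float scale c and integer offset d.
    return (w_hat.astype(np.int32) - d).astype(np.float32) * c

# Example with arbitrary (placeholder) quantization parameters.
a, b = 42.0, 3
c, d = 0.031, -5
z_bar = np.array([0.3, -1.2, 0.7, 0.0], dtype=np.float32)
z_hat = quantize(z_bar, a, b)
w_bar = dequantize(np.array([12, -7, 100], dtype=np.int8), c, d)
```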
## IV QDNN-based MPFC
We now turn to a practical approach to approximate the MPFC problem presented in Section II using QDNNs as described in Section III. We denote this approach as MPFC-QDNN. This section also discusses how to augment the MPFC-QDNN with an online feedback controller to improve the accuracy of the path-following control. We denote this approach as MPFC-QDNN+P.
### _Generating a Training Set for MPFC_
Although it is possible to find a network that approximates \(\mathbb{M}(z)\ \forall\ z\in\mathcal{Z}\), this typically would require a network and set \(\mathcal{T}\) larger than necessary for the path-following problem. Under normal circumstances, a mobile robot following a path will mostly take poses \(\xi\) that are close to the reference path \(p(\theta)\). Based on this, a smaller set \(\mathcal{Z}_{C}\subset\mathcal{Z}\) can be used to significantly reduce the size of the network and the training set, without affecting the performance of the MPFC near the path. However, if the robot is driven far away from the path (e.g., due to large disturbances), the MPFC-QDNN may not be able to bring the robot back to following the path.
To generate a set \(\mathcal{T}\) appropriate for MPFC, we propose to use a corridor centered around the path (see Fig. 2). To build the set \(\mathcal{T}\), we select specific values of the path parameter \(\theta_{i}\), \(i=0,1,...,N_{p}\), and compute the path vector \(p(\theta_{i})\). At each \(\theta_{i}\), we build a corridor \(C(\theta_{i})\in\mathbb{R}^{3\times N_{c}}\) using a set of \(N_{c}\) points in the vicinity of \(p(\theta_{i})\).
We propose a corridor in the form of a cuboid centered around \(p(\theta_{i})\) along the orthonormal vectors \(\vec{t},\vec{n},\vec{o}\) (see Fig. 2), with width \(2c_{W}\), length \(2c_{L}\), and height \(2c_{H}\). The points \(C^{j}(\theta)=[p_{t}^{j}\ p_{n}^{j}\ p_{o}^{j}]^{\mathrm{ T}}\) are equidistant along each axis, with
\[\begin{bmatrix}-c_{W}\vec{t}\\ -c_{L}\vec{n}\\ -c_{H}\vec{o}\end{bmatrix}\leq\begin{bmatrix}p_{t}^{j}\\ p_{n}^{j}\\ p_{o}^{j}\end{bmatrix}\leq\begin{bmatrix}c_{W}\vec{t}\\ c_{L}\vec{n}\\ c_{H}\vec{o}\end{bmatrix}.\]
The set \(\mathcal{Z}_{C}\) has \(N_{p}N_{c}\) elements. The corridor can be defined in many different ways (e.g., using randomly selected points inside an ellipsoid). Here we have presented one way that has worked well in our experiments (a cuboid grid with equidistant points). Determining the optimal way to construct the set \(\mathcal{Z}_{C}\) is beyond the scope of this work.
The size of the corridor plays an important role in how well the MPFC-QDNN can follow the path in practice. If the corridor is too narrow, the network is not able to follow the path at all, due to inevitable errors inherent in any feedback control system. A broad corridor is thus preferred. However, that may require more data points in the set, and perhaps a larger network, to make the approximation \(\mathbb{D}(\cdot)\) useful.
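As an illustration (ours, not the authors' code), a corridor grid around one path point could be generated as follows; the frame construction follows Fig. 2, and the half-sizes and grid counts are arguments rather than the values used in the paper.

```python
import numpy as np

def corridor_points(p, half_t, half_n, half_o, n_t, n_n, n_o):
    """Equidistant grid of poses inside a cuboid centred at p = [p_x, p_y, p_phi].

    half_t, half_n, half_o are the half-sizes along the tangential, normal and
    orientation axes; n_t, n_n, n_o are the number of grid points per axis."""
    px, py, pphi = p
    t_vec = np.array([np.cos(pphi), np.sin(pphi)])    # tangential unit vector
    n_vec = np.array([-np.sin(pphi), np.cos(pphi)])   # normal unit vector
    pts = []
    for dt in np.linspace(-half_t, half_t, n_t):
        for dn in np.linspace(-half_n, half_n, n_n):
            for dphi in np.linspace(-half_o, half_o, n_o):
                x, y = np.array([px, py]) + dt * t_vec + dn * n_vec
                pts.append([x, y, pphi + dphi])
    return np.array(pts).T   # shape (3, n_t * n_n * n_o)

# Z_C is then the union of these corridors over N_p samples of theta along the path.
```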
### _Path-Following Error_
Although MPFC can follow the reference path \(\mathcal{P}\) very accurately, at any time \(t\) there might be an error \(e\) in the robot's Cartesian position \(q(t)\) with respect to the reference point in the path \(p(\theta(t))\). In the \(XY\) coordinates, the error is given by:
\[e_{XY}(q(t),p_{xy}(\theta(t)))=q(t)-p_{xy}(\theta(t)).\]
This error vector can be expressed in the basis formed by the orthonormal vectors \(\vec{t}(p_{\varphi}(\theta))\) and \(\vec{n}(p_{\varphi}(\theta))\), which are tangential and normal to the path at \(p_{xy}(\theta)\), respectively (refer to Fig. 3). That is:
\[e(q(t),p(\theta(t)))=e_{n}(q,p)\vec{n}(p_{\varphi})+e_{t}(q,p)\vec{t}(p_{ \varphi}), \tag{10}\]
where the scalars \(e_{n}\) and \(e_{t}\) are the projection of \(e_{XY}\) onto each orthonormal vector, computed by the dot product
\[e_{n}(q,p)=\left(q-p_{xy}\right)^{\mathrm{ T}}\vec{n}(p_{\varphi}),\] \[e_{t}(q,p)=\left(q-p_{xy}\right)^{\mathrm{ T}}\vec{t}(p_{\varphi}).\]
Fig. 2: Simplified 2-dimensional visualization of the data points used to build the training set \(\mathcal{T}\). The vectors \(\vec{n}\), \(\vec{t}\), \(\vec{o}\) (coming out of the page) are orthonormal. Only \(p_{x}\) and \(p_{y}\) are shown (the orientation \(p_{\varphi}\) is not depicted). The figure shows the corridor for three values of \(\theta_{i}\), and at each point \(p(\theta_{i})\) (cross) a corridor \(C(\theta_{i})\) of \(9\) points (dots) is constructed. The width (\(0.02\) m in the figure) and the length (\(0.4\) m) of the corridor are measured normal (along \(\vec{n}\)) and tangential (along \(\vec{t}\)) to the path at \(p(\theta_{i})\), respectively.
In the case of an ellipse, the tangential and normal vectors are given by:
\[\vec{t}(p_{\varphi})=\begin{bmatrix}\cos(p_{\varphi})\\ \sin(p_{\varphi})\end{bmatrix},\ \ \ \vec{n}(p_{\varphi})=\begin{bmatrix}-\sin(p_{ \varphi})\\ \cos(p_{\varphi})\end{bmatrix}.\]
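A small sketch of ours of (10): the tangential/normal frame is built from the path heading and the Cartesian error is projected onto it.

```python
import numpy as np

def path_error(q, p):
    """Return (e_t, e_n): projections of q - p_xy onto the tangential and normal
    unit vectors derived from the path heading p_phi."""
    p_xy, p_phi = np.asarray(p[:2], dtype=float), p[2]
    t_vec = np.array([np.cos(p_phi), np.sin(p_phi)])    # tangential unit vector
    n_vec = np.array([-np.sin(p_phi), np.cos(p_phi)])   # normal unit vector
    e_xy = np.asarray(q, dtype=float) - p_xy
    return float(e_xy @ t_vec), float(e_xy @ n_vec)

# Example: robot slightly offset from the reference point p(theta).
e_t, e_n = path_error(q=[0.11, 0.05], p=[0.10, 0.00, np.pi / 2])
```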
### _Augmented Control Scheme_
Because the MPFC-QDNN is an approximation of the MPFC, the path-following error \(e\) resulting from \(\mathbb{D}(z;\Theta)\) is significantly larger than the error observed under the original MPFC controller \(\mathbb{M}(z)\) (see Section V). To compensate for this error we extend the MPFC-QDNN controller with an additional linear feedback which acts on the tangential component \(e_{t}\) through the forward speed of the robot \(s\), and on the normal component \(e_{n}\) through the robot's angular speed \(\omega\). Put differently, the compensation term \(w^{P}\) is added to the control vector, i.e., \(w=w^{D}+w^{P}\). Here \(w^{P}=[s^{P}\ \omega^{P}\ 0]^{\intercal}\), with \(s^{P}=P_{t}e_{t}\) and \(\omega^{P}=P_{n}e_{n}\), where \(P_{t}\) and \(P_{n}\) are the proportional gains (see Fig. 4). We denote this approach MPFC-QDNN+P. We selected static feedback mainly due to its simplicity and effectiveness, as shown in Section V.
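A sketch of ours of the augmentation described above; the gains are placeholders (the values used by the authors are not reported here), and \(e_t\), \(e_n\) are assumed to come from the error projection of the previous subsection.

```python
import numpy as np

def augmented_control(w_D, e_t, e_n, P_t, P_n):
    """w = w^D + w^P with w^P = [P_t * e_t, P_n * e_n, 0]: the forward speed s
    corrects the tangential error, the angular speed omega corrects the normal
    error, and the path speed v is left untouched."""
    w_P = np.array([P_t * e_t, P_n * e_n, 0.0])
    return np.asarray(w_D, dtype=float) + w_P

# Example with placeholder gains and a QDNN output w_D = [s, omega, v].
w = augmented_control(w_D=[0.10, 0.00, 0.10], e_t=0.05, e_n=-0.01, P_t=1.0, P_n=2.0)
```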
### _Implementation_
We consider an ellipse as the path (see Fig. 1), which is defined by the parametrization
\[p(\theta)=\begin{bmatrix}0.1\cos(\theta)&2\sin(\theta)&\arctan\left(\frac{-0. 1\sin(\theta)}{2\cos(\theta)}\right)\end{bmatrix}^{\intercal},\]
which yields the input references (3) and (4) as
\[s_{r}(\theta,v) =2v\sqrt{1-0.9975\sin^{2}(\theta)},\] \[\omega_{r}(\theta,v) =2v\left(40-39.9\sin^{2}(\theta)\right)^{-1}.\]
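For concreteness, a sketch of ours evaluating this parametrization and the reference inputs.

```python
import numpy as np

def path_point(theta):
    # p(theta) = [0.1 cos(theta), 2 sin(theta), arctan(-0.1 sin(theta) / (2 cos(theta)))].
    # Note: this arctan form divides by zero where cos(theta) = 0; an atan2-based
    # heading avoids that in a numerical implementation.
    return np.array([0.1 * np.cos(theta),
                     2.0 * np.sin(theta),
                     np.arctan(-0.1 * np.sin(theta) / (2.0 * np.cos(theta)))])

def reference_inputs(theta, v):
    # Input references s_r and omega_r as written above ((3) and (4)).
    s_r = 2.0 * v * np.sqrt(1.0 - 0.9975 * np.sin(theta) ** 2)
    omega_r = 2.0 * v / (40.0 - 39.9 * np.sin(theta) ** 2)
    return s_r, omega_r

p = path_point(np.pi / 4)
s_r, omega_r = reference_inputs(theta=np.pi / 4, v=0.1)
```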
To generate the training set \(\mathcal{T}\), we use a corridor consisting of a cuboid of width \(2c_{W}=0.02\), length \(2c_{L}=0.2\), and height \(2c_{H}=\frac{2\pi}{3}\). Each axis is split into \(5\), \(5\), and \(40\) equidistant points (\(N_{c}=1000\)), respectively. We split the path in \(N_{p}=4000\) equidistant segments between \(0\leq\theta\leq 2\pi\), which corresponds to a full turn around the path. The subset of states in the corridor \(\mathcal{Z}_{C}\) consists of \(N_{p}N_{c}=4E6\) points.
To solve the MPFC problem (5), and consequently find \(w=\mathbb{M}(z),\ \forall\ z\in\mathcal{Z}_{C}\) according to (6), we use the Optimization Engine (OpEn) [5], a fast solver for optimal control problems. The training set \(\mathcal{T}\) consists of \(N_{p}N_{c}\) pairs of vectors \(z\), \(\mathbb{M}(z)\) for all \(z\in\mathcal{Z}_{C}\). We use a discretization time \(\delta=0.01\) s, and a horizon length \(T=0.6\) s in (5).
We use a random search approach to find the hyper-parameters of a network that is a good approximation to \(\mathbb{M}\), under the constraint that the number of parameters \(N_{\Theta}\) should remain _small_, i.e., to reduce the size of the network in the MCU's ROM. Random search typically delivers better results than manual or grid search for the same amount of computation during training [16]. The selected hyper-parameters were the number of hidden layers \(H\), the number of units in each hidden layer \(n_{k}\), \(k=1,\ldots,H\), and the learning rate of the optimization algorithm (see the Appendix). To find the hyper-parameters we use KerasTuner [17]. To perform the training and the quantization of the network we use the deep learning framework Keras/TensorFlow [18, 19].
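As a hedged illustration of this kind of search (our sketch; the ranges, epochs and data names are placeholders, not the authors' settings), a KerasTuner random search over depth, width and learning rate could look as follows.

```python
import keras_tuner as kt
import tensorflow as tf

def build_model(hp):
    # Hyper-parameters searched: number of hidden layers, units per layer, learning rate.
    model = tf.keras.Sequential()
    model.add(tf.keras.Input(shape=(4,)))            # normalized extended state z
    for k in range(hp.Int("hidden_layers", 1, 10)):
        model.add(tf.keras.layers.Dense(hp.Int(f"units_{k}", 8, 64, step=8),
                                        activation="relu"))
    model.add(tf.keras.layers.Dense(3))              # w^D = [s, omega, v]; linear output is one common choice
    lr = hp.Float("learning_rate", 1e-4, 1e-2, sampling="log")
    model.compile(optimizer=tf.keras.optimizers.Adam(lr), loss="mse")
    return model

tuner = kt.RandomSearch(build_model, objective="val_loss",
                        max_trials=50, project_name="mpfc_qdnn")
# T_in (N_T, 4) and T_out (N_T, 3) are the normalized training matrices (placeholders):
# tuner.search(T_in, T_out, validation_split=0.1, epochs=200)
```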
We implement a Hardware-In-the-Loop (HIL) simulation where the MPFC-QDNN+P is deployed on an STM32F407 MCU, which is based on a Cortex-M4 processor core running at 168 MHz, which includes a single-precision floating-point unit and \(1\) MB flash ROM. The robot dynamics are simulated on a PC, see the Appendix for details.
The QDNN consists of 9 hidden layers with roughly 4700 parameters, using 8-bit integers to store the parameters (i.e. \(1\) parameter requires \(1\) byte of ROM). The quantized parameter set \(\Theta\) requires less than \(5\) kB of the MCU's flash memory.
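The text does not spell out the quantization pipeline; one common way to obtain such an 8-bit integer model from a trained Keras network is TensorFlow Lite post-training quantization, sketched below under that assumption (calibration data and names are placeholders).

```python
import numpy as np
import tensorflow as tf

def convert_to_int8(model, calib_states):
    """Full-integer post-training quantization of a trained Keras model.

    calib_states: representative, already-normalized states of shape (N, 4) used
    to calibrate the activation ranges."""
    def representative_dataset():
        for z_bar in calib_states.astype(np.float32):
            yield [z_bar.reshape(1, 4)]

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8
    return converter.convert()   # flatbuffer bytes to be placed in the MCU's flash ROM

# tflite_bytes = convert_to_int8(trained_model, calib_states)   # placeholders
```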
## V Results
As our reference implementation (denoted MPFC-OpEn), we use OpEn (the same solver used for training) to solve the MPFC problem (5). The advantages of MPFC are illustrated in Fig. 5. The input \(w\) computed by OpEn to steer the robot along the path in Fig. 6 shows that when the path curvature is tight, i.e. top (\(\theta=\frac{\pi}{2}\)) and bottom (\(\theta=\frac{3\pi}{2}\)) of the ellipse, the path speed \(v\) is reduced, and consequently the robot's forward speed \(s\) is also reduced. This allows the robot to follow the tight curve. Similarly, \(v\) is reduced when the constraints on \(s\) are active (e.g. \(\theta=\pi\)) because the robot cannot otherwise closely follow the path.
The main advantage of using a DNN on an MCU is that it is relatively easy to implement quantization (8), inference (7), and dequantization (9) sequentially for all layers in the network. Furthermore, for a small network like the one used here (\(4700\) parameters), the inference is executed much faster than solving the OCP (5).
Table I shows the execution time for different implementations of MPFC.
Fig. 3: At any given time \(t_{k}\), the robot’s position in the global frame \(XY\) is given by \(q(t_{k})\). The unit vectors \(\vec{t}(\theta_{k})\) (not shown at scale), and \(\vec{n}(\theta_{k})\) are tangential and normal to the path at point \(p_{xy}(\theta_{k})\), respectively, and \(\theta_{k}=\theta(t_{k})\). The vector \(e(q,p)=q-p_{xy}=e_{t}(q,p)\vec{t}(p_{\varphi})+e_{n}(p,q)\vec{n}(p_{\varphi})\) is the current robot’s position with respect to the path.
Our experiments ran on a PC with Ubuntu Linux 22.04-LTS and an x86-64 processor with a \(2.4\) GHz clock. Compared to MPFC-OpEn, the MPFC-QDNN+P implementation is on average over two orders of magnitude faster on the PC.
The QDNN implementation using \(8\)-bit integers requires on average \(230\) microseconds to execute on the MCU. Currently, running OpEn on an MCU is not supported.
Fig. 6 shows a comparison of the path in the Cartesian \(XY\) plane followed by the simulated robot using different implementations. The absolute Cartesian position error is shown in Fig. 7, with a summary presented in Table II. All implementations can follow the path, with OpEn being the most accurate. When the worst-case error is considered, using a regular (non-quantized) DNN is two orders of magnitude worse than the OpEn implementation. The QDNN implementation has worse overall performance than the non-quantized network. Finally, the proposed addition of two P controllers to the QDNN reduces its worst-case error by an order of magnitude and outperforms the DNN.
## VI Conclusions
The paper presented a model predictive path following implementation using quantized deep neural networks augmented with a controller for quantization error compensation. We showed a practical way to select the training set, and how to design the error compensation controller. Compared to a traditional MPFC using online optimization, our proposed approach requires only a fraction of the memory and runs orders of magnitude faster in PC simulations. Although the path-following accuracy is slightly degraded, we believe the performance may still be good for low-cost applications.
\begin{table}
\begin{tabular}{|c|c|c|} \hline & Mean & Max. \\ \hline OpEn & 1.9E-4 & 3.3E-4 \\ DNN & 7.5E-3 & 2.1E-2 \\ QDNN & 1.6E-2 & 4.8E-2 \\ QDNN+P & 6.1E-4 & 4.9E-3 \\ \hline \end{tabular}
\end{table} TABLE II: Mean and maximum Cartesian error in the path.
Fig. 4: Block diagram of the proposed combined controller. Inside the dashed block (MPFC-QDNN) is the MPFC controller approximated by a QDNN. The extended state \(z\) is fed into the MPFC-QDNN. The state is first normalized (\(\bar{z}\)) then quantized (\(\hat{z}\)), and finally fed into the QDNN block for inference of the approximate control input \(\hat{w}\). Dequantization and denormalization are applied to get the approximated augmented control input \(w^{D}\). The block \(E\) computes the robot’s path error vector \(e\) (10) used by the P controllers. The vector \(w^{P}\) is added to \(w^{D}\) to compute the input \(w\). The forward velocity \(s\), and angular velocity \(\omega\) are applied to the robot, whereas the path velocity \(v\) is integrated to compute the path variable \(\theta\).
Fig. 5: Control inputs vs. path parameter. When the path curvature is tight (near \(\frac{\pi}{2}\), and \(\frac{3\pi}{2}\)) the path speed \(v\), and the robot’s forward speed \(s\) are reduced. The path speed \(v\) is also reduced when the input constraints are active (near \(\pi\)).
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Implementation & Mean [s] & Std. [s] & Worst [s] \\ \hline OpEn (PC) & 1.2E-3 & 5.9E-4 & 7.8E-3 \\ QDNN+P (PC) & 7.3E-6 & 2.9E-6 & 3.2E-5 \\ \hline OpEn (MCU) & - & - & - \\ QDNN+P (MCU) & 2.3E-4 & 2.1E-6 & 2.4E-4 \\ \hline \end{tabular}
\end{table} TABLE I: Mean, standard deviation (Std.), worst-case execution (Worst) time in seconds for 10000 steps close to the path. The table is split into MPFC implementations on a personal computer (PC, top part) and a microcontroller (MCU, bottom part). Note that the execution of QDNN+P on the MCU is temporally deterministic. In the PC case, the QDNN+P has more variability due to the operating system.
With a hardware-in-the-loop implementation using a microcontroller, we showed the effectiveness of this approach for low-cost embedded devices. Future work will discuss how to handle different path geometries with one trained QDNN and how to give performance guarantees.
## Appendix

The hyperparameters of the QDNN network are the learning rate \(=4.5E-4\), the activation function (ReLU), the number of hidden layers \(H=9\), and the units on each layer: input layer \(4\) units, followed by the hidden layers with \(48\), \(16\), \(24\), \(16\), \(16\), \(40\), \(24\), \(16\), and \(24\) units, and output layer \(3\) units.
The parameters of OCP (5) are the matrices \(Q=\text{diag}(2E5,2E5,1E5,0)\), \(R=\text{diag}(1E1,5E3,1E5)\), the box sets \(\mathcal{Z}=\{z\in\mathbb{R}^{4}\mid\underline{z}\leq z\leq\overline{z}\}\), with \(\underline{z}=[-5,-15,-\frac{\pi}{2},-5]\), \(\overline{z}=[5,15,\frac{\pi}{2},-5]\), and \(\mathcal{W}=\{w\in\mathbb{R}^{3}\mid\underline{w}\leq w\leq\overline{w}\}\), with \(\underline{w}=[-0.26,-0.455,0]\), \(\overline{w}=[0.26,0.455,0.15]\), the discretization time \(\delta=0.01\)s, and the horizon length steps \(N=60\).
|
2303.07912 | Error estimates of deep learning methods for the nonstationary Magneto-hydrodynamics equations | In this study, we prove rigorous bounds on the error and stability analysis of deep learning methods for the nonstationary Magneto-hydrodynamics equations. We obtain the approximation ability of the neural network via the convergence of a loss function and the convergence of a Deep Neural Network (DNN) to the exact solution. Moreover, we derive explicit error estimates for the solution computed by optimizing the loss function in the DNN approximation of the solution. | Hailong Qiu | 2023-03-14T13:55:36Z | http://arxiv.org/abs/2303.07912v1 |

# Error estimates of deep learning methods for the nonstationary Magneto-hydrodynamics equations
###### Abstract
In this study, we prove rigorous bounds on the error and stability analysis of deep learning methods for the nonstationary Magneto-hydrodynamics equations. We obtain the approximation ability of the neural network via the convergence of a loss function and the convergence of a Deep Neural Network (DNN) to the exact solution. Moreover, we derive explicit error estimates for the solution computed by optimizing the loss function in the DNN approximation of the solution.
Keywords: Magneto-hydrodynamics equations; deep learning method; error estimate; stability. MSC: 65N30; 35M30; 35M35
## 1 Introduction
Deep learning has developed very successfully at the forefront of the artificial intelligence revolution and of data science over the last thirty years. Deep learning methods have found a wide range of applications in computer vision, natural language processing and image recognition [21; 10; 12]. Recently, as deep neural networks (DNN) are universal function approximators [13; 14; 15; 36], it has also become natural to use them for the solution of partial differential equations (PDEs) [7; 28; 27; 32; 30; 17; 18; 11; 38; 4; 6]. Prominent examples of the application of deep learning methods to PDEs include the deep neural network approximation of elliptic PDEs [31; 22], nonlinear hyperbolic PDEs [19; 20] and the Navier-Stokes equations [16; 35; 8; 19; 25], and references therein.
The need for deep learning comes from the fact that standard numerical methods may become infeasible when applied to high-dimensional PDEs. High-dimensional PDEs arise in a variety of contexts, for instance in derivative pricing systems, financial models, credit valuation adjustment problems and portfolio optimization problems. These high-dimensional nonlinear PDEs are extraordinarily difficult to compute, as the computational effort of traditional approximation methods grows with the dimension. For example, in finite difference or finite element methods, the number of grid points grows considerably with the dimension of the PDE, which drives up memory demands and computational cost. In contrast, deep learning methods for PDEs exhibit implicit regularization and can surmount the curse of high dimensions [1; 2]. In addition, deep learning methods provide a natural framework for bounding unknown parameters [27; 30; 35; 33].
Here our primary goal is the numerical analysis of deep learning methods for solving the nonstationary Magneto-hydrodynamics equations. The Magneto-hydrodynamics system describes the hydrodynamical behavior of conducting fluids subject to external magnetic fields. Magneto-hydrodynamics systems are built from a combination of Navier-Stokes problems and Maxwell problems. The study of the Magneto-hydrodynamics model is of great importance in both mathematical theory and practical applications, such as astrophysics, geophysics, the design of cooling systems with liquid metals for nuclear reactors, meteorology, plasma physics and magnetohydrodynamics generators [23; 5].
So far, there is a substantial literature proposing numerical schemes that apply DNNs and machine learning tools to PDEs, including the Navier-Stokes equations and the Magneto-hydrodynamics equations [26; 27; 35; 24; 17; 29; 8; 25; 33; 37]. The computational algorithm used in deep learning for PDEs [27; 11; 29] represents the approximate solution by a DNN, in lieu of a finite difference, spectral or finite element method, and then establishes an appropriate loss function, which measures the deviation of this representation from the PDE and from the initial and boundary conditions and is minimized over such representations. It is well known that the optimization of loss functions in a DNN is a non-convex optimization problem. Therefore, neither the existence nor the uniqueness of a global optimum is ensured. The main focus of this paper is the approximation ability of the neural network, established through the convergence of a loss function and the convergence of a DNN to the exact solution. Furthermore, we obtain explicit error estimates for the solution computed by optimizing the loss function in the DNN approximation of the solution. Our method is similar to the error analysis of the machine learning algorithm for the Navier-Stokes equations in [3].
The paper is organized as follows. In section 2, we present some preliminary results, including the incompressible Magneto-hydrodynamics equations and neural networks. In section 3, we obtain convergence of the loss function and convergence of the DNN to the unique solution. In section 4, we prove convergence rates and stability of the DNN for the velocity field and the magnetic field.
## 2 Preliminaries
### The Magneto-hydrodynamics equations
In this paper, we study the deep learning methods for the nonstationary magneto-hydrodynamics fluid flow. The governing equations are given as follows:
\[\partial_{t}\mathbf{u}-\nu\Delta\mathbf{u}+(\mathbf{u}\cdot\nabla)\mathbf{u}+S\mathbf{B}\times\mathrm{curl}\,\mathbf{B}+\nabla p=\mathbf{f},\ \text{ in }\Omega_{T}:=\Omega\times(0,T], \tag{1a}\] \[\partial_{t}\mathbf{B}+\mu\,\mathrm{curl}(\mathrm{curl}\,\mathbf{B})-\mathrm{curl}(\mathbf{u}\times\mathbf{B})=0,\ \text{ in }\Omega_{T}, \tag{1b}\] \[\mathrm{div}\,\mathbf{u}=0,\ \text{ in }\Omega_{T}, \tag{1c}\] \[\mathrm{div}\,\mathbf{B}=0,\ \text{ in }\Omega_{T}. \tag{1d}\]
The homogeneous boundary conditions and initial conditions are presented:
\[\mathbf{u} =\mathbf{0},\quad\text{ on }\partial\Omega_{T}, \tag{2a}\] \[\mathbf{B}\cdot\mathbf{n} =0,\quad\text{ on }\partial\Omega_{T},\] (2b) \[\mathrm{curl}\ \mathbf{B}\times\mathbf{n} =0,\quad\text{ on }\partial\Omega_{T}, \tag{2c}\]
and
\[\mathbf{u}(\mathbf{x},0)=\mathbf{u}_{0}(\mathbf{x}),\ \ \mathbf{B}( \mathbf{x},0)=\mathbf{B}_{0}(\mathbf{x}),\ \ \text{in }\ \Omega, \tag{3}\]
where \(T>0\) denotes the final time, and \(\Omega\subset\mathbf{R}^{2}\) is a bounded and convex domain with continuous boundary \(\partial\Omega\). \(\mathbf{u}\), \(\mathbf{B}\) and \(p\) denote the velocity, the magnetic field and the pressure, respectively. \(\mathbf{f}\) is the known body force. The positive constants \(\nu\) and \(\mu\) stand for the fluid viscous diffusivity coefficient and the magnetic diffusivity coefficient, respectively. \(S\) denotes the coupling coefficient.
We introduce some Sobolev spaces
\[\mathcal{X} :=H_{0}^{1}(\Omega)^{2}=\big{\{}\mathbf{v}\in H^{1}(\Omega)^{2}:\mathbf{v}|_{\partial\Omega}=0\big{\}},\] \[\mathcal{W} :=H_{n}^{1}(\Omega)^{2}:=\big{\{}\mathbf{w}\in H^{1}(\Omega)^{2}:\,\mathbf{w}\cdot\mathbf{n}|_{\partial\Omega}=0\big{\}},\] \[\mathcal{M} :=L_{0}^{2}(\Omega)=\big{\{}q\in L^{2}(\Omega),\int_{\Omega}q\,d\mathbf{x}=0\big{\}}.\]
For convenience, we also define some necessary bilinear terms
\[a_{f}(\mathbf{u},\mathbf{v}) =\int_{\Omega}\nu\nabla\mathbf{u}\cdot\nabla\mathbf{v}\,d\mathbf{x},\quad d(\mathbf{v},q)=\int_{\Omega}q\,\mathrm{div}\,\mathbf{v}\,d\mathbf{x},\] \[a_{B}(\mathbf{B},\mathbf{H}) =\int_{\Omega}\mu\,\mathrm{curl}\,\mathbf{B}\cdot\mathrm{curl}\,\mathbf{H}\,d\mathbf{x}+\int_{\Omega}\mu\,\mathrm{div}\,\mathbf{B}\cdot\mathrm{div}\,\mathbf{H}\,d\mathbf{x},\]
and trilinear terms
\[b(\mathbf{w},\mathbf{u},\mathbf{v}) =\frac{1}{2}\int_{\Omega}[(\mathbf{w}\cdot\nabla)\mathbf{u}]\cdot \mathbf{v}-[(\mathbf{w}\cdot\nabla)\mathbf{v}]\cdot\mathbf{u}d\mathbf{x}=\int_{ \Omega}[(\mathbf{w}\cdot\nabla)\mathbf{u}]\cdot\mathbf{v}+\frac{1}{2}[(\nabla \cdot\mathbf{w})\mathbf{u}]\cdot\mathbf{v}d\mathbf{x},\] \[c_{\widehat{B}}(\mathbf{H},\mathbf{B},\mathbf{v}) =\int_{\Omega}S\mathbf{H}\times\mathrm{curl}\mathbf{B}\cdot \mathbf{v}d\mathbf{x},\ \ \ \ \ c_{\widetilde{B}}(\mathbf{u},\mathbf{B},\mathbf{H})=\int_{\Omega}( \mathbf{u}\times\mathbf{B})\cdot\mathrm{curl}\,\mathbf{H}d\mathbf{x}.\]
Additionally, thanks to integrating by parts, one finds that
\[b(\mathbf{u},\mathbf{v},\mathbf{v})=0,\
## 3 Convergence of DNN
The minimization problem of (1)-(3) is defined as
\[\inf_{(\mathbf{u},\mathbf{B},p)\in\ \text{appropriate Sobolev space}}\Big{\{}\|\mathfrak{L}_{f}[\mathbf{u}, \mathbf{B},p]\|_{L^{2}(\Omega_{T})}^{2}+\|\mathfrak{L}_{B}[\mathbf{u},\mathbf{ B}]\|_{L^{2}(\Omega_{T})}^{2} \tag{7}\] \[+\|\mathrm{div}\mathbf{u}\|_{L^{2}(\Omega_{T})}^{2}+\|\mathrm{div} \mathbf{B}\|_{L^{2}(\Omega_{T})}^{2}+\|\mathbf{u}_{|\partial\Omega}\|_{L^{2}( \partial\Omega_{T})}^{2}+\|\mathbf{B}\cdot\mathbf{n}_{|\partial\Omega}\|_{L^{2 }(\partial\Omega_{T})}^{2}\Big{\}}.\]
In order to approximate \((\mathbf{u},\mathbf{B},p)\) using a DNN, we consider the following loss function
\[L =\alpha_{1}\|\mathfrak{L}_{f}[\mathbf{u}_{\theta},\mathbf{B}_{\theta},p_{\theta}]\|_{L^{2}(\Omega_{T})}^{2}+\alpha_{2}\|\mathfrak{L}_{B}[\mathbf{u}_{\theta},\mathbf{B}_{\theta}]\|_{L^{2}(\Omega_{T})}^{2}\] \[\quad+\alpha_{3}\|\mathrm{div}\,\mathbf{u}_{\theta}\|_{L^{2}(\Omega_{T})}^{2}+\alpha_{4}\|\mathrm{div}\,\mathbf{B}_{\theta}\|_{L^{2}(\Omega_{T})}^{2}+\alpha_{5}\|\mathbf{u}_{\theta}|_{\partial\Omega}\|_{L^{2}(\partial\Omega_{T})}^{2}\] \[\quad+\alpha_{6}\|\mathbf{B}_{\theta}\cdot\mathbf{n}|_{\partial\Omega}\|_{L^{2}(\partial\Omega_{T})}^{2},\qquad(\mathbf{u}_{\theta},\mathbf{B}_{\theta},p_{\theta})\in\mathfrak{F}_{N}.\]
Assume that \(\mathfrak{F}_{N}\) is a finite dimensional function space on a bounded domain \(\Omega\). Take collocation points \(\{x_{j}\}_{j=1}^{m}\subset\Omega\) and \(\{y_{j}\}_{j=1}^{n}\subset\partial\Omega\), and find
\[\inf_{(\mathbf{u},\mathbf{B},p)\in\mathfrak{F}_{N}}\Big{\{}\alpha_{1}\sum_{j=1}^{m}|\mathfrak{L}_{f}[\mathbf{u}(x_{j}),\mathbf{B}(x_{j}),p(x_{j})]|^{2}+\alpha_{2}\sum_{j=1}^{m}|\mathfrak{L}_{B}[\mathbf{u}(x_{j}),\mathbf{B}(x_{j})]|^{2} \tag{8}\] \[\quad+\alpha_{3}\sum_{j=1}^{m}|\mathrm{div}\,\mathbf{u}(x_{j})|^{2}+\alpha_{4}\sum_{j=1}^{m}|\mathrm{div}\,\mathbf{B}(x_{j})|^{2}+\alpha_{5}\sum_{j=1}^{n}|\mathbf{u}(y_{j})|_{\partial\Omega}|^{2}\] \[\quad+\alpha_{6}\sum_{j=1}^{n}|\mathbf{B}(y_{j})\cdot\mathbf{n}|_{\partial\Omega}|^{2}\Big{\}}.\]
Here, we note that in (8) one may use the Monte Carlo method to compute the corresponding Lebesgue integrals. Therefore, let us consider the optimization problem as follows:
\[\inf_{(\mathbf{u},\mathbf{B},p)\in\mathfrak{F}_{N}}\Big{\{}\| \mathfrak{L}_{f}[\mathbf{u},\mathbf{B},p]\|_{L^{2}(\Omega_{T})}^{2}+\| \mathfrak{L}_{B}[\mathbf{u},\mathbf{B}]\|_{L^{2}(\Omega_{T})}^{2}+\|\mathrm{ div}\mathbf{u}\|_{L^{2}(\Omega_{T})}^{2} \tag{9}\] \[\quad+\|\mathrm{div}\mathbf{B}\|_{L^{2}(\Omega_{T})}^{2}+\| \mathbf{u}|_{\partial\Omega}\|_{L^{2}(\partial\Omega_{T})}^{2}+\|\mathbf{B} \cdot\mathbf{n}|_{\partial\Omega}\|_{L^{2}(\partial\Omega_{T})}^{2}\Big{\}}.\]
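Purely as an illustration of how a discrete loss of the form (8) can be assembled in practice (this sketch is ours and is not part of the paper), the fragment below penalizes the divergence constraints and the boundary terms at collocation points with automatic differentiation; the residuals \(\mathfrak{L}_{f}\) and \(\mathfrak{L}_{B}\) are built from the same derivatives in the same way and are omitted for brevity.

```python
import torch

def mlp(out_dim):
    # Small fully connected network; the input is a space-time point (x, y, t).
    return torch.nn.Sequential(
        torch.nn.Linear(3, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, out_dim))

u_net, B_net, p_net = mlp(2), mlp(2), mlp(1)   # candidate (u_theta, B_theta, p_theta)

def divergence(net, xyt):
    # div f = d f_1/dx + d f_2/dy via automatic differentiation.
    val = net(xyt)
    d1 = torch.autograd.grad(val[:, 0].sum(), xyt, create_graph=True)[0][:, 0]
    d2 = torch.autograd.grad(val[:, 1].sum(), xyt, create_graph=True)[0][:, 1]
    return d1 + d2

def partial_loss(interior, boundary, normals, alpha):
    # interior: collocation points x_j in Omega_T; boundary/normals: points y_j on
    # the lateral boundary with their outward unit normals.
    interior = interior.clone().requires_grad_(True)
    div_u = divergence(u_net, interior)
    div_B = divergence(B_net, interior)
    u_bd = u_net(boundary)                          # u|_{partial Omega} should vanish
    Bn_bd = (B_net(boundary) * normals).sum(-1)     # B . n on the boundary
    # The residuals L_f and L_B (time derivatives, Laplacian, curl and pressure terms)
    # would be assembled from the same autograd derivatives and weighted by
    # alpha[0], alpha[1]; they are omitted here to keep the sketch short.
    return (alpha[2] * (div_u ** 2).mean() + alpha[3] * (div_B ** 2).mean()
            + alpha[4] * (u_bd ** 2).mean() + alpha[5] * (Bn_bd ** 2).mean())

# Toy usage on the unit square with random collocation points (illustration only).
x_in = torch.rand(256, 3)                                        # (x, y, t)
x_bd = torch.cat([torch.rand(64, 1), torch.ones(64, 1), torch.rand(64, 1)], dim=1)
n_bd = torch.tensor([[0.0, 1.0]]).repeat(64, 1)                  # normal on the edge y = 1
loss = partial_loss(x_in, x_bd, n_bd, alpha=[1.0] * 6)
loss.backward()
```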
### Convergence of the loss function
**Lemma 1**: _[_3_]_ _Given \(\epsilon>0\), assume that \((\mathbf{u},\mathbf{B},p)\) is the solution of problem (1)-(3). Then there exists \((\mathbf{u}_{\theta},\mathbf{B}_{\theta},p_{\theta})\in\mathfrak{F}_{N}\) such that_
\[\sup_{t\in[0,T]}\|\mathbf{u}(t)-\mathbf{u}_{\theta}(t)\|_{L^{2}( \Omega)}\leq C\epsilon,\qquad\sup_{t\in[0,T]}\|\mathbf{B}(t)-\mathbf{B}_{ \theta}(t)\|_{L^{2}(\Omega)}\leq C\epsilon,\] \[\|\mathbf{u}-\mathbf{u}_{\theta}\|_{H^{1,2}(\Omega_{T})}\leq C\epsilon, \qquad\|\mathbf{B}-\mathbf{B}_{\theta}\|_{H^{1,2}(\Omega_{T})}\leq C\epsilon,\] \[\|\mathbf{u}-\mathbf{u}_{\theta}\|_{L^{4}([0,T]\times W^{1,4}( \Omega))}\leq C\epsilon,\qquad\|\mathbf{B}-\mathbf{B}_{\theta}\|_{L^{4}([0,T] \times W^{1,4}(\Omega))}\leq C\epsilon,\] \[\|p-p_{\theta}\|_{L^{2}([0,T]\times H^{1}(\Omega))}\leq C\epsilon.\]
**Theorem 1**: _Under the assumptions of **Lemma 1**, there exists \((\vec{u}_{\theta},\vec{B}_{\theta},p_{\theta})\in\mathfrak{F}_{N}\) such that_
\[\inf_{(\vec{u}_{\theta},\vec{B}_{\theta},p_{\theta})\in\mathfrak{F}_{N}}\Bigl{\{}\|\mathfrak{L}_{f}[\vec{u}_{\theta},\vec{B}_{\theta},p_{\theta}]\|_{L^{2}(\Omega_{T})}^{2}+\|\mathfrak{L}_{B}[\vec{u}_{\theta},\vec{B}_{\theta}]\|_{L^{2}(\Omega_{T})}^{2}+\|\text{div}\vec{u}_{\theta}\|_{L^{2}(\Omega_{T})}^{2}\] \[\quad+\|\text{div}\vec{B}_{\theta}\|_{L^{2}(\Omega_{T})}^{2}+\|\vec{u}_{\theta}|_{\partial\Omega}\|_{L^{2}(\partial\Omega_{T})}^{2}+\|\vec{B}_{\theta}\cdot\vec{n}|_{\partial\Omega}\|_{L^{2}(\partial\Omega_{T})}^{2}\Bigr{\}}\leq C\epsilon.\]
_Proof:_ Let \((\vec{u},\vec{B},p)\) be the solution of (1)-(3). Thus one finds that
\[\|\mathfrak{L}_{f}[\vec{u}_{\theta},\vec{B}_{\theta},p_{\theta}] -\mathfrak{L}_{f}[\vec{u},\vec{B},p]\|_{L^{2}(\Omega_{T})}^{2}+\|\mathfrak{L}_ {B}[\vec{u}_{\theta},\vec{B}_{\theta}]-\mathfrak{L}_{B}[\vec{u},\vec{B}]\|_{L^ {2}(\Omega_{T})}^{2} \tag{10}\] \[\quad+\|\text{div}(\vec{u}_{\theta}-\vec{u})\|_{L^{2}(\Omega_{T}) }^{2}+\|\text{div}(\vec{B}_{\theta}-\vec{B})\|_{L^{2}(\Omega_{T})}^{2}\] \[\quad+\|(\vec{u}_{\theta}-\vec{u})|_{\partial\Omega}\|_{L^{2}( \partial\Omega_{T})}^{2}+\|(\vec{B}_{\theta}-\vec{B})\cdot\vec{n}|_{\partial \Omega}\|_{L^{2}(\partial\Omega_{T})}^{2}.\] \[=I_{1}+\ldots+I_{6}.\]
In the following, we will bound the terms of (10) one by one. For the nonlinear terms, we obtain
\[\|(\vec{u}\cdot\nabla)\vec{u}-(\vec{u}_{\theta}\cdot\nabla)\vec{u }_{\theta}\|_{L^{2}(\Omega_{T})}^{2} \tag{11}\] \[\leq C\|\vec{u}-\vec{u}_{\theta}\|_{L^{4}(\Omega_{T})}^{2}\|\vec{ u}_{\theta}\|_{L^{4}(\Omega_{T})}^{2}\] \[\quad+C\|\vec{u}\|_{L^{4}(\Omega_{T})}^{2}\|\nabla(\vec{u}-\vec{u }_{\theta})\|_{L^{4}(\Omega_{T})}^{2},\] \[\quad\|\vec{B}\times\text{curl}\vec{B}-\vec{B}_{\theta}\times \text{curl}\vec{B}_{\theta}\|_{L^{2}(\Omega_{T})}^{2}\] (12) \[\leq C\|\vec{B}-\vec{B}_{\theta}\|_{L^{4}(\Omega_{T})}^{2}\|\vec{ B}_{\theta}\|_{L^{4}(\Omega_{T})}^{2}\] \[\quad+C\|\vec{B}\|_{L^{4}(\Omega_{T})}^{2}\|\nabla(\vec{B}-\vec{ B}_{\theta})\|_{L^{4}(\Omega_{T})}^{2},\] \[\quad\|\text{curl}(\vec{u}\times\vec{B})-\text{curl}(\vec{u}_{ \theta}\times\vec{B}_{\theta})\|_{L^{2}(\Omega_{T})}^{2}\] (13) \[\leq C\|\vec{u}-\vec{u}_{\theta}\|_{L^{4}(\Omega_{T})}^{2}\|\vec{ B}_{\theta}\|_{L^{4}(\Omega_{T})}^{2}\] \[\quad+C\|\vec{u}\|_{L^{4}(\Omega_{T})}^{2}\|\nabla(\vec{B}-\vec{ B}_{\theta})\|_{L^{4}(\Omega_{T})}^{2}.\]
Using **Lemma 1**, it follows that
\[\|\partial_{t}\vec{u}-\partial_{t}\vec{u}_{\theta}\|_{L^{2}( \Omega_{T})}^{2}+\|\partial_{t}\vec{B}-\partial_{t}\vec{B}_{\theta}\|_{L^{2}( \Omega_{T})}^{2}\leq C\epsilon^{2}, \tag{14}\] \[\|\Delta\vec{u}-\Delta\vec{u}_{\theta}\|_{L^{2}(\Omega_{T})}^{2}+ C\|\text{curl}\,\text{curl}\,(\vec{B}-\vec{B}_{\theta})\|_{L^{2}(\Omega_{T})}^{2} \leq C\epsilon^{2},\] (15) \[\|\nabla p-\nabla p_{\theta}\|_{L^{2}(\Omega_{T})}^{2}\leq C \epsilon^{2}. \tag{16}\]
Applying the triangle inequality, we obtain
\[\|\nabla\vec{u}_{\theta}\|_{L^{4}(\Omega_{T})}^{2}\leq C\|\nabla \vec{u}_{\theta}-\nabla\vec{u}\|_{L^{4}(\Omega_{T})}^{2}+\|\nabla\vec{u}\|_{L^{ 4}(\Omega_{T})}^{2}, \tag{17}\] \[\|\nabla\vec{B}_{\theta}\|_{L^{4}(\Omega_{T})}^{2}\leq C\|\nabla \vec{B}\|_{L^{4}(\Omega_{T})}^{2}+\|\nabla(\vec{B}-\vec{B}_{\theta})\|_{L^{4}( \Omega_{T})}^{2}. \tag{18}\]
and
\[\|\vec{u}\|_{L^{4}(H^{1}(\Omega)\times[0,T])}^{2}+\|\vec{B}\|_{L^{4}(H^{1}( \Omega)\times[0,T])}^{2}\leq C. \tag{19}\]
For term \(I_{1}\), we bound
\[\|\mathfrak{L}_{f}[\mathbf{u}_{\theta},\mathbf{B}_{\theta},p_{\theta} ]-\mathfrak{L}_{f}[\mathbf{u},\mathbf{B},p]\|_{L^{2}(\Omega_{T})}^{2}\] \[\leq\int_{0}^{T}\int_{\Omega}|\partial_{t}(\mathbf{u}_{\theta}- \mathbf{u})|^{2}dxdt+\int_{0}^{T}\int_{\Omega}\nu^{2}|\Delta(\mathbf{u}_{ \theta}-\mathbf{u})|^{2}dxdt\] \[\quad+\int_{0}^{T}\int_{\Omega}|(\mathbf{u}_{\theta}\cdot\nabla) \mathbf{u}_{\theta}-(\mathbf{u}\cdot\nabla)\mathbf{u}|^{2}dxdt+\int_{0}^{T} \int_{\Omega}|\nabla(p_{\theta}-p)|^{2}dxdt\] \[\quad+\int_{0}^{T}\int_{\Omega}S^{2}|\mathbf{B}_{\theta}\times \operatorname{curl}\mathbf{B}_{\theta}-\mathbf{B}\times\operatorname{curl} \mathbf{B}|^{2}dxdt\] \[\leq C\epsilon.\]
For term \(I_{2}\), we estimate
\[\|\mathfrak{L}_{B}[\mathbf{u}_{\theta},\mathbf{B}_{\theta}]- \mathfrak{L}_{B}[\mathbf{u},\mathbf{B}]\|_{L^{2}(\Omega_{T})}^{2}\] \[\leq\int_{0}^{T}\int_{\Omega}|\partial_{t}(\mathbf{B}_{\theta}- \mathbf{B})|^{2}dxdt+\int_{0}^{T}\int_{\Omega}\mu^{2}|\operatorname{curl} \operatorname{curl}\left(\mathbf{B}_{\theta}-\mathbf{B}\right)|^{2}dxdt\] \[\quad+\int_{0}^{T}\int_{\Omega}|\operatorname{curl}\left( \mathbf{u}_{\theta}\times\mathbf{B}_{\theta}\right)-\operatorname{curl} \left(\mathbf{u}\times\mathbf{B}\right)|^{2}dxdt\] \[\leq C\epsilon.\]
Similarly, for \(I_{3}+I_{4}+I_{5}+I_{6}\), we have
\[I_{3}+I_{4}+I_{5}+I_{6}\leq C\epsilon.\]
The desired result is derived. The proof is completed.
### Convergence of DNN to the unique solution
**Theorem 2**: _Assume that \((\boldsymbol{u},\boldsymbol{B},p)\in\mathcal{X}\times\mathcal{W}\times\mathcal{M}\) is the unique solution to (1)-(3). Then, when the sequence \((\boldsymbol{u}_{\theta}^{n},\boldsymbol{B}_{\theta}^{n},p_{\theta}^{n})\) of problem (9) is uniformly bounded and equicontinuous, the neural network solution \((\boldsymbol{u}_{\theta}^{n},\boldsymbol{B}_{\theta}^{n},p_{\theta}^{n})\) converges strongly to \((\boldsymbol{u},\boldsymbol{B},p)\)._
_Proof:_ Let \((\mathbf{u}_{\theta}^{n},\mathbf{B}_{\theta}^{n},p_{\theta}^{n})\in\mathfrak{F}_{N}\) be the solution of problem (9); then \((\mathbf{u}_{\theta}^{n},\mathbf{B}_{\theta}^{n},p_{\theta}^{n})\) satisfies
\[(\frac{d\mathbf{u}_{\theta}^{n}}{dt},\mathbf{v})+a(\mathbf{u}_{ \theta}^{n},\mathbf{v})+b(\mathbf{u}_{\theta}^{n},\mathbf{u}_{\theta}, \mathbf{v}) \tag{20a}\] \[\quad+Sc_{\widehat{B}}(\mathbf{B}_{\theta}^{n},\mathbf{B}_{ \theta}^{n},\mathbf{v})-d(\mathbf{v},p_{\theta})+d(\mathbf{u}_{\theta}^{n},q) =(\mathbf{f},\mathbf{v}),\] \[(\frac{d\mathbf{B}_{\theta}^{n}}{dt},\mathbf{H})+a_{B}(\mathbf{ B}_{\theta}^{n},\mathbf{H})-c_{\widetilde{B}}(\mathbf{u}_{\theta}^{n}, \mathbf{B}_{\theta}^{n},\mathbf{H})=0, \tag{20b}\]
for all \((\mathbf{v},\mathbf{H},q)\in\mathfrak{F}_{N}\).
Taking \((\mathbf{v},\mathbf{H},q)=(\mathbf{u}_{\theta}^{n},\mathbf{B}_{\theta}^{n},p_{ \theta}^{n})\) in (20), it follows that
\[\frac{d}{dt}\big{(}\|\mathbf{u}_{\theta}^{n}\|_{L^{2}(\Omega)}^{2}+S\|\mathbf{B }_{\theta}^{n}\|_{L^{2}(\Omega)}^{2}\big{)}+\nu\|\nabla\mathbf{u}_{\theta}^{n} \|_{L^{2}(\Omega)}^{2}+c_{0}\mu S\|\nabla\mathbf{B}_{\theta}^{n}\|_{L^{2}( \Omega)}^{2}\leq C\|\mathbf{f}\|_{L^{2}(\Omega)}^{2}.\]
Using the Gronwall inequality, we can get
\[\sup_{0\leq t\leq T}\big{(}\|\mathbf{u}_{\theta}^{n}(t)\|_{L^{2}( \Omega)}^{2}+\|\mathbf{B}_{\theta}^{n}(t)\|_{L^{2}(\Omega)}^{2}\big{)}+\int_ {0}^{T}\|\nabla\mathbf{u}_{\theta}^{n}\|_{L^{2}(\Omega)}^{2}dt \tag{21}\] \[+\int_{0}^{T}\|\nabla\mathbf{B}_{\theta}^{n}\|_{L^{2}(\Omega)}^ {2}dt\leq C.\]
Setting \((\mathbf{v},\mathbf{H},q)=(-\Delta\mathbf{u}_{\theta}^{n},\operatorname{ curl}\operatorname{curl}\mathbf{B}_{\theta}^{n},0)\) in (20), we obtain
\[\frac{1}{2}\frac{d}{dt}\big{(}\|\nabla\mathbf{u}_{\theta}^{n}\|_{L^{2}(\Omega)}^{2}+Sc_{0}\|\nabla\mathbf{B}_{\theta}^{n}\|_{L^{2}(\Omega)}^{2}\big{)}+\nu\|A\mathbf{u}_{\theta}^{n}\|_{L^{2}(\Omega)}^{2}+\mu S\|\Delta\mathbf{B}_{\theta}^{n}\|_{L^{2}(\Omega)}^{2} \tag{22}\] \[\leq|(\mathbf{f},A\mathbf{u}_{\theta}^{n})|+|b(\mathbf{u}_{\theta}^{n},\mathbf{u}_{\theta}^{n},A\mathbf{u}_{\theta}^{n})|+|Sc_{\widehat{B}}(\mathbf{B}_{\theta}^{n},\mathbf{B}_{\theta}^{n},A\mathbf{u}_{\theta}^{n})|+|Sc_{\widetilde{B}}(\mathbf{u}_{\theta}^{n},\mathbf{B}_{\theta}^{n},-\Delta\mathbf{B}_{\theta}^{n})|.\]
By the Hölder, Young and Sobolev inequalities, we have
\[|(\mathbf{f},A\mathbf{u}_{\theta}^{n})| \leq\frac{\nu}{8}\|A\mathbf{u}_{\theta}^{n}\|_{L^{2}(\Omega)}^{2 }+C\|\mathbf{f}\|_{L^{2}(\Omega)}^{2},\] \[|b(\mathbf{u}_{\theta},\mathbf{u}_{\theta}^{n},\mathbf{u}_{ \theta})| \leq\|\mathbf{u}_{\theta}^{n}\|_{L^{2}(\Omega)}^{\frac{1}{2}}\| \nabla\mathbf{u}_{\theta}^{n}\|_{L^{2}(\Omega)}\|A\mathbf{u}_{\theta}^{n}\|_{ L^{2}(\Omega)}^{\frac{3}{2}}\] \[\leq\frac{\nu}{8}\|A\mathbf{u}_{\theta}^{n}\|_{L^{2}(\Omega)}^{2} +C\|\mathbf{u}_{\theta}^{n}\|_{L^{2}(\Omega)}^{2}\|\nabla\mathbf{u}_{\theta}^ {n}\|_{L^{2}(\Omega)}^{4},\] \[|Sc_{\widehat{B}}(\mathbf{B}_{\theta}^{n},\mathbf{B}_{\theta},A \mathbf{u}_{\theta}^{n})| \leq C\|\mathbf{B}_{\theta}^{n}\|_{L^{4}(\Omega)}\|\operatorname{ curl}\mathbf{B}_{\theta}^{n}\|_{L^{4}(\Omega)}\|A\mathbf{u}_{\theta}^{n}\|_{L^{2}( \Omega)}\] \[\leq C\|\mathbf{B}_{\theta}^{n}\|_{L^{2}(\Omega)}^{\frac{1}{2}}\| \mathbf{B}_{\theta}^{n}\|_{H^{1}(\Omega)}^{\frac{1}{2}}\|\operatorname{ curl}\mathbf{B}_{\theta}^{n}\|_{L^{4}(\Omega)}\|A\mathbf{u}_{\theta}^{n}\|_{L^{2}( \Omega)}\] \[\leq\frac{\nu}{8}\|A\mathbf{u}_{\theta}^{n}\|_{L^{2}(\Omega)}^{2} +\frac{\mu S}{8}\|\Delta\mathbf{B}_{\theta}^{n}\|_{L^{2}(\Omega)}^{2}\] \[\quad+\|\mathbf{B}_{\theta}^{n}\|_{L^{2}(\Omega)}^{2}\|\mathbf{B} _{\theta}^{n}\|_{H^{1}(\Omega)}^{2}\|\nabla\mathbf{B}_{\theta}^{n}\|_{L^{2}( \Omega)},\] \[|Sc_{\widehat{B}}(\mathbf{u}_{\theta}^{n},\mathbf{B}_{\theta}^{n}, -\Delta\mathbf{B}_{\theta}^{n})| \leq C\big{(}\|\mathbf{u}_{\theta}^{n}\|_{L^{4}(\Omega)}\|\nabla \mathbf{B}_{\theta}^{n}\|_{L^{4}(\Omega)}+\|\nabla\mathbf{u}_{\theta}^{n}\|_{L^{4 }(\Omega)}\|\mathbf{B}_{\theta}^{n}\|_{L^{4}(\Omega)}\big{)}\|\Delta\mathbf{B }_{\theta}^{n}\|_{L^{2}(\Omega)}\] \[\leq\frac{\nu}{8}\|A\mathbf{u}_{\theta}^{n}\|_{L^{2}(\Omega)}^{2} +\frac{\mu S}{8}\|\Delta\mathbf{B}_{\theta}^{n}\|_{L^{2}(\Omega)}^{2}+C\big{(} \|\mathbf{u}_{\theta}^{n}\|_{L^{2}(\Omega)}^{2}\|\nabla\mathbf{u}_{\theta}\|_{ L^{2}(\Omega)}^{2}\] \[\quad+\|\mathbf{B}_{\theta}^{n}\|_{L^{2}(\Omega)}^{2}\|\nabla \mathbf{B}_{\theta}^{n}\|_{L^{2}(\Omega)}^{2}\big{)}\big{(}\|\nabla\mathbf{u}_{ \theta}^{n}\|_{L^{2}(\Omega)}+\|\nabla\mathbf{B}_{\theta}^{n}\|_{L^{2}(\Omega)} \big{)}.\]
Combining the above inequalities with (22), we obtain
\[\frac{d}{dt}\big{(}\|\nabla\mathbf{u}_{\theta}^{n}\|_{L^{2}(\Omega)} ^{2}+Sc_{0}\|\nabla\mathbf{B}_{\theta}^{n}\|_{L^{2}(\Omega)}^{2}\big{)}+\nu\|A \mathbf{u}_{\theta}^{n}\|_{L^{2}(\Omega)}^{2}+\mu S\|\Delta\mathbf{B}_{\theta}^{n} \|_{L^{2}(\Omega)}^{2}\] \[\leq C\|\mathbf{f}\|_{L^{2}(\Omega)}^{2}+\big{(}\|\mathbf{u}_{ \theta}^{n}\|_{L^{2}(\Omega)}^{2}\|\nabla\mathbf{u}_{\theta}^{n}\|_{L^{2}(\Omega)} ^{2}+\|\mathbf{B}_{\theta}^{n}\|_{L^{2}(\Omega)}^{2}\|\nabla\mathbf{B}_{\theta}^{n} \|_{L^{2}(\Omega)}^{2}\big{)}\] \[\quad\times\big{(}\|\nabla\mathbf{u}_{\theta}^{n}\|_{L^{2}(\Omega)}^{2 }+\|\nabla\mathbf{B}_{\theta}^{n}\|_{L^{2}(\Omega)}^{2}\big{)}.\]
Employing the Gronwall inequality, it follows that
\[\sup_{0\leq t\leq T}\bigl{(}\|\mathbf{u}_{\theta}^{n}(t)\|_{H^{1}( \Omega)}^{2}+\|\mathbf{B}_{\theta}^{n}(t)\|_{H^{1}(\Omega)}^{2}\bigr{)}+\int_{0} ^{T}\|\mathbf{u}_{\theta}^{n}\|_{H^{2}(\Omega)}^{2}dt \tag{23}\] \[+\int_{0}^{T}\|\mathbf{B}_{\theta}^{n}\|_{H^{2}(\Omega)}^{2}dt \leq C.\]
Differentiating both sides of (20) with respect to \(t\), we have
\[(\frac{d^{2}\mathbf{u}_{\theta}^{n}}{dt^{2}},\mathbf{v})+a(\frac{ d\mathbf{u}_{\theta}^{n}}{dt},\mathbf{v})+b(\frac{d\mathbf{u}_{\theta}^{n}}{dt}, \mathbf{u}_{\theta},\mathbf{v})+b(\mathbf{u}_{\theta}^{n},\frac{d\mathbf{u}_ {\theta}^{n}}{dt},\mathbf{v}) \tag{24a}\] \[+Sc_{\widehat{B}}(\frac{d\mathbf{B}_{\theta}^{n}}{dt},\mathbf{B}_ {\theta},\mathbf{v})+Sc_{\widehat{B}}(\mathbf{B}_{\theta}^{n},\frac{d\mathbf{ B}_{\theta}^{n}}{dt},\mathbf{v})-d(\mathbf{v},\frac{dp_{\theta}^{n}}{dt})+d( \frac{d\mathbf{u}_{\theta}^{n}}{dt},q)=(\mathbf{f}_{t},\mathbf{v}),\] \[(\frac{d^{2}\mathbf{B}_{\theta}^{n}}{dt^{2}},\mathbf{H})+a_{B}( \frac{d\mathbf{B}_{\theta}^{n}}{dt},\mathbf{H})-c_{\widetilde{B}}(\frac{d \mathbf{u}_{\theta}^{n}}{dt},\mathbf{B}_{\theta}^{n},\mathbf{H})-c_{ \widetilde{B}}(\mathbf{u}_{\theta}^{n},\frac{d\mathbf{B}_{\theta}^{n}}{dt}, \mathbf{H})=0, \tag{24b}\]
for all \((\mathbf{v},\mathbf{H},q)\in\mathfrak{F}_{N}\).
Taking \((\mathbf{v},\mathbf{H},q)=(\frac{d\mathbf{u}_{\theta}^{n}}{dt},\frac{d\mathbf{ B}_{\theta}^{n}}{dt},\frac{dp_{\theta}^{n}}{dt})\) in (24), one finds that
\[\frac{1}{2}\frac{d}{dt}\Bigl{(}\Bigl{\|}\frac{d\mathbf{u}_{\theta}^{n}}{dt} \Bigr{\|}_{L^{2}(\Omega)}^{2}+S\Bigl{\|}\frac{d\mathbf{B}_{\theta}^{n}}{dt} \Bigr{\|}_{L^{2}(\Omega)}^{2}\Bigr{)}+\nu\Bigl{\|}\nabla\frac{d\mathbf{u}_{ \theta}^{n}}{dt}\Bigr{\|}_{L^{2}(\Omega)}^{2}+\mu Sc_{0}\Bigl{\|}\nabla\frac{d \mathbf{B}_{\theta}^{n}}{dt}\Bigr{\|}_{L^{2}(\Omega)}^{2} \tag{25}\]
\[\leq|(\mathbf{f}_{t},\frac{d\mathbf{u}_{\theta}^{n}}{dt})|+|b(\frac{d\mathbf{ u}_{\theta}^{n}}{dt},\mathbf{u}_{\theta}^{n},\frac{d\mathbf{u}_{\theta}}{dt})|+|Sc_{ \widehat{B}}(\mathbf{B}_{\theta}^{n},\frac{d\mathbf{B}_{\theta}^{n}}{dt}, \frac{d\mathbf{u}_{\theta}^{n}}{dt})|+|Sc_{\widetilde{B}}(\mathbf{u}_{\theta }^{n},\frac{d\mathbf{B}_{\theta}^{n}}{dt},\frac{d\mathbf{B}_{\theta}^{n}}{dt})|.\]
Making use of the Hölder, Young and embedding inequalities, we derive
\[|(\mathbf{f}_{t},\frac{d\mathbf{u}_{\theta}^{n}}{dt})| \leq C\|\mathbf{f}_{t}\|_{L^{2}(\Omega)}^{2}+\frac{\nu}{6}\Bigl{\|} \frac{d\mathbf{u}_{\theta}^{n}}{dt}\Bigr{\|}_{L^{2}(\Omega)}^{2},\] \[|b(\frac{d\mathbf{u}_{\theta}^{n}}{dt},\mathbf{u}_{\theta},\frac{d \mathbf{u}_{\theta}}{dt})| \leq C\Bigl{\|}\frac{d\mathbf{u}_{\theta}^{n}}{dt}\Bigr{\|}_{L^{4} (\Omega)}\Bigl{\|}\frac{d\mathbf{u}_{\theta}^{n}}{dt}\Bigr{\|}_{L^{2}(\Omega) }\Bigl{\|}\nabla\mathbf{u}_{\theta}^{n}\|_{L^{4}(\Omega)}\] \[\leq\frac{\nu}{6}\Bigl{\|}\nabla\frac{d\mathbf{u}_{\theta}^{n}}{ dt}\Bigr{\|}_{L^{2}(\Omega)}^{2}+C\Bigl{\|}\frac{d\mathbf{u}_{\theta}^{n}}{dt} \Bigr{\|}_{L^{2}(\Omega)}\Bigl{\|}\mathbf{u}_{\theta}\|_{H^{2}(\Omega)},\] \[|Sc_{\widehat{B}}(\mathbf{B}_{\theta},\frac{d\mathbf{B}_{\theta}}{ dt},\frac{d\mathbf{u}_{\theta}}{dt})| \leq C\Bigl{\|}\frac{d\mathbf{u}_{\theta}}{dt}\Bigr{\|}_{L^{4}( \Omega)}\Bigl{\|}\frac{d\mathbf{B}_{\theta}}{dt}\Bigr{\|}_{L^{2}(\Omega)} \Bigl{\|}\nabla\mathbf{B}_{\theta}^{n}\|_{L^{4}(\Omega)}\] \[\leq\frac{\nu}{6}\Bigl{\|}\nabla\frac{d\mathbf{u}_{\theta}^{n}}{ dt}\Bigr{\|}_{L^{2}(\Omega)}^{2}+C\Bigl{\|}\frac{d\mathbf{B}_{\theta}^{n}}{dt} \Bigr{\|}_{L^{2}(\Omega)}\Bigl{\|}\mathbf{B}_{\theta}^{n}\|_{H^{2}(\Omega)},\] \[|Sc_{\widetilde{B}}(\mathbf{u}_{\theta}^{n},\frac{d\mathbf{B}_{ \theta}^{n}}{dt},\frac{d\mathbf{B}_{\theta}^{n}}{dt})| \leq C\|\mathbf{u}_{\theta}^{n}\|_{L^{\infty}(\Omega)}\Bigl{\|} \frac{d\mathbf{B}_{\theta}^{n}}{dt}\Bigr{\|}_{L^{2}(\Omega)}\Bigl{\|}\nabla \frac{d\mathbf{B}_{\theta}^{n}}{dt}\|_{L^{2}(\Omega)}\] \[\leq\frac{\mu S}{6}\Bigl{\|}\nabla\frac{d\mathbf{B}_{\theta}^{n}}{ dt}\Bigr{\|}_{L^{2}(\Omega)}^{2}+C\Bigl{\|}\frac{d\mathbf{B}_{\theta}^{n}}{dt} \Bigr{\|}_{L^{2}(\Omega)}\Bigl{\|}\mathbf{u}_{\theta}^{n}\|_{H^{2}(\Omega)}.\]
Combining the above inequalities with (25) yields
\[\frac{d}{dt}\Bigl{(}\Bigl{\|}\frac{d\mathbf{u}_{\theta}^{n}}{dt} \Bigr{\|}_{L^{2}(\Omega)}^{2}+S\Bigl{\|}\frac{d\mathbf{B}_{\theta}^{n}}{dt} \Bigr{\|}_{L^{2}(\Omega)}^{2}\Bigr{)}+\nu\Bigl{\|}\nabla\frac{d\mathbf{u}_{ \theta}^{n}}{dt}\Bigr{\|}_{L^{2}(\Omega)}^{2}+\mu Sc_{0}\Bigl{\|}\nabla\frac{d \mathbf{B}_{\theta}^{n}}{dt}\Bigr{\|}_{L^{2}(\Omega)}^{2}\] \[\leq C\|\mathbf{f}_{t}\|_{L^{2}(\Omega)}^{2}+C\bigl{(}\|\mathbf{u }_{\theta}^{n}\|_{H^{2}(\Omega)}^{2}+\|\mathbf{B}_{\theta}^{n}\|_{H^{2}(\Omega)}^{2} \bigr{)}\Bigl{(}\Bigl{\|}\frac{d\mathbf{u}_{\theta}^{n}}{dt}\Bigr{\|}_{L^{2}( \Omega)}^{2}+S\Bigl{\|}\frac{d\mathbf{B}_{\theta}^{n}}{dt}\Bigr{\|}_{L^{2}( \Omega)}^{2}\Bigr{)}.\]
By applying the Gronwall inequality, we obtain
\[\sup_{0\leq t\leq T}\Bigl{(}\Bigl{\|}\frac{d\mathbf{u}_{\theta}^{n}}{ dt}\Bigr{\|}_{L^{2}(\Omega)}^{2} +\Bigl{\|}\frac{d\mathbf{B}_{\theta}^{n}}{dt}\Bigr{\|}_{L^{2}( \Omega)}^{2}\Bigr{)}+\int_{0}^{T}\Bigl{\|}\nabla\frac{d\mathbf{u}_{\theta}^{n}} {dt}\Bigr{\|}_{L^{2}(\Omega)}^{2}dt \tag{26}\] \[+\int_{0}^{T}\Bigl{\|}\nabla\frac{d\mathbf{B}_{\theta}^{n}}{dt} \Bigr{\|}_{L^{2}(\Omega)}^{2}dt\leq C.\]
From (21), (23) and (26), we obtain that \(\{\mathbf{u}_{\theta}^{n}\}\), \(\{\frac{d\mathbf{u}_{\theta}^{n}}{dt}\}\), \(\{\mathbf{B}_{\theta}^{n}\}\) and \(\{\frac{d\mathbf{B}_{\theta}^{n}}{dt}\}\) are uniformly bounded in \(L^{2}([0,T],H^{1}(\Omega))\). Applying the Aubin-Lions compactness lemma, there exists a subsequence of \(\{\mathbf{u}_{\theta}^{n}\}\), \(\{\frac{d\mathbf{u}_{\theta}^{n}}{dt}\}\), \(\{\mathbf{B}_{\theta}^{n}\}\) and \(\{\frac{d\mathbf{B}_{\theta}^{n}}{dt}\}\) (still denoted by \(\{\mathbf{u}_{\theta}^{n}\}\), \(\{\frac{d\mathbf{u}_{\theta}^{n}}{dt}\}\), \(\{\mathbf{B}_{\theta}^{n}\}\) and \(\{\frac{d\mathbf{B}_{\theta}^{n}}{dt}\}\)) which converges to some \(\mathbf{u}\in L^{\infty}([0,T],L^{2}(\Omega))\cap L^{2}([0,T],\mathcal{X})\) and \(\mathbf{B}\in L^{\infty}([0,T],L^{2}(\Omega))\cap L^{2}([0,T],\mathcal{W})\) such that
\[\mathbf{u}_{\theta}^{n}\rightarrow\mathbf{u}\qquad\text{in}\qquad L^{2}([0,T ],L^{2}(\Omega)),\]
and
\[\mathbf{B}_{\theta}^{n}\rightarrow\mathbf{B}\qquad\text{in}\qquad L^{2}([0,T ],L^{2}(\Omega)).\]
Finally, we are ready to pass to the limit as \(n\rightarrow\infty\) in the weak sense; it is not difficult to show that \((\mathbf{u},\mathbf{B})\) satisfies (20) in a weak formulation. This shows that \(\mathbf{u}_{\theta}^{n}\) and \(\mathbf{B}_{\theta}^{n}\) converge strongly to \(\mathbf{u}\) and \(\mathbf{B}\) in \(L^{2}([0,T],L^{2}(\Omega))\). Similarly, using (6), we can derive that \(p_{\theta}^{n}\) converges strongly to \(p\) in \(L^{2}([0,T],L^{2}(\Omega))\). The desired result is derived. The proof is completed.
## 4 Convergence rates and stability of DNN
In this section, we derive some convergence rates and stability of the DNN for problem (1)-(3). Assume that a DNN solution \((\mathbf{u}_{\theta},\mathbf{B}_{\theta},p_{\theta})\) of problem (9) satisfies
\[\frac{d\mathbf{u}_{\theta}}{dt}+\nu A_{f}\mathbf{u}_{\theta}+B[ \mathbf{u}_{\theta},\mathbf{u}_{\theta}]+SC_{f}[\mathbf{B}_{\theta},\mathbf{B }_{\theta}]=\mathbb{Pf}, \tag{27}\] \[\frac{d\mathbf{B}_{\theta}}{dt}+\mu A_{B}\mathbf{B}_{\theta}-C_{B }[\mathbf{u}_{\theta},\mathbf{B}_{\theta}]=0, \tag{28}\]
where \(\mathbb{P}\) and \(\mathbb{Q}\) are Leray projections, and \(A_{f}:=-\mathbb{P}\Delta\) and \(A_{B}:=\mathbb{Q}\mathrm{curl}\,\mathrm{curl}\) are the Stokes operator and the Maxwell operator, respectively. \(B[\mathbf{u},\mathbf{u}]\), \(C_{f}[\mathbf{B},\mathbf{B}]\) and \(C_{B}[\mathbf{u},\mathbf{B}]\) are defined as follows:
\[B[\mathbf{u},\mathbf{u}]:=\mathbb{P}[(\mathbf{u}\cdot\nabla)\mathbf{u}],\quad C_{f}[\mathbf{B},\mathbf{B}]:=\mathbb{P}[\mathbf{B}\times\mathrm{curl}\,\mathbf{B}],\quad C_{B}[\mathbf{u},\mathbf{B}]:=\mathbb{Q}[\mathrm{curl}\,(\mathbf{u}\times\mathbf{B})].\]
Here we use a technique similar to [3]; thus we introduce the Hodge decomposition. The main idea of the Hodge decomposition is to decompose a vector \(\mathbf{w}\in L^{2}(\Omega)\) uniquely into a divergence-free part \(\mathbf{w}^{1}\) and an irrotational part \(\mathbf{w}^{2}\), which is orthogonal in \(L^{2}(\Omega)\) to \(\mathbf{w}^{1}\), i.e.,
\[\mathbf{w}=\mathbf{w}^{1}+\mathbf{w}^{2},\qquad\nabla\cdot\mathbf{w}^{1}=0, \qquad(\mathbf{w}^{1},\mathbf{w}^{2})=0. \tag{29}\]
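As a simple concrete instance of (29) (our illustration, not part of the original text), consider a pure gradient field, whose divergence-free part vanishes:
\[\mathbf{w}=\nabla\chi\ \text{ with }\ \chi=\tfrac{1}{2}(x^{2}+y^{2}),\qquad\text{so that}\qquad\mathbf{w}=(x,y)^{\mathrm{T}},\quad\mathbf{w}^{1}=\mathbf{0},\quad\mathbf{w}^{2}=\nabla\chi,\]
since \((\mathbf{v},\nabla\chi)=\int_{\partial\Omega}\chi\,\mathbf{v}\cdot\mathbf{n}\,ds-\int_{\Omega}\chi\,\mathrm{div}\,\mathbf{v}\,d\mathbf{x}=0\) for every divergence-free \(\mathbf{v}\) with vanishing normal trace, i.e., every gradient field is \(L^{2}(\Omega)\)-orthogonal to the divergence-free part.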
Consider an approximate solution \((\mathbf{u}_{\theta},\mathbf{B}_{\theta},p_{\theta})\in\mathfrak{F}_{N}\) and denote
\[\mathfrak{L}_{f}[\mathbf{u}_{\theta},\mathbf{B}_{\theta},p_{\theta}] =\widehat{\mathbf{f}}, \tag{30}\] \[\mathfrak{L}_{B}[\mathbf{u}_{\theta},\mathbf{B}_{\theta}] =\mathbf{0},\] (31) \[\nabla\cdot\mathbf{u}_{\theta} =g,\] (32) \[\nabla\cdot\mathbf{B}_{\theta} =h. \tag{33}\]
Applying the Hodge decomposition on \((\mathbf{u}_{\theta},\mathbf{B}_{\theta})\), we have
\[\mathbf{u}_{\theta} =\mathbb{P}\mathbf{u}_{\theta}+(\mathbb{I}-\mathbb{P})\mathbf{u }_{\theta}=:\mathbf{u}_{\theta}^{1}+\mathbf{u}_{\theta}^{2}, \tag{34}\] \[\mathbf{B}_{\theta} =\mathbb{Q}\mathbf{B}_{\theta}+(\mathbb{I}-\mathbb{Q})\mathbf{B }_{\theta}=:\mathbf{B}_{\theta}^{1}+\mathbf{B}_{\theta}^{2}, \tag{35}\]
**Theorem 3**: _Assume that \((\mathbf{u},\mathbf{B},p)\) is a strong solution of problem (1)-(3) and that \((\mathbf{u}_{\theta},\mathbf{B}_{\theta},p_{\theta})\) is such that_
\[\frac{d\mathbf{u}_{\theta}^{1}}{dt}+\nu A_{f}\mathbf{u}_{\theta} ^{1}+B[\mathbf{u}_{\theta}^{1},\mathbf{u}_{\theta}^{1}]+SC_{f}[\mathbf{B}_{ \theta}^{1},\mathbf{B}_{\theta}^{1}]=\mathbb{P}\boldsymbol{f}+\Lambda, \tag{36}\] \[\frac{dB_{\theta}^{1}}{dt}+\mu A_{B}\mathbf{B}_{\theta}^{1}-C_{B} [\mathbf{u}_{\theta}^{1},\mathbf{B}_{\theta}^{1}]=\Pi, \tag{37}\]
_where_
\[\int_{0}^{T}\|\Lambda\|_{H^{-1}}^{2}dt+\int_{0}^{T}\|\Pi\|_{\mathcal{W}^{-1}} ^{2}dt\leq C\epsilon, \tag{38}\]
_and assume_
\[\|\mathbf{u}_{\theta,0}^{1}-\mathbf{u}_{0}\|_{L^{2}(\Omega)}^{2}+\|\mathbf{B} _{\theta,0}^{1}-\mathbf{B}_{0}\|_{L^{2}(\Omega)}^{2}\leq C\epsilon, \tag{39}\]
_then the following bound holds_
\[\sup_{t\in[0,T]}\left(\|\mathbf{u}(t)-\mathbf{u}_{\theta}^{1}(t)\|_{L^{2}( \Omega)}^{2}+\|\mathbf{B}(t)-\mathbf{B}_{\theta}^{1}(t)\|_{L^{2}(\Omega)}^{2} \right)\leq C\epsilon. \tag{40}\]
_Proof:_ Denoting \(\mathbf{w}:=\mathbf{u}-\mathbf{u}_{\theta}^{1}\) and \(\mathbf{H}:=\mathbf{B}-\mathbf{B}_{\theta}^{1}\), we can obtain the following error equations:
\[\frac{d\mathbf{w}}{dt}+\nu A_{f}\mathbf{w}+B[\mathbf{u},\mathbf{u }]-B[\mathbf{u}_{\theta}^{1},\mathbf{u}_{\theta}^{1}]+SC_{f}[\mathbf{B}, \mathbf{B}]-SC_{f}[\mathbf{B}_{\theta}^{1},\mathbf{B}_{\theta}^{1}]=\Lambda, \tag{41}\] \[\frac{d\mathbf{H}}{dt}+\mu A_{B}\mathbf{H}-C_{B}[\mathbf{u}, \mathbf{B}]+C_{B}[\mathbf{u}_{\theta}^{1},\mathbf{B}_{\theta}^{1}]=\Pi. \tag{42}\]
Since
\[B[\mathbf{u},\mathbf{u}]-B[\mathbf{u}_{\theta}^{1},\mathbf{u}_{ \theta}^{1}]=B[\mathbf{u},\mathbf{w}]+B[\mathbf{w},\mathbf{u}]-B[\mathbf{w}, \mathbf{w}], \tag{43}\] \[SC_{f}[\mathbf{B},\mathbf{B}]-SC_{f}[\mathbf{B}_{\theta}^{1}, \mathbf{B}_{\theta}^{1}]=SC_{f}[\mathbf{B},\mathbf{H}]+SC_{f}[\mathbf{H}, \mathbf{B}]-SC_{f}[\mathbf{H},\mathbf{H}],\] (44) \[C_{B}[\mathbf{u},\mathbf{B}]-C_{B}[\mathbf{u}_{\theta}^{1}, \mathbf{B}_{\theta}^{1}]=C_{B}[\mathbf{u},\mathbf{H}]+C_{B}[\mathbf{w}, \mathbf{B}]-C_{B}[\mathbf{w},\mathbf{H}]. \tag{45}\]
Taking the inner product of (41)-(42) with \((\mathbf{w},\mathbf{H})\), one finds that
\[\frac{1}{2}\frac{d}{dt}\|\mathbf{w}\|_{L^{2}(\Omega)}^{2}+\frac{S}{ 2}\frac{d}{dt}\|\mathbf{H}\|_{L^{2}(\Omega)}^{2}+\nu\|\nabla\mathbf{w}\|_{L^{2} (\Omega)}^{2}+\mu S\|\text{curl}\,\mathbf{H}\|_{L^{2}(\Omega)}^{2} \tag{46}\] \[\quad+b(\mathbf{w},\mathbf{u},\mathbf{w})+Sc_{\widetilde{B}}( \mathbf{H},\mathbf{B},\mathbf{w})-Sc_{\widehat{B}}(\mathbf{w},\mathbf{B}, \mathbf{H})=-(\Lambda,\mathbf{w})-S(\Pi,\mathbf{H}).\]
Then it follows that
\[\frac{1}{2}\frac{d}{dt}\|\mathbf{w}\|_{L^{2}(\Omega)}^{2}+\frac{S }{2}\frac{d}{dt}\|\mathbf{H}\|_{L^{2}(\Omega)}^{2}+\nu\|\nabla\mathbf{w}\|_{L ^{2}(\Omega)}^{2}+S\mu\|\text{curl}\,\mathbf{H}\|_{L^{2}(\Omega)}^{2} \tag{47}\] \[\leq|(\Lambda,\mathbf{w})|+S|(\Pi,\mathbf{H})|+|b(\mathbf{w}, \mathbf{u},\mathbf{w})|+|Sc_{\widetilde{B}}(\mathbf{H},\mathbf{B},\mathbf{w} )|+S|c_{\widehat{B}}(\mathbf{w},\mathbf{B},\mathbf{H})|.\]
Thanks to the Hölder, Young and embedding inequalities, we bound the terms as follows:
\[|(\Lambda,\mathbf{w})| \leq C\|\Lambda\|_{H^{-1}}^{2}+\frac{\nu}{8}\|\nabla\mathbf{w}\|_ {L^{2}(\Omega)}^{2},\] \[|S(\Pi,\mathbf{H})| \leq C\|\Pi\|_{\mathcal{W}^{-1}}^{2}+\frac{\mu S}{8}\|\text{curl} \,\mathbf{H}\|_{L^{2}(\Omega)}^{2},\] \[|b(\mathbf{w},\mathbf{u},\mathbf{w})| \leq C\|\mathbf{w}\|_{L^{2}(\Omega)}^{2}\|\nabla\mathbf{u}\|_{L^{ 2}(\Omega)}^{2}+\frac{\nu}{8}\|\nabla\mathbf{w}\|_{L^{2}(\Omega)}^{2},\] \[|Sc_{\widetilde{B}}(\mathbf{H},\mathbf{B},\mathbf{w})| \leq C\|\mathbf{H}\|_{L^{2}(\Omega)}^{2}\|\text{curl}\,\mathbf{B} \|_{L^{2}(\Omega)}^{2}+\frac{\nu}{8}\|\nabla\mathbf{w}\|_{L^{2}(\Omega)}^{2},\] \[|Sc_{\widehat{B}}(\mathbf{w},\mathbf{B},\mathbf{H})| \leq C\|\mathbf{w}\|_{L^{2}(\Omega)}^{2}\|\text{curl}\,\mathbf{B} \|_{L^{2}(\Omega)}^{2}+\frac{\mu S}{8}\|\text{curl}\,\mathbf{H}\|_{L^{2}(\Omega )}^{2}.\]
Combining the above inequalities with (47), we derive
\[\frac{d}{dt}\big{(}\|\mathbf{w}\|_{L^{2}(\Omega)}^{2}+\|\mathbf{H }\|_{L^{2}(\Omega)}^{2}\big{)} -C\big{(}\|\nabla\mathbf{u}\|_{L^{2}(\Omega)}^{2}+\|\text{curl} \,\mathbf{B}\|_{L^{2}(\Omega)}^{2}\big{)} \tag{48}\] \[\times\big{(}\|\mathbf{w}\|_{L^{2}(\Omega)}^{2}+\|\mathbf{H}\|_{L ^{2}(\Omega)}^{2}\big{)}\leq C\big{(}\|\Lambda\|_{H^{-1}}^{2}+\|\Pi\|_{ \mathcal{W}^{-1}}^{2}\big{)}.\]
Applying the Gronwall inequality, one finds that
\[\|\mathbf{w}(t)\|_{L^{2}(\Omega)}^{2}+\|\mathbf{H}(t)\|_{L^{2}(\Omega)}^{2} \tag{49}\] \[\leq\exp\bigl{[}\int_{0}^{t}C\big{(}\|\nabla\mathbf{u}\|_{L^{2}(\Omega)}^{2}+\|\text{curl}\,\mathbf{B}\|_{L^{2}(\Omega)}^{2}\big{)}ds\bigr{]}\big{(}\|\mathbf{w}(0)\|_{L^{2}(\Omega)}^{2}+\|\mathbf{H}(0)\|_{L^{2}(\Omega)}^{2}\big{)}\] \[\quad+C\int_{0}^{t}\exp\big{(}\int_{s}^{t}C\big{(}\|\nabla\mathbf{u}\|_{L^{2}(\Omega)}^{2}+\|\text{curl}\,\mathbf{B}\|_{L^{2}(\Omega)}^{2}\big{)}d\tau\big{)}\big{(}\|\Lambda\|_{H^{-1}}^{2}+\|\Pi\|_{\mathcal{W}^{-1}}^{2}\big{)}ds.\]
Moreover, we have
\[\exp\bigl{[}\int_{0}^{t}C\big{(}\|\nabla\mathbf{u}\|_{L^{2}(\Omega )}^{2}+\|\text{curl}\,\mathbf{B}\|_{L^{2}(\Omega)}^{2}\big{)}ds\bigr{]} \leq C\] \[\exp\bigl{[}-\int_{0}^{s}C\big{(}\|\nabla\mathbf{u}\|_{L^{2}( \Omega)}^{2}+\|\text{curl}\,\mathbf{B}\|_{L^{2}(\Omega)}^{2}\big{)}d\tau\bigr{]} \leq 1,\] \[\|\mathbf{w}(0)\|_{L^{2}(\Omega)}^{2}+\|\mathbf{H}(0)\|_{L^{2}( \Omega)}^{2} \leq C\epsilon^{2}.\]
Then we obtain
\[\sup_{t\in[0,T]}\bigl{(}\|\mathbf{u}(t)-\mathbf{u}^{1}_{\theta}(t)\|^{2}_{L^{2}( \Omega)}+\|\mathbf{B}(t)-\mathbf{B}^{1}_{\theta}(t)\|^{2}_{L^{2}(\Omega)}\bigr{)} \leq C\epsilon. \tag{50}\]
The proof is completed.
**Lemma 2**: _Assume that_
\[\|\mathbf{u}^{1}_{\theta}|_{\partial\Omega}\|^{4}_{L^{4}([0,T],H^{\frac{1}{2}}(\partial\Omega))}+\|\mathbf{B}^{1}_{\theta}\cdot\mathbf{n}|_{\partial\Omega}\|^{4}_{L^{4}([0,T],H^{\frac{1}{2}}(\partial\Omega))}+\|\nabla\cdot\mathbf{u}_{\theta}\|^{4}_{L^{4}([0,T],L^{2}(\Omega))}+\|\nabla\cdot\mathbf{B}_{\theta}\|^{4}_{L^{4}([0,T],L^{2}(\Omega))}\leq C\epsilon^{2}, \tag{51}\]
_and applying the Hodge decomposition (34)-(35), we have_
\[\|\mathbf{u}^{2}_{\theta}\|^{4}_{L^{4}([0,T],\mathcal{X})}+\|\mathbf{B}^{2}_{\theta}\| ^{4}_{L^{4}([0,T],\mathcal{W})}\leq C\epsilon^{2}. \tag{52}\]
_Proof:_ Following similar lines as in [3], we omit the proof here.
**Theorem 4**: _Assume that \((\mathbf{u},\mathbf{B},p)\) is a strong solution of (1)-(3) and that \((\mathbf{u}_{\theta},\mathbf{B}_{\theta},p_{\theta})\) is such that_
\[\|\mathbf{u}_{\theta}|_{\partial\Omega}\|^{4}_{L^{4}([0,T],H^{\frac{1} {2}}(\partial\Omega))}+\|\mathbf{u}_{\theta,0}-\mathbf{u}_{0}\|^{2}_{L^{2}(\Omega)} \tag{53}\] \[\quad+\|\mathfrak{L}_{f}[\mathbf{u}_{\theta},\mathbf{B}_{\theta},p_{ \theta}]\|^{2}_{L^{2}([0,T]\times\Omega)}+\|\mathfrak{L}_{B}[\mathbf{u}_{\theta}, \mathbf{B}_{\theta}]\|^{2}_{L^{2}([0,T]\times\Omega)}\] \[\quad+\|\mathbf{B}_{\theta,0}-\mathbf{B}_{0}\|^{2}_{L^{2}(\Omega)}+\|\bm {B}_{\theta}\cdot\mathbf{n}|_{\partial\Omega}\|^{4}_{L^{4}([0,T],H^{\frac{1}{2}}( \partial\Omega))}\] \[\quad+\|\nabla\cdot\mathbf{u}_{\theta}\|^{4}_{L^{4}([0,T],L^{2}( \Omega))}+\|\nabla\cdot\mathbf{B}_{\theta}\|^{4}_{L^{4}([0,T],L^{2}(\Omega))}\] \[\quad+\|\mathbf{u}_{\theta}\|^{4}_{L^{4}([0,T],H^{1}(\Omega))}+\|\bm {B}_{\theta}\|^{4}_{L^{4}([0,T],H^{1}(\Omega))}\leq C\epsilon^{2}.\]
_Then we have_
\[\|\mathbf{u}-\mathbf{u}_{\theta}\|^{4}_{L^{2}(\Omega_{T})}+\|\mathbf{B}-\mathbf{B}_{\theta}\| ^{4}_{L^{2}(\Omega_{T})}\leq C\epsilon^{2}. \tag{54}\]
_Proof:_ Let \((\mathbf{u}^{1}_{\theta},\mathbf{B}^{1}_{\theta})\) satisfy the following equations:
\[\frac{d\mathbf{u}^{1}_{\theta}}{dt} +\nu A_{f}\mathbf{u}^{1}_{\theta}+B[\mathbf{u}_{\theta},\mathbf{u }_{\theta}]-B[\mathbf{u}^{1}_{\theta},\mathbf{u}^{1}_{\theta}] \tag{55}\] \[+SC_{f}[\mathbf{B},\mathbf{B}]-SC_{f}[\mathbf{B}^{1}_{\theta}, \mathbf{B}^{1}_{\theta}]=\mathbf{f}+\mathbb{P}\widehat{\mathbf{f}},\] \[\frac{d\mathbf{B}^{1}_{\theta}}{dt} +\mu A_{B}\mathbf{B}^{1}_{\theta}-C_{B}[\mathbf{u}_{\theta}, \mathbf{B}_{\theta}]+C_{B}[\mathbf{u}^{1}_{\theta},\mathbf{B}^{1}_{\theta}]=0, \tag{56}\]
For the nonlinear terms, adding and subtracting some terms, we can rewrite
\[B[\mathbf{u}_{\theta},\mathbf{u}_{\theta}]-B[\mathbf{u}^{1}_{ \theta},\mathbf{u}^{1}_{\theta}]=B[\mathbf{u}^{2}_{\theta},\mathbf{u}_{\theta}] +B[\mathbf{u}^{1}_{\theta},\mathbf{u}^{2}_{\theta}]:=\Psi_{1}, \tag{57}\] \[C_{f}[\mathbf{B}_{\theta},\mathbf{B}_{\theta}]-C_{f}[\mathbf{B}^ {1}_{\theta},\mathbf{B}^{1}_{\theta}]=C_{f}[\mathbf{B}^{2}_{\theta},\mathbf{B} _{\theta}]+C_{f}[\mathbf{B}^{1}_{\theta},\mathbf{B}^{2}_{\theta}]:=\Psi_{2},\] (58) \[C_{B}[\mathbf{u}_{\theta},\mathbf{B}_{\theta}]-C_{B}[\mathbf{u}^ {1}_{\theta},\mathbf{B}^{1}_{\theta}]=C_{B}[\mathbf{u}^{2}_{\theta},\mathbf{B} _{\theta}]+C_{B}[\mathbf{u}^{1}_{\theta},\mathbf{B}^{2}_{\theta}]:=\Psi_{3}. \tag{59}\]
We will estimate \(\int_{0}^{T}\|\Psi_{1}\|_{H^{-1}}^{2}dt\), \(\int_{0}^{T}\|\Psi_{2}\|_{H^{-1}}^{2}dt\) and \(\int_{0}^{T}\|\Psi_{3}\|_{\mathcal{W}^{-1}}^{2}dt\) as follows. Note that
\[\|\Psi_{1}\|_{H^{-1}} =\sup_{\mathbf{w}\in\mathcal{X},\|\nabla\mathbf{w}\|_{L^{2}( \Omega)}\leq 1}\bigl{[}b(\mathbf{u}_{\theta}^{2},\mathbf{u}_{\theta},\mathbf{w})+ b(\mathbf{u}_{\theta}^{1},\mathbf{u}_{\theta}^{2},\mathbf{w})\bigr{]}, \tag{60}\] \[\|\Psi_{2}\|_{H^{-1}} =\sup_{\mathbf{w}\in\mathcal{X},\|\nabla\mathbf{w}\|_{L^{2}( \Omega)}\leq 1}\bigl{[}c_{\widetilde{B}}(\mathbf{B}_{\theta}^{2},\mathbf{B}_{ \theta},\mathbf{w})+c_{\widetilde{B}}(\mathbf{B}_{\theta}^{1},\mathbf{B}_{ \theta}^{2},\mathbf{w})\bigr{]},\] (61) \[\|\Psi_{3}\|_{\mathcal{W}^{-1}} =\sup_{\mathbf{H}\in\mathcal{W},\|\text{curl}\,\mathbf{H}\|_{L^{2 }(\Omega)}\leq 1}\bigl{[}c_{\widehat{B}}(\mathbf{u}_{\theta}^{2},\mathbf{B}_{ \theta},\mathbf{H})+c_{\widehat{B}}(\mathbf{u}_{\theta}^{1},\mathbf{B}_{ \theta}^{2},\mathbf{H})\bigr{]}. \tag{62}\]
We bound the term (60)-(62) as follows:
\[b(\mathbf{u}_{\theta}^{2},\mathbf{u}_{\theta},\mathbf{w}) \leq C\|\mathbf{u}_{\theta}^{2}\|_{L^{4}(\Omega)}\|\nabla\mathbf{u }_{\theta}\|_{L^{2}(\Omega)}\|\mathbf{w}\|_{L^{4}(\Omega)}\] \[\leq C\|\mathbf{u}_{\theta}^{2}\|_{L^{2}(\Omega)}^{\frac{1}{2}} \|\nabla\mathbf{u}_{\theta}^{2}\|_{L^{2}(\Omega)}^{\frac{1}{2}}\|\nabla \mathbf{u}_{\theta}\|_{L^{2}(\Omega)}\|\nabla\mathbf{w}\|_{L^{2}(\Omega)},\] \[b(\mathbf{u}_{\theta}^{1},\mathbf{u}_{\theta}^{2},\mathbf{w}) \leq C\|\mathbf{u}_{\theta}^{1}\|_{L^{4}(\Omega)}\|\nabla\mathbf{u }_{\theta}^{2}\|_{L^{2}(\Omega)}\|\mathbf{w}\|_{L^{4}(\Omega)}\] \[\leq C\|\mathbf{u}_{\theta}^{1}\|_{H^{1}(\Omega)}\|\nabla \mathbf{u}_{\theta}^{2}\|_{L^{2}(\Omega)}\|\nabla\mathbf{w}\|_{L^{2}(\Omega)},\]
\[Sc_{\widetilde{B}}(\mathbf{B}_{\theta}^{2},\mathbf{B}_{\theta}, \mathbf{w}) \leq C\|\mathbf{B}_{\theta}^{2}\|_{L^{4}(\Omega)}\|\nabla\mathbf{ B}_{\theta}\|_{L^{2}(\Omega)}\|\mathbf{w}\|_{L^{4}(\Omega)}\] \[\leq C\|\mathbf{B}_{\theta}^{2}\|_{L^{2}(\Omega)}^{\frac{1}{2}} \|\nabla\mathbf{B}_{\theta}^{2}\|_{L^{2}(\Omega)}^{\frac{1}{2}}\|\nabla \mathbf{B}_{\theta}\|_{L^{2}(\Omega)}\|\nabla\mathbf{w}\|_{L^{2}(\Omega)},\] \[Sc_{\widetilde{B}}(\mathbf{B}_{\theta}^{1},\mathbf{B}_{\theta}^{2}, \mathbf{w}) \leq C\|\mathbf{B}_{\theta}^{1}\|_{L^{4}(\Omega)}\|\nabla\mathbf{ B}_{\theta}^{2}\|_{L^{2}(\Omega)}\|\mathbf{w}\|_{L^{4}(\Omega)}\] \[\leq C\|\mathbf{B}_{\theta}^{1}\|_{H^{1}(\Omega)}\|\nabla\mathbf{ B}_{\theta}^{2}\|_{L^{2}(\Omega)}\|\nabla\mathbf{w}\|_{L^{2}(\Omega)},\]
and
\[c_{\widehat{B}}(\mathbf{u}_{\theta}^{2},\mathbf{B}_{\theta}, \mathbf{H}) \leq C\|\mathbf{u}_{\theta}^{2}\|_{L^{4}(\Omega)}\|\mathbf{B}_{ \theta}\|_{L^{4}(\Omega)}\|\text{curl}\,\mathbf{H}\|_{L^{2}(\Omega)}\] \[\leq C\|\mathbf{u}_{\theta}^{2}\|_{L^{2}(\Omega)}^{\frac{1}{2}} \|\nabla\mathbf{u}_{\theta}^{2}\|_{L^{2}(\Omega)}^{\frac{1}{2}}\|\mathbf{B}_{ \theta}\|_{H^{1}(\Omega)}\|\text{curl}\,\mathbf{H}\|_{L^{2}(\Omega)},\] \[c_{\widehat{B}}(\mathbf{u}_{\theta}^{1},\mathbf{B}_{\theta}^{2}, \mathbf{H}) \leq C\|\mathbf{u}_{\theta}^{1}\|_{L^{4}(\Omega)}\|\nabla\mathbf{ B}_{\theta}^{2}\|_{L^{2}(\Omega)}\|\text{curl}\,\mathbf{H}\|_{L^{2}(\Omega)}\] \[\leq C\|\nabla\mathbf{u}_{\theta}^{1}\|_{L^{2}(\Omega)}\|\mathbf{ B}_{\theta}^{2}\|_{H^{1}(\Omega)}\|\text{curl}\,\mathbf{H}\|_{L^{2}(\Omega)}.\]
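For reference, these bounds rely on the Sobolev embedding \(\|\mathbf{v}\|_{L^{4}(\Omega)}\leq C\|\mathbf{v}\|_{H^{1}(\Omega)}\) together with an interpolation inequality of Ladyzhenskaya type, written here in the two-dimensional form that matches the exponents used above (stated for the reader's convenience, under the assumption that this is the intended setting):

\[\|\mathbf{v}\|_{L^{4}(\Omega)}\leq C\,\|\mathbf{v}\|_{L^{2}(\Omega)}^{\frac{1}{2}}\,\|\nabla\mathbf{v}\|_{L^{2}(\Omega)}^{\frac{1}{2}},\qquad\mathbf{v}\in H_{0}^{1}(\Omega),\ \Omega\subset\mathbb{R}^{2}.\]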
Then we have
\[\int_{0}^{T}\|\Psi_{1}\|_{H^{-1}}^{2}dt \tag{63}\] \[\leq C\int_{0}^{T}\|\nabla\mathbf{u}_{\theta}\|_{L^{2}(\Omega)}^{2 }\|\mathbf{u}_{\theta}^{2}\|_{L^{2}(\Omega)}\|\nabla\mathbf{u}_{\theta}^{2}\|_{L^{ 2}(\Omega)}dt\] \[\quad+C\int_{0}^{T}\|\nabla\mathbf{u}_{\theta}^{2}\|_{L^{2}( \Omega)}^{2}\|\nabla\mathbf{u}_{\theta}^{1}\|_{L^{2}(\Omega)}^{2}dt\] \[\leq C\Bigl{(}\int_{0}^{T}\|\nabla\mathbf{u}_{\theta}\|_{L^{2}( \Omega)}^{4}dt\Bigr{)}^{\frac{1}{2}}\Bigl{(}\int_{0}^{T}\|\nabla\mathbf{u}_{ \theta}^{2}\|_{L^{2}(\Omega)}^{4}dt\Bigr{)}^{\frac{1}{4}}\Bigl{(}\int_{0}^{T}\| \mathbf{u}_{\theta}^{2}\|_{L^{2}(\Omega)}^{4}dt\Bigr{)}^{\frac{1}{4}}\] \[\quad+C\Bigl{(}\int_{0}^{T}\|\nabla\mathbf{u}_{\theta}^{2}\|_{L^{ 2}(\Omega)}^{4}dt\Bigr{)}^{\frac{1}{2}}\Bigl{(}\int_{0}^{T}\|\nabla\mathbf{u}_{ \theta}^{1}\|_{H^{1}(\Omega)}^{4}dt\Bigr{)}^{\frac{1}{2}}\] \[\leq C\epsilon,\]
\[\int_{0}^{T}\|\Psi_{2}\|_{H^{-1}}^{2}dt \tag{64}\] \[\leq C\int_{0}^{T}\|\nabla\mathbf{B}_{\theta}\|_{L^{2}(\Omega)}^{2} \|\mathbf{B}_{\theta}^{2}\|_{L^{2}(\Omega)}\|\nabla\mathbf{B}_{\theta}^{2}\|_{L ^{2}(\Omega)}dt\] \[\quad+C\int_{0}^{T}\|\nabla\mathbf{B}_{\theta}^{2}\|_{L^{2}( \Omega)}^{2}\|\mathbf{B}_{\theta}^{1}\|_{H^{1}(\Omega)}^{2}dt\] \[\leq C\Big{(}\int_{0}^{T}\|\nabla\mathbf{B}_{\theta}\|_{L^{2}( \Omega)}^{4}dt\Big{)}^{\frac{1}{2}}\Big{(}\int_{0}^{T}\|\nabla\mathbf{B}_{ \theta}^{2}\|_{L^{2}(\Omega)}^{4}dt\Big{)}^{\frac{1}{4}}\Big{(}\int_{0}^{T}\| \mathbf{B}_{\theta}^{2}\|_{L^{2}(\Omega)}^{4}dt\Big{)}^{\frac{1}{4}}\] \[\quad+C\Big{(}\int_{0}^{T}\|\nabla\mathbf{B}_{\theta}^{2}\|_{L^{ 2}(\Omega)}^{4}dt\Big{)}^{\frac{1}{2}}\Big{(}\int_{0}^{T}\|\mathbf{B}_{ \theta}^{1}\|_{H^{1}(\Omega)}^{4}dt\Big{)}^{\frac{1}{2}}\] \[\leq C\epsilon,\]
and
\[\int_{0}^{T}\|\Psi_{3}\|_{\mathcal{W}^{-1}}^{2}dt \tag{65}\] \[\leq C\int_{0}^{T}\|\nabla\mathbf{u}_{\theta}^{2}\|_{L^{2}(\Omega )}\|\mathbf{u}_{\theta}^{2}\|_{L^{2}(\Omega)}\|\mathbf{B}_{\theta}\|_{H^{1}( \Omega)}^{2}dt\] \[\quad+C\int_{0}^{T}\|\nabla\mathbf{u}_{\theta}^{1}\|_{L^{2}( \Omega)}^{2}\|\mathbf{B}_{\theta}^{2}\|_{H^{1}(\Omega)}^{2}dt\] \[\leq C\Big{(}\int_{0}^{T}\|\nabla\mathbf{u}_{\theta}^{2}\|_{L^{2 }(\Omega)}^{4}dt\Big{)}^{\frac{1}{4}}\Big{(}\int_{0}^{T}\|\nabla\mathbf{u}_{ \theta}^{2}\|_{L^{2}(\Omega)}^{4}dt\Big{)}^{\frac{1}{4}}\Big{(}\int_{0}^{T}\| \mathbf{B}_{\theta}\|_{H^{1}(\Omega)}^{4}dt\Big{)}^{\frac{1}{2}}\] \[\quad+C\Big{(}\int_{0}^{T}\|\nabla\mathbf{u}_{\theta}^{1}\|_{L^{2 }(\Omega)}^{4}dt\Big{)}^{\frac{1}{2}}\Big{(}\int_{0}^{T}\|\mathbf{B}_{\theta} ^{2}\|_{H^{1}(\Omega)}^{4}dt\Big{)}^{\frac{1}{2}}\] \[\leq C\epsilon.\]
Moreover, since
\[\|\mathbb{P}\Delta\mathbf{u}_{\theta}^{2}\|_{H^{-1}} \leq C\|\nabla\mathbf{u}_{\theta}^{2}\|_{L^{2}(\Omega)}\|\mathbf{ w}\|_{H^{1}(\Omega)}, \tag{66}\] \[\|\mathbb{Q}\mathrm{curl}\,\mathrm{curl}\,\mathbf{B}_{\theta}^{2} \|_{\mathcal{W}^{-1}} \leq C\|\mathrm{curl}\,\mathbf{B}_{\theta}^{2}\|_{L^{2}(\Omega)} \|\mathrm{curl}\,\mathbf{H}\|_{L^{2}(\Omega)}.\]
Thus, it follows that
\[\int_{0}^{T}\|\mathbb{P}\Delta\mathbf{u}_{\theta}^{2}\|_{H^{-1}}^ {2}dt \leq C\int_{0}^{T}\|\nabla\mathbf{u}_{\theta}^{2}\|_{L^{2}(\Omega )}^{2}\|\mathbf{w}\|_{H^{1}(\Omega)}^{2}dt \tag{67}\] \[\leq C\Big{(}\int_{0}^{T}\|\nabla\mathbf{u}_{\theta}^{2}\|_{L^{2} (\Omega)}^{4}dt\Big{)}^{\frac{1}{2}}\Big{(}\int_{0}^{T}\|\mathbf{w}\|_{H^{1}( \Omega)}^{4}dt\Big{)}^{\frac{1}{2}}\] \[\leq C\epsilon,\] \[\int_{0}^{T}\|\mathbb{Q}\mathrm{curl}\,\mathrm{curl}\,\mathbf{B}_ {\theta}^{2}\|_{\mathcal{W}^{-1}}^{2}dt \leq C\int_{0}^{T}\|\mathrm{curl}\,\mathbf{B}_{\theta}^{2}\|_{L^{2 }(\Omega)}^{2}\|\mathrm{curl}\,\mathbf{H}\|_{L^{2}(\Omega)}^{2}dt\] (68) \[\leq C\Big{(}\int_{0}^{T}\|\mathrm{curl}\,\mathbf{B}_{\theta}^{2} \|_{L^{2}(\Omega)}^{4}dt\Big{)}^{\frac{1}{2}}\Big{(}\int_{0}^{T}\|\mathrm{curl} \,\mathbf{H}\|_{L^{2}(\Omega)}^{4}dt\Big{)}^{\frac{1}{2}}\] \[\leq C\epsilon.\]
Denoting \(\phi:=\widehat{\mathbb{P}\mathbf{\hat{f}}}+\mathbb{P}(\Delta\mathbf{u}_{\theta}^{2} )-\Psi_{1}-\Psi_{2}\) and \(\psi:=\mathbb{Q}(\operatorname{curl}\operatorname{curl}\mathbf{B}_{\theta}^{2} )-\Psi_{3}\), we have
\[\int_{0}^{T}\|\phi\|_{H^{-1}}^{2}dt \leq C\epsilon, \tag{69}\] \[\int_{0}^{T}\|\psi\|_{\mathcal{W}^{-1}}^{2}dt \leq C\epsilon. \tag{70}\]
Then we have
\[\frac{d\mathbf{u}_{\theta}^{1}}{dt}+\nu A_{f}\mathbf{u}_{\theta}^ {1}+B[\mathbf{u}_{\theta}^{1},\mathbf{u}_{\theta}^{1}]+SC_{f}[\mathbf{B}_{ \theta}^{1},\mathbf{B}_{\theta}^{1}] =\widehat{\mathbb{P}\mathbf{\hat{f}}}+\phi, \tag{71}\] \[\frac{d\mathbf{B}_{\theta}^{1}}{dt}+\mu A_{B}\mathbf{B}_{\theta}^ {1}-C_{B}[\mathbf{u}_{\theta}^{1},\mathbf{B}_{\theta}^{1}] =\psi, \tag{72}\]
Applying **Theorem 3**, we have
\[\sup_{t\in[0,T]}\|\mathbf{u}(t)-\mathbf{u}_{\theta}^{1}(t)\|_{L^ {2}(\Omega)} \leq C\epsilon, \tag{73}\] \[\sup_{t\in[0,T]}\|\mathbf{B}(t)-\mathbf{B}_{\theta}^{1}(t)\|_{L^ {2}(\Omega)} \leq C\epsilon. \tag{74}\]
Moreover, since
\[\left(\int_{0}^{T}\|\mathbf{u}_{\theta}^{2}\|_{L^{2}}^{4}dt\right)^{\frac{1}{ 4}}+\left(\int_{0}^{T}\|\mathbf{B}_{\theta}^{2}\|_{L^{2}}^{4}dt\right)^{\frac {1}{4}}\leq C\sqrt{\epsilon}. \tag{75}\]
Then we have
\[\int_{0}^{T}\|\mathbf{u}-\mathbf{u}_{\theta}\|_{L^{2}}^{4}dt \leq\int_{0}^{T}\|\mathbf{u}-\mathbf{u}_{\theta}^{1}\|_{L^{2}}^{4} dt+\int_{0}^{T}\|\mathbf{u}_{\theta}^{2}\|_{L^{2}}^{4}dt\leq C\epsilon^{2}, \tag{76}\] \[\int_{0}^{T}\|\mathbf{B}-\mathbf{B}_{\theta}\|_{L^{2}}^{4}dt \leq\int_{0}^{T}\|\mathbf{B}-\mathbf{B}_{\theta}^{1}\|_{L^{2}}^{4} dt+\int_{0}^{T}\|\mathbf{B}_{\theta}^{2}\|_{L^{2}}^{4}dt\leq C\epsilon^{2}. \tag{77}\]
The proof is completed.
**Theorem 5**: _Given \(\epsilon>0\), we can find \((\boldsymbol{u}_{\theta},\boldsymbol{B}_{\theta},p_{\theta})\in\mathfrak{F}_{N}\) such that_
\[\|\boldsymbol{u}_{\theta}|_{\partial\Omega}\|_{L^{4}([0,T],H^{ \frac{1}{2}}(\partial\Omega))}^{4}+\|\boldsymbol{u}_{\theta,0}-\boldsymbol{u} _{0}\|_{L^{2}(\Omega)}^{2} \tag{78}\] \[\quad+\|\mathfrak{L}_{f}[\boldsymbol{u}_{\theta},\boldsymbol{B}_{ \theta},p_{\theta}]\|_{L^{2}(\Omega_{T})}^{2}+\|\mathfrak{L}_{B}[\boldsymbol{u }_{\theta},\boldsymbol{B}_{\theta}]\|_{L^{2}(\Omega_{T})}^{2}\] \[\quad+\|\boldsymbol{B}_{\theta,0}-\boldsymbol{B}_{0}\|_{L^{2}( \Omega)}^{2}+\|\boldsymbol{B}_{\theta}\cdot\boldsymbol{n}|_{\partial\Omega} \|_{L^{4}([0,T],H^{\frac{1}{2}}(\partial\Omega))}^{4}\] \[\quad+\|\nabla\cdot\boldsymbol{u}_{\theta}\|_{L^{4}([0,T],L^{2}( \Omega))}^{4}+\|\nabla\cdot\boldsymbol{B}_{\theta}\|_{L^{4}([0,T],L^{2}( \Omega))}^{4}\] \[\quad+\|\boldsymbol{u}_{\theta}\|_{L^{4}([0,T],H^{1}(\Omega))}^{4} +\|\boldsymbol{B}_{\theta}\|_{L^{4}([0,T],H^{1}(\Omega))}^{4}\leq C\epsilon^{2}.\]
_Moreover, our scheme is approximately stable._
Proof: From **Lemma 1**, given \(\epsilon>0\), let \((\mathbf{u},\mathbf{B},p)\) be a strong solution of problem (1)-(3) and choose \((\mathbf{u}_{\theta},\mathbf{B}_{\theta},p_{\theta})\in\mathfrak{F}_{N}\) satisfying
\[\sup_{t\in[0,T]}\|\mathbf{u}(t)-\mathbf{u}_{\theta}(t)\|_{L^{2}( \Omega)}\leq C\epsilon, \tag{79a}\] \[\sup_{t\in[0,T]}\|\mathbf{B}(t)-\mathbf{B}_{\theta}(t)\|_{L^{2}( \Omega)}\leq C\epsilon,\] (79b) \[\|\mathbf{u}-\mathbf{u}_{\theta}\|_{H^{1,2}(\Omega_{T})}\leq C\epsilon,\] (79c) \[\|\mathbf{B}-\mathbf{B}_{\theta}\|_{H^{1,2}(\Omega_{T})}\leq C\epsilon,\] (79d) \[\|\mathbf{u}-\mathbf{u}_{\theta}\|_{L^{4}([0,T]\times W^{1,4}( \Omega))}\leq C\epsilon,\] (79e) \[\|\mathbf{B}-\mathbf{B}_{\theta}\|_{L^{4}([0,T]\times W^{1,4}( \Omega))}\leq C\epsilon,\] (79f) \[\|p-p_{\theta}\|_{L^{2}([0,T]\times H^{1}(\Omega))}\leq C\epsilon. \tag{79g}\]
By (79a) and (79b), we have
\[\|\mathbf{u}(0)-\mathbf{u}_{\theta}(0)\|_{L^{2}(\Omega)}^{2}+\|\mathbf{B}(0)- \mathbf{B}_{\theta}(0)\|_{L^{2}(\Omega)}^{2}\leq C\epsilon^{2}. \tag{80}\]
Using (79e) and (79f), we obtain
\[\|\nabla\cdot\mathbf{u}_{\theta}\|_{L^{4}([0,T],L^{2}(\Omega))}^{4}+\|\nabla\cdot\mathbf{B}_{\theta}\|_{L^{4}([0,T],L^{2}(\Omega))}^{4} \tag{81}\] \[=\|\nabla\cdot\mathbf{u}-\nabla\cdot\mathbf{u}_{\theta}\|_{L^{4}([0,T],L^{2}(\Omega))}^{4}+\|\nabla\cdot\mathbf{B}-\nabla\cdot\mathbf{B}_{\theta}\|_{L^{4}([0,T],L^{2}(\Omega))}^{4}\] \[\leq\|\mathbf{u}-\mathbf{u}_{\theta}\|_{L^{4}([0,T],H^{1}(\Omega))}^{4}+\|\mathbf{B}-\mathbf{B}_{\theta}\|_{L^{4}([0,T],H^{1}(\Omega))}^{4}\] \[\leq C\epsilon^{4},\]
where the equality uses \(\nabla\cdot\mathbf{u}=\nabla\cdot\mathbf{B}=0\) for the strong solution.
Taking \(\gamma>0\) small enough, one finds that
\[\gamma\big{(}\|\mathbf{u}_{\theta}\|_{L^{4}([0,T],H^{1}(\Omega)) }^{4}+\|\mathbf{B}_{\theta}\|_{L^{4}([0,T],H^{1}(\Omega))}^{4}\big{)} \tag{82}\] \[\leq\gamma C\big{(}\|\mathbf{u}-\mathbf{u}_{\theta}\|_{L^{4}([0,T ],H^{1}(\Omega))}^{4}+\|\mathbf{B}-\mathbf{B}_{\theta}\|_{L^{4}([0,T],H^{1}( \Omega))}^{4}\big{)}\] \[\quad+\gamma C\big{(}\|\mathbf{u}\|_{L^{4}([0,T],H^{1}(\Omega))} ^{4}+\|\mathbf{B}\|_{L^{4}([0,T],H^{1}(\Omega))}^{4}\big{)}\] \[\leq C\epsilon^{4}.\]
Consider
\[\|\mathfrak{L}_{f}[\mathbf{u}_{\theta},\mathbf{B}_{\theta},p_{ \theta}]-\mathfrak{L}_{f}[\mathbf{u},\mathbf{B},p]\|_{L^{2}(\Omega_{T})}^{2}+ \|\mathfrak{L}_{B}[\mathbf{u}_{\theta},\mathbf{B}_{\theta}]-\mathfrak{L}_{B}[ \mathbf{u},\mathbf{B}]\|_{L^{2}(\Omega_{T})}^{2} \tag{83}\] \[\leq C\|\partial_{t}\mathbf{u}-\partial_{t}\mathbf{u}_{\theta}\|_ {L^{2}(\Omega_{T})}^{2}+C\|\partial_{t}\mathbf{B}-\partial_{t}\mathbf{B}_{ \theta}\|_{L^{2}(\Omega_{T})}^{2}\] \[\quad+C\|\Delta\mathbf{u}-\Delta\mathbf{u}_{\theta}\|_{L^{2}( \Omega_{T})}^{2}+C\|\mathrm{curl}\,\mathrm{curl}\,(\mathbf{B}-\mathbf{B}_{ \theta})\|_{L^{2}(\Omega_{T})}^{2}\] \[\quad+C\|\nabla p-\nabla p_{\theta}\|_{L^{2}(\Omega_{T})}^{2}+C \|B[\mathbf{u},\mathbf{u}]-B[\mathbf{u}_{\theta},\mathbf{u}_{\theta}]\|_{L^{ 2}(\Omega_{T})}^{2}\] \[\quad+C\|C_{f}[\mathbf{B},\mathbf{B}]-C_{f}[\mathbf{B}_{\theta}, \mathbf{B}_{\theta}]\|_{L^{2}(\Omega_{T})}^{2}\] \[\quad+C\|C_{B}[\mathbf{u},\mathbf{B}]-C_{B}[\mathbf{u}_{\theta}, \mathbf{B}_{\theta}]\|_{L^{2}(\Omega_{T})}^{2}.\]
Using (11)-(13), we arrive at
\[\|B[\mathbf{u},\mathbf{u}]-B[\mathbf{u}_{\theta},\mathbf{u}_{ \theta}]\|_{L^{2}(\Omega_{T})}^{2}\leq C\epsilon^{2}, \tag{84}\] \[\|C_{f}[\mathbf{B},\mathbf{B}]-C_{f}[\mathbf{B}_{\theta},\mathbf{ B}_{\theta}]\|_{L^{2}(\Omega_{T})}^{2}\leq C\epsilon^{2},\] (85) \[\|C_{B}[\mathbf{u},\mathbf{B}]-C_{B}[\mathbf{u}_{\theta},\mathbf{ B}_{\theta}]\|_{L^{2}(\Omega_{T})}^{2}\leq C\epsilon^{2}. \tag{86}\]
Thanks to (14)-(19) and (84)-(86), we have
\[\|\mathfrak{L}_{f}[\mathbf{u}_{\theta},\mathbf{B}_{\theta},p_{\theta}]-\mathfrak{ L}_{f}[\mathbf{u},\mathbf{B},p]\|_{L^{2}(\Omega_{T})}^{2}+\|\mathfrak{L}_{B}[ \mathbf{u}_{\theta},\mathbf{B}_{\theta}]-\mathfrak{L}_{B}[\mathbf{u},\mathbf{B }]\|_{L^{2}(\Omega_{T})}^{2}\leq C\epsilon^{2}. \tag{87}\]
Using (79e)-(79f) and the triangle inequality, we derive
\[\|\mathbf{u}_{\theta}|_{\partial\Omega}\|_{L^{4}([0,T],H^{\frac{1}{2}}(\partial\Omega))}^{4}+\|\mathbf{B}_{\theta}\cdot\mathbf{n}|_{\partial\Omega}\|_{L^{4}([0,T],H^{\frac{1}{2}}(\partial\Omega))}^{4} \tag{88}\] \[=\|\mathbf{u}|_{\partial\Omega}-\mathbf{u}_{\theta}|_{\partial\Omega}\|_{L^{4}([0,T],H^{\frac{1}{2}}(\partial\Omega))}^{4}+\|\mathbf{B}\cdot\mathbf{n}|_{\partial\Omega}-\mathbf{B}_{\theta}\cdot\mathbf{n}|_{\partial\Omega}\|_{L^{4}([0,T],H^{\frac{1}{2}}(\partial\Omega))}^{4}\] \[\leq\|\mathbf{u}-\mathbf{u}_{\theta}\|_{L^{4}([0,T],H^{1}(\Omega))}^{4}+\|\mathbf{B}-\mathbf{B}_{\theta}\|_{L^{4}([0,T],H^{1}(\Omega))}^{4}\] \[\leq C\epsilon^{4},\]
where the equality uses the homogeneous boundary conditions \(\mathbf{u}|_{\partial\Omega}=\mathbf{0}\) and \(\mathbf{B}\cdot\mathbf{n}|_{\partial\Omega}=0\) of the strong solution.
We obtain the desired result. The proof is finished.
**Theorem 6**: _Assume that \((\boldsymbol{u}_{\theta,1},\boldsymbol{B}_{\theta,1},p_{\theta,1})\in\mathfrak{F}_{N}\) is the approximate solution of_
\[\partial_{t}\boldsymbol{u}-\nu\Delta\boldsymbol{u}+(\boldsymbol{u }\cdot\nabla)\boldsymbol{u} \tag{89a}\] \[+S\boldsymbol{B}\times\text{curl}\boldsymbol{B}+\nabla p= \boldsymbol{f}_{1},\] \[\partial_{t}\boldsymbol{B}+\mu\text{curl}\boldsymbol{B}-\text{ curl}(\boldsymbol{u}\times\boldsymbol{B}) =0,\] (89b) \[div\boldsymbol{u} =0,\] (89c) \[div\boldsymbol{B} =0,\] (89d) \[\boldsymbol{u}|_{\partial\Omega} =\boldsymbol{0},\] (89e) \[\boldsymbol{B}\cdot\boldsymbol{n}|_{\partial\Omega} =0,\] (89f) \[\text{curl}\ \boldsymbol{B}\times\boldsymbol{n}|_{\partial\Omega} =0,\] (89g) \[\boldsymbol{u}(\boldsymbol{x},0) =\boldsymbol{u}_{0,1}(\boldsymbol{x}),\] (89h) \[\boldsymbol{B}(\boldsymbol{x},0) =\boldsymbol{B}_{0,1}(\boldsymbol{x}). \tag{89i}\]
_Assume that \((\boldsymbol{u}_{\theta,2},\boldsymbol{B}_{\theta,2},p_{\theta,2})\in\mathfrak{ F}_{N}\) is the approximate solution of_
\[\partial_{t}\boldsymbol{u}-\nu\Delta\boldsymbol{u}+(\boldsymbol{u}\cdot\nabla)\boldsymbol{u} \tag{90a}\] \[+S\boldsymbol{B}\times\text{curl}\boldsymbol{B}+\nabla p=\boldsymbol{f}_{2},\] \[\partial_{t}\boldsymbol{B}+\mu\text{curl}\boldsymbol{B}-\text{curl}(\boldsymbol{u}\times\boldsymbol{B}) =0,\] (90b) \[div\boldsymbol{u} =0,\] (90c) \[div\boldsymbol{B} =0,\] (90d) \[\boldsymbol{u}|_{\partial\Omega} =\boldsymbol{0},\] (90e) \[\boldsymbol{B}\cdot\boldsymbol{n}|_{\partial\Omega} =0,\] (90f) \[\text{curl}\ \boldsymbol{B}\times\boldsymbol{n}|_{\partial\Omega} =0,\] (90g) \[\boldsymbol{u}(\boldsymbol{x},0) =\boldsymbol{u}_{0,2}(\boldsymbol{x}),\] (90h) \[\boldsymbol{B}(\boldsymbol{x},0) =\boldsymbol{B}_{0,2}(\boldsymbol{x}). \tag{90i}\]
_Then we have_
\[\|\mathbf{u}_{\theta,1}-\mathbf{u}_{\theta,2}\|_{L^{4}([0,T],L^{2}(\Omega))}+ \|\mathbf{B}_{\theta,1}-\mathbf{B}_{\theta,2}\|_{L^{4}([0,T],L^{2}(\Omega))}\] \[\leq C\epsilon^{\frac{1}{2}}+C\|\mathbf{u}_{0,1}-\mathbf{u}_{0,2}\|_{L^{2} (\Omega)}+C\|\mathbf{B}_{0,1}-\mathbf{B}_{0,2}\|_{L^{2}(\Omega)}+C\|\mathbf{f}_{1}-\mathbf{f}_ {2}\|_{L^{2}(0,T,L^{2}(\Omega))}.\]
_Moreover, our scheme is approximately stable._
Proof: Using the triangle inequality, we obtain
\[\|\mathbf{u}_{\theta,1}-\mathbf{u}_{\theta,2}\|_{L^{4}([0,T],L^{2 }(\Omega))}+\|\mathbf{B}_{\theta,1}-\mathbf{B}_{\theta,2}\|_{L^{4}([0,T],L^{2 }(\Omega))} \tag{91}\] \[\leq \|\mathbf{u}_{\theta,1}-\mathbf{u}_{1}\|_{L^{4}([0,T],L^{2}( \Omega))}+\|\mathbf{u}_{\theta,2}-\mathbf{u}_{2}\|_{L^{4}([0,T],L^{2}(\Omega) )}+\|\mathbf{u}_{1}-\mathbf{u}_{2}\|_{L^{4}([0,T],L^{2}(\Omega))}\] \[+\|\mathbf{B}_{\theta,1}-\mathbf{B}_{1}\|_{L^{4}([0,T],L^{2}( \Omega))}+\|\mathbf{B}_{\theta,2}-\mathbf{B}_{2}\|_{L^{4}([0,T],L^{2}(\Omega) )}+\|\mathbf{B}_{1}-\mathbf{B}_{2}\|_{L^{4}([0,T],L^{2}(\Omega))}\]
Thanks to the error estimates (76)-(77), we have
\[\|\mathbf{u}_{1}-\mathbf{u}_{\theta,1}\|_{L^{4}([0,T],L^{2}(\Omega))}+\| \mathbf{B}_{1}-\mathbf{B}_{\theta,1}\|_{L^{4}([0,T],L^{2}(\Omega))}\leq C \epsilon^{\frac{1}{2}}. \tag{92}\]
and
\[\|\mathbf{u}_{2}-\mathbf{u}_{\theta,2}\|_{L^{4}([0,T],L^{2}(\Omega))}+\| \mathbf{B}_{2}-\mathbf{B}_{\theta,2}\|_{L^{4}([0,T],L^{2}(\Omega))}\leq C \epsilon^{\frac{1}{2}}. \tag{93}\]
Using the stability of the MHD system, we also have
\[\|\mathbf{u}_{1}-\mathbf{u}_{2}\|_{L^{4}([0,T],L^{2}(\Omega))}+ \|\mathbf{B}_{1}-\mathbf{B}_{2}\|_{L^{4}([0,T],L^{2}(\Omega))} \tag{94}\] \[\leq C\|\mathbf{u}_{0,1}-\mathbf{u}_{0,2}\|_{L^{2}(\Omega)}+C\| \mathbf{B}_{0,1}-\mathbf{B}_{0,2}\|_{L^{2}(\Omega)}+C\|\mathbf{f}_{1}-\mathbf{ f}_{2}\|_{L^{2}(0,T,L^{2}(\Omega))}.\]
Combining (92)-(94) with (91), the desired result is obtained. The proof is finished.
|
2306.13161 | Transmission of vortex electrons through a solenoid | We argue that it is generally nonstationary Laguerre-Gaussian states (NSLG)
rather than the Landau ones that appropriately describe electrons with orbital
angular momentum both in their dynamics at a hard-edge boundary between a
solenoid and vacuum and inside the magnetic field. It is shown that the r.m.s.
radius of the NSLG state oscillates in time and its period-averaged value can
significantly exceed the r.m.s. radius of the Landau state, even far from the
boundary. We propose to study the unconventional features of quantum dynamics
inside a solenoid in several experimental scenarios with vortex electrons
described by the NSLG states. Relevance for processes in scanning and
transmission electron microscopes, as well as for particle accelerators with
relativistic beams is emphasized. | G. K. Sizykh, A. D. Chaikovskaia, D. V. Grosman, I. I. Pavlov, D. V. Karlovets | 2023-06-22T18:36:59Z | http://arxiv.org/abs/2306.13161v2 | # Transmission of vortex electrons through a solenoid
###### Abstract
We argue that it is generally nonstationary Laguerre-Gaussian states (NSLG) rather than the Landau ones that appropriately describe electrons with orbital angular momentum both in their dynamics at a hard-edge boundary between a solenoid and vacuum and inside the magnetic field. It is shown that the r.m.s. radius of the NSLG state oscillates in time and its period-averaged value can significantly exceed the r.m.s. radius of the Landau state, even far from the boundary. We propose to study the unconventional features of quantum dynamics inside a solenoid in several experimental scenarios with vortex electrons described by the NSLG states. Relevance for processes in scanning and transmission electron microscopes, as well as for particle accelerators with relativistic beams is emphasized.
**Introduction**. Manipulation of electrons with orbital angular momentum (OAM), dubbed twisted or vortex electrons [1; 2], is a useful tool with great prospects of applications in electron microscopy, nanomaterials studies, particle physics, accelerator physics, and other fields [3; 4; 5; 6; 7; 8]. The most common technique to generate twisted electrons is to let the beam go through a phase plate [9; 10] or a hologram [11; 12; 3]. The states obtained with these methods can often be described as the Laguerre-Gaussian wave packets [1; 8]. The probability density of such states evolves in time, and a solenoid (magnetic lens) can be used to effectively control spreading of the packets, both in an electron microscope [13; 14] and in a particle accelerator [15].
Within the hard-edge approximation, a thick magnetic lens can be described as a semi-infinite magnetic field. In a real-life experiment (see Fig. 1), a free electron first propagates in vacuum towards the lens while spreading, then enters the lens, and continues its propagation inside it. Common description of the transmission of an electron from the field-free space to the solenoid relies on evaluating the dynamics of the observables via the Heisenberg equation of motion, and so no assumptions regarding the electron state are needed. However, far from the boundary the electron state is conventionally thought of as a stationary Landau state [16; 8; 17] that does not spread in time. There have been several approaches to extend the description of an electron in the field beyond the Landau states [18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 90; 91; 92; 93; 94; 95; 96; 97; 98; 99; 100; 101; 102; 103; 104; 105; 106; 107; 108; 109; 110; 111; 112; 113; 114; 115; 116; 117; 118; 119; 120; 121; 122; 123; 124; 125; 126; 127; 128; 129; 130; 131; 132; 133; 134; 135; 136; 137; 138; 139; 140; 141; 142; 143; 144; 145; 146; 147; 148; 149; 150; 151; 152; 153; 154; 155; 156; 157; 158; 159; 160; 161; 162; 163; 164; 165; 166; 167; 168; 169; 170; 171; 172; 173; 174; 175; 176; 177; 178; 179; 180; 181; 182; 183; 184; 185; 186; 187; 188; 189; 190; 191; 192; 193; 194; 195; 196; 197; 198; 1999; 200; 201; 202; 203; 204; 205; 206; 207; 208; 209; 210; 211; 213; 214; 215; 216; 217; 218; 219; 223; 224; 225; 226; 227; 228; 229; 230; 231; 232; 233; 234; 235; 236; 237; 238; 239; 240; 241; 242; 243; 244; 245; 246; 247; 248; 249; 250; 251; 252; 253; 254; 255; 256; 257; 258; 259; 260; 261; 262; 263; 264; 265; 266; 267; 268; 269; 270; 271; 272; 273; 274; 275; 276; 277; 278; 279; 283; 284; 285; 286; 287; 288; 289; 290; 287; 288; 289; 291; 289; 292; 300; 301; 302; 303; 304; 305; 306; 307; 308; 309; 310; 311; 329; 320; 331; 332; 341; 342; 353; 361; 362; 363; 371; 388; 393; 394; 395; 396; 397; 398; 400; 401; 402; 403; 404; 405; 406; 407; 408; 409; 411; 429; 430; 44; 44; 451; 452; 46; 47; 48; 49; 453; 49; 46; 49; 47; 49; 48; 49; 50; 51; 52; 53; 54; 55; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 88; 89; 91; 92; 93; 94; 95; 96; 97; 98; 99; 101; 11; 12; 13; 14; 15; 16; 17; 18; 19; 19; 18; 19; 202; 21; 22; 23; 24; 25; 26; 27; 28; 29; 31; 32; 33; 34; 35; 36; 37; 38; 39; 41; 42; 42; 43; 44; 45; 46; 47; 48; 49; 51; 52; 54; 56; 57; 58; 59; 61; 70; 71; 72; 72; 73; 74; 75; 76; 77; 78; 79; 81; 82; 84; 85; 86; 87; 89; 90; 910; 11; 122; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 35; 36; 37; 38; 39; 40; 41; 41; 43; 45; 46; 47; 48; 49; 50; 52; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 71; 72; 73; 75; 74; 76; 77; 78; 79; 80; 82; 83; 84; 85; 86; 87; 88; 89; 92; 93; 940; 88; 89; 94; 95; 96; 97; 98; 99; 100; 11; 12; 14; 15; 16; 17; 19; 18; 19; 19; 20; 21; 23; 24; 25; 26; 27; 28; 29; 30; 32; 33; 34; 35; 37; 36; 38; 39; 42; 43; 44; 45; 46; 47; 48; 49; 50; 52; 53; 54; 56; 57; 58; 59; 62; 73; 75; 76; 77; 78; 79; 82; 89; 93; 941; 86; 87; 88; 95; 96; 97; 98; 99; 102; 99; 11; 13; 14; 15; 16; 17; 18; 19; 21; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 36; 37; 38; 39; 40; 41; 42; 43; 
45; 46; 47; 49; 52; 48; 47; 48; 49; 53; 54; 56; 57; 58; 59; 63; 64; 65; 66; 67; 68; 79; 83; 85; 86; 87; 89; 99; 90; 91; 103; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 23; 24; 25; 26; 27; 28; 29; 3
agates rectilinearly along the \(z\)-axis with the mean velocity \(v\). To characterize the transverse dynamics of the packet, we study dynamics of the r.m.s. transverse radius \(\rho(t)=\sqrt{\langle\rho^{2}\rangle(t)}\).
The wave packet generated in free space at a time instant \(t_{\text{g}}\) is known to spread in time according to
\[\rho_{\text{f}}(t)=\rho_{\text{w}}\sqrt{1+(t-t_{\text{g}})^{2}/\tau_{\text{d}} ^{2}}, \tag{1}\]
where \(\tau_{\text{d}}=\rho_{\text{w}}/u\) is the diffraction time, \(\rho_{\text{w}}\) is the beam waist, \(u\) is the transverse velocity dispersion and the subscript "f" stands for "free" [8]. As such an electron travels from the source to the lens, it acquires a divergence rate \(\rho_{0}^{\prime}=d\rho_{\text{f}}/dt|_{t=t_{0}}\) and its r.m.s. radius grows by a factor of \(\rho_{0}/\rho_{\text{w}}=\sqrt{1+(t_{0}-t_{\text{g}})^{2}/\tau_{\text{d}}^{2}}\), where \(t_{0}=|z_{0}-z_{\text{g}}|/v\) is the moment the electron enters the lens. Notice that the divergence rate \(\rho_{0}^{\prime}\) can be positive, negative, or zero.
Inside the field, the system is described by the Hamiltonian
\[\hat{\mathcal{H}}=-\frac{\lambda_{\text{C}}}{2}\Delta+\frac{\omega}{2}\hat{L}_ {z}+\frac{\omega^{2}}{8\lambda_{\text{C}}}\rho^{2}=\hat{\mathcal{H}}_{\perp}- \frac{\lambda_{\text{C}}}{2}\partial_{z}^{2} \tag{2}\]
and the r.m.s. radius of the electron starts oscillating according to the Heisenberg equation of motion [8; 16; 28]:
\[\begin{split}\rho^{2}(t)&=\rho_{\text{st}}^{2}+ \left(\rho_{0}^{2}-\rho_{\text{st}}^{2}\right)\cos\left(\omega\tau\right)+ \frac{2\rho_{0}\rho_{0}^{\prime}}{\omega}\sin\left(\omega\tau\right),\\ \rho_{\text{st}}^{2}&=2\lambda_{\text{C}}\omega^{- 1}\left(2\omega^{-1}\langle\hat{\mathcal{H}}_{\perp}\rangle+\langle\hat{L}_{z }\rangle\right).\end{split} \tag{3}\]
Here \(\omega=eH\lambda_{\text{C}}\) is the cyclotron frequency and the argument \(\tau=t-t_{0}>0\). The subscript "st" stands for "stationary" and the square root of period-averaged mean square radius \(\rho_{\text{st}}\) is the characteristic radius around which the oscillations occur.
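As a numerical illustration, the following minimal Python sketch evaluates the free-space spreading law (1) and the in-field oscillation law (3); the function names and the sample values at the end are hypothetical placeholders introduced here, not parameters taken from the text.

```python
import numpy as np

def rms_radius_free(t, t_g, rho_w, tau_d):
    """Free-space r.m.s. radius, Eq. (1): rho_f(t) = rho_w * sqrt(1 + (t - t_g)^2 / tau_d^2)."""
    return rho_w * np.sqrt(1.0 + ((t - t_g) / tau_d) ** 2)

def rms_radius_in_field(tau, rho0, rho0_prime, rho_st, omega):
    """In-field r.m.s. radius, Eq. (3), for tau = t - t0 > 0.

    For physically consistent parameters the expression under the square root stays positive.
    """
    rho2 = (rho_st**2
            + (rho0**2 - rho_st**2) * np.cos(omega * tau)
            + (2.0 * rho0 * rho0_prime / omega) * np.sin(omega * tau))
    return np.sqrt(rho2)

# Hypothetical sample values (arbitrary units), just to show the call pattern.
t = np.linspace(0.0, 5.0, 11)
print(rms_radius_free(t, t_g=0.0, rho_w=1.0, tau_d=2.0))
print(rms_radius_in_field(t, rho0=1.5, rho0_prime=0.1, rho_st=2.0, omega=3.0))
```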
Both in vacuum (1) and inside the lens (3) the expressions for the r.m.s. radii can be obtained without specifying the electron state. Nonetheless, the latter quantitatively affects the oscillations of the r.m.s. radius by dictating the constant term \(\rho_{\text{st}}^{2}\) in Eq. (3), as well as the beam waist \(\rho_{\text{w}}\) and the diffraction time \(\tau_{\text{d}}\) in Eq. (1). Far from the boundary, electrons are usually believed to be described by the Landau states, and the time-averaged mean square radius \(\rho_{\text{st}}^{2}\) is commonly evaluated with the aid of the Landau wave functions as
\[\rho_{\text{st}}^{2}\big{|}_{\text{Landau}}=\rho_{\text{L}}^{2}=(2n+|l|+1) \sigma_{\text{L}}^{2}, \tag{4}\]
where \(\sigma_{\text{L}}=\sqrt{2/|eH|}\) and \(\rho_{\text{L}}\) is the r.m.s. radius of the Landau state with a quantum number \(n=0,1,2,...\) and an OAM \(l=0,\pm 1,\pm 2,...\)[8; 16; 28].
As we show hereafter, it is generally _not the case_ that \(\rho_{\text{st}}=\rho_{\text{L}}\). Eq. (4) is satisfied only for the specific values of the r.m.s. radius \(\rho_{0}\) and the divergence rate \(\rho_{0}^{\prime}\) at the boundary that are not governed by any physical principle. In experiment, the parameters \(\rho_{0}\) and \(\rho_{0}^{\prime}\) can vary from these specific values, leading to a significant increase of \(\rho_{\text{st}}\) as compared to \(\rho_{\text{L}}\). That affects the main characteristics of the oscillations.
**NSLG states**. Let us find an alternative to the Landau state that would describe a twisted electron inside a solenoid after entering it from free space with arbitrary parameters \(\rho_{0}\) and \(\rho_{0}^{\prime}\) at the boundary. Following the seminal work of Silenko et al. [29], we note that the transverse electron wave function admits a general form, both in vacuum (\(z<z_{0}\)) and in the magnetic field (\(z>z_{0}\)) [8; 29]:
\[\begin{split}\Psi_{n,l}(\mathbf{\rho},t)=N\frac{\rho^{|l|}}{\sigma^{ |l|+1}(t)}L_{\text{n}}^{|l|}\left(\frac{\rho^{2}}{\sigma^{2}(t)}\right)\times \\ \exp\left[il\varphi-i\Phi_{\text{G}}(t)-\frac{\rho^{2}}{2\sigma^{ 2}(t)}\left(1-i\frac{\sigma^{2}(t)}{\lambda_{\text{C}}R(t)}\right)\right]. \end{split} \tag{5}\]
We refer to it as a _nonstationary Laguerre-Gaussian_ state. The wave function (5) describes a vortex electron with an OAM \(l\), and the difference between the NSLG states in free space (NSLG\({}_{\text{f}}\)) and in the magnetic field (NSLG\({}_{\text{H}}\)) is governed by the optical functions: \(\sigma(t)\) - the dispersion of the transverse coordinate, \(R(t)\) - the radius of curvature and \(\Phi_{\text{G}}(t)\) - the Gouy phase.
The root-mean-square radius of the NSLG state is
\[\rho(t)=\sqrt{2n+|l|+1}\,\sigma(t), \tag{6}\]
both in vacuum and in the field. Equations for the optical functions of the NSLG\({}_{\text{H}}\) state follow from the Schrodinger equation:
\[\begin{split}&\frac{1}{R(t)}=\frac{\sigma^{\prime}(t)}{\sigma(t)}, \\ &\frac{1}{\lambda_{\text{C}}^{2}R^{2}(t)}+\frac{1}{\lambda_{\text{ C}}^{2}}\left[\frac{1}{R(t)}\right]^{\prime}=\frac{1}{\sigma^{4}(t)}-\frac{1}{ \sigma_{\text{L}}^{4}},\\ &\frac{1}{\lambda_{\text{C}}}\Phi_{\text{G}}^{\prime}(t)=\frac{l} {\sigma_{\text{L}}^{2}}+\frac{(2n+|l|+1)}{\sigma^{2}(t)}.\end{split} \tag{7}\]
The first equation in (7) allows us to further use the divergence rate \(\sigma^{\prime}(t)\) rather than the radius of curvature \(R(t)\) as a characteristic of the NSLG packet.
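Combining the first two equations of (7) (and using \(\sigma_{\rm L}^{2}=2\lambda_{\rm C}/\omega\), which follows from the definitions of \(\sigma_{\rm L}\) and \(\omega\) given above), the radius of curvature can be eliminated and the system reduces to a single envelope equation for the dispersion; we state this intermediate step explicitly since it makes the structure of the solution below transparent:

\[\sigma^{\prime\prime}(t)=\frac{\lambda_{\rm C}^{2}}{\sigma^{3}(t)}-\frac{\lambda_{\rm C}^{2}}{\sigma_{\rm L}^{4}}\,\sigma(t)=\frac{\lambda_{\rm C}^{2}}{\sigma^{3}(t)}-\frac{\omega^{2}}{4}\,\sigma(t).\]

Its equilibrium \(\sigma=\sigma_{\rm L}\) corresponds to the Landau case, while a generic initial condition makes \(\sigma^{2}(t)\) oscillate at the cyclotron frequency \(\omega\), in agreement with Eq. (10) below.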
A special choice of the initial conditions \(\sigma(t_{0})=\sigma_{\text{L}}\), \(\sigma^{\prime}(t_{0})=0\) for the system (7) leads to a non-spreading solution
\[\sigma(t)=\sigma_{\text{L}},\,\sigma^{\prime}(t)=0,\,\Phi_{\text{G}}(t)=\varepsilon _{\perp}t, \tag{8}\]
where \(\varepsilon_{\perp}=\omega(2n+|l|+l+1)/2\) is the energy of the Landau state. These optical functions turn the state (5) exactly into the Landau one with the stationary radius \(\rho_{\text{st}}\) given by Eq. (4).
To find the more general form of the NSLG\({}_{\text{H}}\) state, we suggest solving the system (7) with initial conditions for the dispersion and its derivative given by the NSLG\({}_{\text{f}}\) state at the time \(t_{0}\) when the electron enters the lens:
\[\begin{split}&\sigma(t_{0})=\sigma_{0}=\frac{\rho_{0}}{\sqrt{2n+|l|+1} },\\ &\sigma^{\prime}(t_{0})=\sigma_{0}^{\prime}=\frac{\rho_{0}^{ \prime}}{\sqrt{2n+|l|+1}}.\end{split} \tag{9}\]
where \(\rho_{0}\) and \(\rho_{0}^{\prime}\) are the r.m.s. radius (1) and the divergence rate of the NSLG\({}_{\rm f}\) state generated in field-free space at the time \(t_{\rm g}\), respectively, and the factor in the denominators comes from Eq. (6). The Gouy phase does not affect the dynamics of the r.m.s. radius, and hence we abstain from writing it down here. The dispersion of the NSLG\({}_{\rm H}\) packet then reads
\[\begin{split}&\sigma(t)=\sigma_{0}\sqrt{A^{2}+\sqrt{A^{4}-\left( \frac{\sigma_{\rm L}}{\sigma_{0}}\right)^{4}\sin\left[s(\sigma_{0},\sigma_{0} ^{\prime})\omega(t-t_{0})+\theta\right]}},\\ & A^{2}=\frac{1}{2}\left(1+\left(\frac{\sigma_{\rm L}}{\sigma_{0} }\right)^{4}+\left(\frac{\sigma_{0}^{\prime}\sigma_{\rm L}^{2}}{\lambda_{\rm C }\sigma_{0}}\right)^{2}\right),\\ &\theta=\arcsin\frac{1-A^{2}}{\sqrt{A^{4}-\left(\frac{\sigma_{ \rm L}}{\sigma_{0}}\right)^{4}}},\end{split} \tag{10}\]
where the sign inside the trigonometric function is defined by
\[s(\sigma_{0},\sigma_{0}^{\prime})=\begin{cases}\text{sgn}(\sigma_{0}^{\prime} ),\ \sigma_{0}^{\prime}\neq 0,\\ \text{sgn}(\sigma_{\rm L}-\sigma_{0}),\ \sigma_{0}^{\prime}=0,\\ 0,\ \sigma_{0}=\sigma_{\rm L}\text{ and }\sigma_{0}^{\prime}=0.\end{cases} \tag{11}\]
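For completeness, a minimal Python sketch of Eqs. (10)-(11) is given below; the function name and the explicit guard for the exactly-Landau case (\(\sigma_{0}=\sigma_{\rm L}\), \(\sigma_{0}^{\prime}=0\)) are our own additions, not part of the original formulation.

```python
import numpy as np

def sigma_H(t, t0, sigma0, sigma0_prime, sigma_L, lambda_C, omega):
    """Dispersion sigma(t) of the NSLG_H state inside the field, Eqs. (10)-(11)."""
    xi1 = sigma_L / sigma0
    A2 = 0.5 * (1.0 + xi1**4 + (sigma0_prime * sigma_L**2 / (lambda_C * sigma0))**2)
    amp = np.sqrt(max(A2**2 - xi1**4, 0.0))   # sqrt(A^4 - (sigma_L/sigma0)^4)
    # Sign s(sigma0, sigma0') from Eq. (11).
    if sigma0_prime != 0.0:
        s = np.sign(sigma0_prime)
    elif sigma0 != sigma_L:
        s = np.sign(sigma_L - sigma0)
    else:
        # Landau-like initial conditions: no oscillation, sigma(t) = sigma_L.
        return np.full_like(np.asarray(t, dtype=float), sigma_L)
    theta = np.arcsin((1.0 - A2) / amp)
    phase = s * omega * (np.asarray(t, dtype=float) - t0) + theta
    return sigma0 * np.sqrt(A2 + amp * np.sin(phase))
```

At \(t=t_{0}\) the phase equals \(\theta\), so the function returns \(\sigma_{0}\), reproducing the initial condition (9) by construction.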
The mean energy of the NSLG\({}_{\rm H}\) state is
\[\langle E_{\perp}\rangle=\frac{\omega}{2}(2n+|l|+1)\frac{\sigma_{0}^{2}}{ \sigma_{\rm L}^{2}}A^{2}+\frac{\omega}{2}l\geq\varepsilon_{\perp}. \tag{12}\]
We note that it is never smaller than the energy of the Landau state (\(\varepsilon_{\perp}\) given in Eq. (8)) because the factor \(A^{2}\sigma_{0}^{2}/\sigma_{\rm L}^{2}\) is never smaller than 1. The resulting energy excess can be attributed to the intrinsic motion of the wave packet due to the r.m.s. radius oscillations. This "breathing" of the NSLG\({}_{\rm H}\) state is also reflected in the larger scale of the period-averaged square radius \(\rho_{\rm st}^{2}\) from Eq. (3) when evaluated with the NSLG\({}_{\rm H}\) state,
\[\rho_{\rm st}^{2}\big{|}_{\rm NSLG_{H}}=\rho_{\rm L}^{2}\frac{\sigma_{0}^{2}}{ \sigma_{\rm L}^{2}}A^{2}\geq\rho_{\rm L}^{2}. \tag{13}\]
Notice that one may find a value of an effective radial quantum number \(n_{\rm eff}\) such that the energy \(\varepsilon_{\perp}=\omega(2n_{\rm eff}+|l|+l+1)/2\) and the mean square radius \((2n_{\rm eff}+|l|+1)\sigma_{\rm L}^{2}\) of the corresponding Landau state approximate the mean energy and the squared stationary radius given by Eqs. (12) and (13), respectively. This, however, still does not mean that the electron is in the Landau state with \(n_{\rm eff}\), due to the nonstationary nature of the problem.
To discuss the distinction between the NSLG\({}_{\rm H}\) state and the Landau one in more detail, we select the following two combinations appearing in Eq. (10) as a measure of deviation:
\[\xi_{1}=\frac{\sigma_{\rm L}}{\sigma_{0}},\quad\xi_{2}=\frac{|\sigma_{0}^{ \prime}|\sigma_{\rm L}^{2}}{\lambda_{\rm C}\sigma_{0}}. \tag{14}\]
When \(\xi_{1}=1\) and \(\xi_{2}=0\), the period-averaged square radius \(\rho_{\rm st}^{2}\) of the NSLG\({}_{\rm H}\) state (13) turns exactly into the one given by the Landau state (4). However, in this case Eq. (3) degenerates into \(\rho^{2}(t)=\rho_{\rm L}^{2}\) and no oscillations occur at all, which is not surprising since the Landau states are stationary. Deviations of \(\xi_{1}\) from 1 and \(\xi_{2}\) from 0 lead to a growth in \(\rho_{\rm st}^{2}\) and in the magnitude of the oscillations as well. From Eq. (13) it follows that \(\rho_{\rm st}^{2}\big{|}_{\rm NSLG_{H}}\gg\rho_{\rm L}^{2}\) when either \(\xi_{1}\gg 1\), \(\xi_{1}\ll 1\) or \(\xi_{2}\gg 1\).
The typical \(\sim 1\) T magnetic fields in electron microscopes and accelerators yield \(\sigma_{\rm L}=36.3\) nm. At the same time, the parameters \(\sigma_{0}\) and \(\sigma_{0}^{\prime}\) can have almost arbitrary values. As an example, let us take \(\sigma_{0}=47.7\) nm, \(\sigma_{0}^{\prime}=-3.1\times 10^{-4}\) from Ref. [13] (see Fig. 2 there). Taking into account \(\lambda_{\rm C}=3.86\times 10^{-4}\) nm, we get \(\xi_{1}=0.76\) and \(\xi_{2}=22.2\gg 1\), leading to \(\rho_{\rm st}\big{|}_{\rm NSLG_{H}}=20.7\,\rho_{\rm L}\gg\rho_{\rm L}\). Thus, in order for oscillations of the r.m.s. radius to occur around the Landau state r.m.s. radius, _very specific_ parameters of the incoming electron packet must align. Otherwise, \(\rho_{\rm st}\) of the electron in the magnetic field significantly exceeds that given by the Landau state.
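As a quick arithmetic cross-check of the numbers quoted above, the short sketch below evaluates \(\xi_{1}\), \(\xi_{2}\) from Eq. (14), \(A^{2}\) from Eq. (10) and the ratio \(\rho_{\rm st}/\rho_{\rm L}\) from Eq. (13), using only the values stated in the text.

```python
import math

sigma_L = 36.3      # nm, for a ~1 T field
sigma0 = 47.7       # nm, from Ref. [13]
dsigma0 = 3.1e-4    # |sigma_0'|, dimensionless
lambda_C = 3.86e-4  # nm, reduced Compton wavelength

xi1 = sigma_L / sigma0
xi2 = dsigma0 * sigma_L**2 / (lambda_C * sigma0)
A2 = 0.5 * (1.0 + xi1**4 + xi2**2)
ratio = math.sqrt(A2) / xi1   # rho_st / rho_L = (sigma0 / sigma_L) * A

print(f"xi1 = {xi1:.2f}, xi2 = {xi2:.1f}, rho_st/rho_L = {ratio:.1f}")
# Prints approximately: xi1 = 0.76, xi2 = 22.2, rho_st/rho_L = 20.6
# (close to the value of 20.7 quoted in the text).
```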
To illustrate the current approach, we compare our results with the dynamics of twisted electrons investigated experimentally in the works [13; 14]. The authors obtained a free electron state that enters a magnetic lens and shrinks in size while propagating inside it. During the time when the size of the electron wave packet inside the solenoid stays comparable to the Landau state r.m.s. radius, the electron is thought of as a Landau state.
However, the electron state inside the lens might better be interpreted as the NSLG packet introduced above. One can reproduce the obtained behaviour of the r.m.s. radius inside the lens (see Fig. 2b in [13]) using Eqs. (10) and (6) and the parameters \(n=0\), \(|l|=1\), \(\sigma_{0}=4.77\times 10^{-2}\mu\)m, \(\sigma_{0}^{\prime}=-3.1\times 10^{-4}\). Thus, we argue that what was observed in [13] might be a part of the oscillations predicted by the NSLG states formalism and that further experiments are needed to reliably conclude whether the electron ends up in a Landau state or in the NSLG one.
**Experimental feasibility**. To observe the oscillations of the r.m.s. radius described by the NSLG\({}_{\rm H}\) state in a solenoid, we propose several experimental scenarios for different setups and energy scales: a scanning electron microscope (SEM), a transmission electron microscope (TEM), a low-energy linear accelerator (for instance, for medical applications), and a conventional linac. In almost all practical cases, the transverse dynamics remains nonrelativistic. In an experiment, the distribution of a twisted electron probability density in the transverse plane can be measured consecutively at various distances \(z\) along the solenoid axis with, for instance, a CCD camera. Subsequently, the r.m.s. radius \(\sqrt{\langle\rho^{2}\rangle}\), obtained as a function of the longitudinal coordinate \(z\), can be expressed in terms of \(t=z/v\) and compared to our predictions.
The parameters for different setups are presented in Table 1 and the corresponding oscillations of the r.m.s. radii are depicted in Fig. 2. We take \(\sigma_{\rm w}=1\,\mu\)m (a characteristic scale [3; 9; 11] for the devices generating
twisted electron) and consider quantum numbers \(n=0\), \(l=3\), that results in \(\rho_{\rm w}=2\mu\)m. Different choice of quantum numbers would lead to rescaling of the r.m.s. radius according to Eq. (6), but the oscillating behavior is preserved.
In Table 1 we also use the longitudinal energy \(E_{\parallel}\), magnetic field strength \(H\), and the distance between the source of twisted electrons and the magnetic field \(d\) that are typical for the proposed experimental scenarios [30]. For a SEM, we take a particular value \(d=5.16\) cm for calculation convenience, though any distance of the order of several cm is appropriate. Whenever experimentally feasible, we adjust the magnetic field strength in order to observe several oscillation periods at realistic distances for each setup. For instance, for a linac we take \(0.01\) T magnetic field in order to observe oscillations at several meters. If needed, one may increase the field strength to proportionally decrease the observation distance. On the other hand, SEMs and TEMs usually have magnetic fields of the order of \(1\) T and their observation distances are somewhat limited by their design.
For the \(\rm NSLG_{f}\) state with \(\sigma_{\rm w}=1\)\(\mu\)m the diffraction time is \(\tau_{\rm d}=\ \sigma_{\rm w}^{2}/\lambda_{\rm C}=8.6\) ns, and the Rayleigh length, \(z_{\rm R}=v\tau_{\rm d}\), scales with the electron energy. For example, in the second row of Table 1 the Rayleigh length for TEM, \(z_{\rm R}=179\) cm, is much greater than the distance between the source and the solenoid, \(d=10\) cm. This leads to the r.m.s. radius at the boundary \(\rho_{0}\approx 2\mu\)m being almost the same as that at the electron source \(\rho_{\rm w}=2\)\(\mu\)m. The similar picture holds for all the devices we deal with. The divergence rate \(d\rho/dz|_{z=z_{0}}=\rho_{0}^{\prime}/v\) reflects the change in the r.m.s. radius with the distance travelled by the electron along the field near the boundary. For the proposed scenarios, \(\xi_{2}\), the dimensionless analogue of the divergence rate, shows that the divergence rate is low and does not affect the dynamics of the electron in solenoids.
Notice the sharp wedge-like pattern of the r.m.s. radius oscillations in the bottom parts of Figs. 2a - 2c. It illustrates the influence of the parameters \(\xi_{1}\) and \(\xi_{2}\) on the electron behavior inside the field. Deviations of \(\xi_{1}\) from \(1\) and \(\xi_{2}\) from \(0\) in all the entries of Table 1 emphasize the distinction between the \(\rm NSLG_{H}\) state and the Landau one. For SEM, TEM and medical linac the stationary radius (dot-dashed green line in Fig. 2) is _almost an order of magnitude greater_ than the r.m.s. radius of the Landau state (dashed blue line). On the other hand, for linac (Fig. 2d) the parameters \(\xi_{1}\) and \(\xi_{2}\) do not differ as much from \(1\) and \(0\), correspondingly, and \(\rho_{st}\) is just twice larger than \(\rho_{\rm L}\).
**Conclusion**. We have put forward an approach to the problem of transmission of a free twisted electron through a sharp boundary between a solenoid and vacuum based on the description in terms of \(\rm NSLG\) states. This formalism enables the smooth transition of a free \(\rm NSLG_{f}\) state to a single \(\rm NSLG_{H}\) mode inside the field. Transformation of a free Laguerre-Gaussian electron inside the lens into the \(\rm NSLG_{H}\) state leads to oscillations of the r.m.s. radius. These oscillations have usually been expected to occur around the value predicted by the stationary Landau state. Somewhat counter-intuitively, the time-averaged value of the r.m.s. radius can generally be much larger (up to several orders of magnitude) than the Landau state r.m.s. radius. For instance, for typical TEM parameters \(H=1.9\) T, \(\sigma_{0}=47.7\) nm, \(\sigma_{0}^{\prime}=-3.1\times 10^{-4}\) from Ref. [13], the characteristic scale for the period-averaged r.m.s. radius is \(20\) times larger than the one
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline Setup & \(E_{\parallel}\) & \(v\) & \(H\) & \(\rho_{\rm L}\) & \(d\) & \(z_{\rm R}\) & \(\rho_{0}\) & \(d\rho/dz|_{z=z_{0}}\) & \(\xi_{1}\) & \(\xi_{2}\) \\ \hline SEM & \(100\) eV & \(0.02c\) & \(1\) T & \(72.6\) nm & \(5.16\) cm & \(5.16\) cm & \(2.82\)\(\mu\)m & \(27\) pm/\(\mu\)m & \(0.025\) & \(6.6\times 10^{-4}\) \\ \hline TEM & \(200\) KeV & \(0.70c\) & \(1.9\) T & \(52.7\) nm & \(10\) cm & \(179\) cm & \(2\)\(\mu\)m & \(62\) pm/mm & \(0.026\) & \(3.9\times 10^{-5}\) \\ \hline Medical linac & \(1\) MeV & \(0.94c\) & \(0.1\) T & \(0.23\)\(\mu\)m & \(10\) cm & \(243\) cm & \(2\)\(\mu\)m & \(0.34\) nm/cm & \(0.115\) & \(5.5\times 10^{-4}\) \\ \hline Linac & \(1\) GeV & \(c\) & \(0.01\) T & \(0.72\)\(\mu\)m & \(100\) cm & \(258\) cm & \(2.14\)\(\mu\)m & \(0.28\)\(\mu\)m/m & \(0.339\) & \(0.045\) \\ \hline \end{tabular}
\end{table}
Table 1: Experimental scenarios for observing the oscillations of the r.m.s. radius \(\rho(z)\). We take \(\sigma_{\rm w}=1\mu\)m, \(n=0\), \(l=3\). The parameters \(\xi_{1}=\sigma_{\rm L}/\sigma_{0}\) and \(\xi_{2}=\sigma_{0}^{\prime}\sigma_{\rm L}^{2}/(\lambda_{\rm C}\sigma_{0})\) reflect the discrepancy between the \(\rm NSLG_{H}\) state and the Landau one, the latter being reproduced when \(\xi_{1}=1\) and \(\xi_{2}=0\).
Figure 2: Oscillations of the r.m.s. radius of the \(\rm NSLG_{H}\) wave packet in a magnetic field (solid red line), the stationary radius \(\rho_{\rm st}\) (dot-dashed green line) and the r.m.s. radius of the Landau state (dashed blue line). The parameters are listed in Table 1. (a) SEM, (b) TEM, (c) medical linac, (d) conventional linac.
predicted by the Landau states. Another important case is \(\sigma_{0}\simeq\sigma_{\mathrm{L}}\) and \(\sigma_{0}^{\prime}\ll\lambda_{\mathrm{C}}/\sigma_{\mathrm{L}}\). For such parameters, the \(\mathrm{NSLG_{H}}\) states resemble the Landau ones and the oscillations occur around the Landau r.m.s. radius with a small amplitude.
Although there is evidence that the \(\mathrm{NSLG_{H}}\) states describe the quantum dynamics of vortex electrons inside a magnetic lens more adequately, further experimental scrutiny is required. We have proposed several experimental scenarios with the potential to reveal the r.m.s. radius oscillations in setups ranging from SEMs to linear accelerators.
**Acknowledgments.** We are grateful to N. Sheremet, V. Ivanov and S. Baturin for offering their opinion on the draft. The work is funded by Russian Science Foundation and St. Petersburg Science Foundation, project num. 22-22-20062, [https://www.rscf.ru/project/22-22-20062/](https://www.rscf.ru/project/22-22-20062/).
|
2304.02615 | Formation and Destiny of White Dwarf and Be Star Binaries | The binary systems consisting of a Be star and a white dwarf (BeWDs) are very
interesting.They can originate from the binaries composed of a Be star and a
subdwarf O or B star (BesdOBs), and they can merge into red giants via luminous
red nova or can evolve into double WD potentially detected by $LISA$ mission.
Using the method of population synthesis, we investigate the formation and the
destiny of BeWDs,and discuss the effects of the metallicity ($Z$) and the
common envelope evolution parameters. We find that BesdOBs are significant
progenitors of BeWDs. About 30\% ($Z=0.0001$)-50\% ($Z=0.02$) of BeWDs come
from BesdOBs. About 60\% ($Z=0.0001$) -70\% ($Z=0.02$) of BeWDs turn into red
giants via a merger between a WD and a non-degenerated star. About 30\%
($Z=0.0001$) -40\% ($Z=0.02$) of BeWDs evolve into double WDs which are
potential gravitational waves of $LISA$ mission at a frequency band between
about $3\times10^{-3}$ and $3\times10^{-2}$ Hz. The common envelope evolution
parameter introduces an uncertainty with a factor of about 1.3 on BeWD
populations in our simulations. | ChunHua Zhu, GuoLiang Lü, Xizhen Lu, Jie He | 2023-04-05T17:34:46Z | http://arxiv.org/abs/2304.02615v1 | # Formation and Destiny of White Dwarf and Be Star Binaries
###### Abstract
The binary systems consisting of a Be star and a white dwarf (BeWDs) are very interesting. They can originate from binaries composed of a Be star and a subdwarf O or B star (BesdOBs), and they can merge into red giants via a luminous red nova or evolve into double WDs potentially detectable by the \(LISA\) mission. Using the method of population synthesis, we investigate the formation and the destiny of BeWDs, and discuss the effects of the metallicity (\(Z\)) and the common envelope evolution parameters. We find that BesdOBs are significant progenitors of BeWDs. About 30% (\(Z=0.0001\))-50% (\(Z=0.02\)) of BeWDs come from BesdOBs. About 60% (\(Z=0.0001\))-70% (\(Z=0.02\)) of BeWDs turn into red giants via a merger between a WD and a non-degenerate star. About 30% (\(Z=0.0001\))-40% (\(Z=0.02\)) of BeWDs evolve into double WDs, which are potential gravitational-wave sources for the \(LISA\) mission in a frequency band between about \(3\times 10^{-3}\) and \(3\times 10^{-2}\) Hz. The common envelope evolution parameter introduces an uncertainty of a factor of about 1.3 in the BeWD populations in our simulations.
binaries: close - stars: evolution - stars: white dwarfs - stars: rotation
## 1 Introduction
High-mass X-ray binaries (HMXBs) consist of a massive star and a compact object, in which the massive star may be a red supergiant or a Be star, and the compact object may be a black hole or a neutron star (NS). More than about 240 HMXBs have been observed in the Galaxy and the Magellanic Clouds (MCs) (Liu et al., 2005, 2006). The majority of the known HMXBs are composed of Be stars and NSs (BeNSs), which are called Be/X-ray binaries. Meurs & van den Heuvel (1989) estimated that there should be 2000-20000 Be/X-ray binaries in the Galaxy.
The compact stars in Be/X-ray binaries can also be white dwarfs (WDs); such systems are denoted BeWDs in this paper. Based on binary evolution models, Raguzova (2001) predicted that the number of BeWDs in the Galaxy should be 7 times larger than that of BeNSs. Unfortunately, only 7 BeWDs or candidates have been observed in the MCs; they are listed in Table 1. Compared with BeNSs, BeWDs produce a very soft X-ray spectrum, which is absorbed more easily. In particular, due to the much higher extinction in the plane of the Milky Way, no BeWD is known in the Galaxy so far. Besides these observational biases, Kennea et al. (2021) suggested that the metallicity may significantly affect the evolution of BeWDs.
In spite of observational constraints, there should be a large number of BeWDs in the Universe. They involve a Be star and an accreting WD. The former has a B-type spectrum, a high rotational velocity and a decretion disk (Porter & Rivinius, 2003). The origin of this disk is still unclear. The majority
of Be stars are usually produced by binary interaction (e. g., Ablimit & Lu, 2013; de Mink et al., 2013; Hastings et al., 2021). Therefore, the progenitor of the WD in a BeWD transfers enough matter to spin up its companion, and it may even lose its whole envelope and become a naked helium star. Observationally, naked helium stars are identified as subdwarf O or B (sdOB) stars (Sargent & Searle, 1968; Heber, 1986; Han et al., 2002). A binary system consisting of a sdOB star and a Be star is called a BesdOB. Naze et al. (2022) listed 25 known BesdOBs and candidates (Wang et al., 2018, 2021, references therein), which are given in Table 2. Obviously, it is very interesting to discuss whether these BesdOBs can evolve into BeWDs. The accreting WD is a potential progenitor of a type Ia supernova (SN Ia) (e. g., Wang & Han, 2012) or a millisecond pulsar (MSP) (e. g., D'Antona & Tailo, 2020), depending on whether the accreting compact object is a CO WD or an ONe WD. In addition, the masses of Be stars are between about 2 and 20 M\({}_{\odot}\) (Porter & Rivinius, 2003), so they finally become WDs or NSs. BeWDs therefore evolve into systems consisting of double compact objects, which are good gravitational-wave sources for the Laser Interferometer Space Antenna (\(LISA\)) (Amaro-Seoane et al., 2017; Lu et al., 2020).
There are many theoretical studies of Be binaries, such as Be/X-ray binaries, BesdOBs, BeWDs, and so on (e. g., Brown et al., 2019; Raguzova, 2001; Shao & Li, 2014, 2021). In particular, Brown et al. (2019) considered the interaction between the NS and the decretion disk in Be/X-ray binaries. However, this interaction is seldom included in BeWD studies. In the present paper, we focus on the formation of BeWDs via BesdOBs or other channels, and on their destiny (SN Ia or MSP) when the WD's mass reaches the Chandrasekhar mass by accreting from the decretion disk of the Be star. In Section 2, the assumptions and some details of the modelling algorithm are given. In Section 3, the properties of the model population of BeWDs, their formation channels and their destinies are presented. Conclusions follow in Section 4.
## 2 Model
The formation and evolution of BeWDs involve binary interactions: tidal interaction, mass transfer, common envelope evolution (CEE), and so on. We use the rapid binary star evolution (BSE) code (Hurley et al., 2002; Kiel & Hurley, 2006). In the BSE code, the stellar structure and evolution are described by a series of fitting formulae which depend on the stellar mass, the metallicity and the evolutionary age (Hurley et al., 2000). The binary evolution is determined by several binary interactions: mass transfer, tidal interaction, CEE, gravitational radiation, magnetic braking and coalescence. Among them, mass transfer, tidal interaction and magnetic braking can directly affect the stellar rotation. These interactions introduce some uncertain input parameters. If an input parameter is not specifically mentioned in the next subsection, it is taken at its default value from those papers.
### Be star
The mean value of rotational velocities of Be stars is about \(70\%-90\%\) of the Keplerian critical velocity (\(v_{\rm cr}\)), and the range of their masses is between 3 and 22 M\({}_{\odot}\)(Ekstrom et al., 2008; Porter & Rivinius,
\begin{table}
\begin{tabular}{l c c c c c} \hline BeWD & \(P_{\rm orb}\) (days) & \(L_{\rm X}\) (erg s\({}^{-1}\)) & \(M_{\rm WD}\) (M\({}_{\odot}\)) & Be star & galaxy & References \\ \hline XMMU J052016.0-692505 & 510 or 1020 & 10\({}^{34}\)-10\({}^{38}\) & 0.9-1.0 & B0-B3e & LMC & K06 \\ XMMU J010147-715550 & 1264 & \(\sim\)4.4\(\times\)10\({}^{33}\) & 1.0 & O7IIle-B0Ie & SMC & S12 \\ MAXI J0158-744 & - & \(>10^{37}\); 10\({}^{40}\) of peak luminosity & 1.35 & B1-2IIIe & SMC & L12,M13 \\ SWIFT J011511.0-725611 & 17.402 & 2\(\times\)10\({}^{33}\)-3.3\(\times\)10\({}^{36}\) & 1.2 & O9IIle & SMC & K21 \\ SWIFT J004427.3734801 & 21.5 & \(5.7-2.9\times 10^{36}\) & - & O9Ve-B2IIIe & SMC & C20 \\ RX J0527.8-6954 & - & \(4-9\times 10^{36}\) & - & B5eV & LMC & O10 \\ \hline \end{tabular}
\end{table}
Table 1: Parameters of the observed BeWDs. Columns 1 to 7 list the name of BeWDs, orbital period \(P_{\rm orb}\), X-ray luminosity, estimated WD’s mass, the spectral type of Be star, hosted galaxy and references. References: K06-Kahabka et al. (2006); S12-Sturm et al. (2012); L12-Li et al. (2012); M13-Morii et al. (2013);K21-Kennea et al. (2021); C20-Coe et al. (2020); O10-Oliveira et al. (2010).
2003). Following Ekstrom et al. (2008), we assume that a main-sequence star becomes a Be star when its rotational velocity exceeds 0.7 \(v_{\rm cr}\) and its mass is between 3 and 22 M\({}_{\odot}\).
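For clarity, the adopted Be-star criterion can be written as a simple predicate; this is only a restatement of the two conditions above, and the function name and units are our own.

```python
def is_be_star(mass_msun, v_rot, v_cr):
    """Adopted Be-star criterion: mass between 3 and 22 Msun and v_rot > 0.7 * v_cr."""
    return 3.0 <= mass_msun <= 22.0 and v_rot > 0.7 * v_cr

# Example: a 10 Msun main-sequence star rotating at 80% of its critical velocity.
print(is_be_star(10.0, v_rot=0.8, v_cr=1.0))  # True
```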
The formation of Be stars has been investigated in many works (e. g., Ekstrom et al., 2008; de Mink et al., 2013; Shao & Li, 2014, 2021). They meticulously discussed the effects of many uncertain parameters, which include the critical mass ratio (\(q_{\rm cr}\)) for dynamically unstable mass transfer, the efficiency of mass accretion when the accretor is spun up to \(v_{\rm cr}\), the combined parameter \(\lambda\times\alpha_{\rm CE}\) during CEE, the initial mass function, the initial separation and masses, and so on. Considering that BeWDs and their progenitors are located in the MCs or the Milky Way, we take different metallicities (\(Z\)=0.0001, 0.004, 0.008 and 0.02) for the different galaxies. In addition, given the crucial importance of CEE for binary evolution, the effects of the combined parameter \(\lambda\times\alpha_{\rm CE}\) on the BeWD population are discussed.
### Decretion disk and mass-loss rate of Be star
Because the decretion disk around a Be star is quite complicated, its formation and dynamical evolution are still poorly understood (Haubois et al., 2012; Rivinius et al., 2013). According to a large number of numerical simulations, the decretion disk is fed by material ejected from the Be star and diffuses outwards (e. g., Rimulo et al., 2018; Ghoreyshi et al., 2018). Observational evidence and theoretical estimates show that the typical mass of a decretion disk is between \(10^{-10}\) and \(10^{-8}\) M\({}_{\odot}\) (Vieira et al., 2017; Rimulo et al., 2018). Based on the theoretical simulations of Panoglou et al. (2016) and Ghoreyshi et al. (2018), the mass-ejection rate from a Be star into its decretion disk is typically about \(10^{-7}\) M\({}_{\odot}\) yr\({}^{-1}\),
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline BesOB & \(P_{\rm orb}\) (days) & \(M_{\rm sOB}\) (M\({}_{\odot}\)) & \(T_{\rm eff}\) (kK)(sdB) & \(\log L(L_{\odot})\) (sdB) & \(M_{\rm Be}\) (M\({}_{\odot}\)) & \(T_{\rm eff}\) (kK)(Be) & \(\log L(L_{\odot})\) (Be) & References \\ \hline V2119 Cyg & 63.1 & \(1.62\pm 0.28\) & 43.5 & \(2.92^{0.15}_{-0.23}\) & \(8.65\pm 0.35\) & 25.6 & \(3.83\pm 0.02\) & K22 \\
60 Cyg & 147.68 & \(1.2\pm 0.2\) & 42 & 2.78 & \(7.3\pm 1.1\) & 27 & \(3.99\pm 0.04\) & K22 \\
28 Cyg & 246 & - & 45 & \(<\)2.39 & - & 20.47 & \(3.76\pm 0.02\) & K22 \\ \(o\) Puppis & 28.9 & - & - & - & - & - & - & K12 \\ \(\varphi\) Persei & 126.67 & \(1.2\pm 0.2\) & 53 & \(3.79\pm 0.13\) & \(9.6\pm 0.3\) & 29.3 & \(4.16\pm 0.04\) & M15 \\ HR 2142 & 80.9 & 0.7 & \(>43\) & \(>1.7\) & 9 & 21 & \(4.17\pm 0.10\) & P16 \\
59 Cygni & 28.2 & 0.79 & 52.1 & \(3.0\pm 0.1\) & 7.9 & 21.8 & \(4.14\pm 0.12\) & P13 \\ FY CMa & 37.3 & 1.26 & 45 & 3.38 & 12.6 & 27.5 & \(4.43\pm 0.03\) & P13 \\ HD 55606 & 93.8 & 0.9 & 40.9 & \(2.27^{+0.13}_{-0.19}\) & 6.2 & 27.35 & \(3.60\pm 0.03\) & C18 \\ HR 6819 & 40.335 & \(0.46\pm 0.26\) & 16 & \(3.12\pm 0.10\) & \(7\pm 2\) & 20 & \(3.77\pm 0.04\) & B20 \\ \(\zeta\) Tau & 132.987 & 0.87\(\cdot\)1.02 & - & - & 11 & 19.3 & \(3.75\pm 0.04\) & R09 \\ AIS8775 & 78.999 & 1.5 & 12.7 & 2.8 & \(7\pm 2\) & 18 & \(3.10\pm 0.07\) & S20 \\ MX Pup & 5.1526 & 0.6-6.6 & - & - & 15 & 25.1 & \(4.24\pm 0.03\) & C02 \\ \(\chi\) Oph & 34.1 or 138.8 & 3.8 & - & - & 10.9 & 20.9 & \(3.75\pm 0.02\) & A78, H87,T08 \\ HD 161306 & 99.9 & 0.0567 & - & - & - & - & \(3.56\pm 0.01\) & K14 \\ V1150 Tau & - & - & 40 & \(2.11^{+0.14}_{-0.21}\) & - & 20.53 & \(3.47\pm 0.02\) & W21 \\ HR 2249 & - & - & 38.2 & \(2.68^{+0.15}_{-0.23}\) & 8.5 & 21.5 & \(3.55\pm 0.02\) & W21 \\ QY Gem & - & - & 43.5 & \(2.75^{+0.13}_{-0.18}\) & - & 20 & \(3.49\pm 0.03\) & W21 \\ V378 Pup & - & - & 42 & \(2.83^{+0.14}_{-0.20}\) & - & 20 & \(3.99\pm 0.03\) & W21 \\ LS Mus & - & - & 45 & \(2.82^{+0.14}_{-0.53}\) & - & 22.8 & \(3.86\pm 0.03\) & W21 \\ kap01 Aps & - & - & 40 & \(2.64^{+0.14}_{-0.20}\) & - & 23.95 & \(3.83\pm 0.02\) & W21 \\ V846 Ara & - & - & 42 & \(2.28^{+0.14}_{-0.21}\) & - & 19.8 & \(3.39\pm 0.01\) & W21 \\ \(\iota\) Ara & - & - & 33.8 & \(2.60^{+0.15}_{-0.23}\) & - & 25.86 & \(3.95\pm 0.02\) & W21 \\ V750 Ara & - & - & 45 & \(<2.61\) & - & 25 & \(4.49\pm 0.04\) & W21 \\ SLac A & - & - & 45 & \(<2.71\) & - & 27.38 & \(4.17\pm 0.05\) & W21 \\ \hline \end{tabular}
\end{table}
Table 2: Parameters of the observed BesdOBs in the Galaxy. Columns 1 to 9 list the name of the BesdOB, the orbital period \(P_{\rm orb}\), the mass, effective temperature and luminosity of the sdOB, the mass, effective temperature and luminosity of the Be star, and the references. References: K22-Klement et al. (2022); K12-Koubsky et al. (2012); M15-Mourard et al. (2015); P13-Peters et al. (2013); P16-Peters et al. (2016); C18-Chojnowski et al. (2018); B20-Bodensteiner et al. (2020); R09-Ruzdjak et al. (2009); S20-Shenar et al. (2020); C02-Carrier et al. (2002); A78-Abt & Levy (1978); H87-Harmanec (1987); T08-Tycner et al. (2008); K14-Koubsky et al. (2014); W21-Wang et al. (2021).
while most of the ejected material loses its angular momentum through mutual interaction and is re-accreted by the Be star. The typical mass-loss rate is about \(10^{-9}\) M\({}_{\odot}\) yr\({}^{-1}\) (Panoglou et al., 2016; Ghoreyshi et al., 2018).
A theoretical prescription for the mass-loss rate of B stars was developed by Vink et al. (2000, 2001), in which the rate is calculated from stellar parameters (luminosity, effective temperature and metallicity). To account for stellar rotation, Langer (1998) gave the enhanced mass-loss rate
\[\dot{M}_{\Omega}=\left(\frac{1}{1-\Omega/\Omega_{\rm cr}}\right)^{\beta}\dot {M}_{0}, \tag{1}\]
where \(\dot{M}_{0}\) is the mass-loss rate calculated following Vink et al. (2001), \(\beta=0.43\) (Langer, 1998), and \(\Omega\) and \(\Omega_{\rm cr}\) are the angular velocity and the critical angular velocity, respectively.
The outflow of a Be star consists of a slow equatorial decretion disk and a polar wind (Bogovalov & Petrov, 2021). The mass density of the decretion disk can be higher than that of the polar wind by two orders of magnitude (Bjorkman & Cassinelli, 1993). Following Lu et al. (2009), we use an aspherical structure to model the stellar wind of the Be star and introduce a parameter, \(f_{\rm W}\), to describe the fraction of the mass-loss rate carried by the decretion disk:
\[\dot{M}_{\rm disk}=f_{\rm W}\times\dot{M}_{\Omega}. \tag{2}\]
Here, \(f_{\rm W}=0\) means that there is no decretion disk and the stellar wind is spherical, while \(f_{\rm W}=0.9\) means that 90% of the wind mass is channelled into the decretion disk and the remaining wind is spherical. Considering the density structure of the stellar wind from a Be star (Bjorkman & Cassinelli, 1993), we take \(f_{\rm W}=0.9\) in this work.
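To make the wind prescription concrete, the short Python sketch below evaluates Eq. (1) and Eq. (2); \(\dot{M}_{0}\) is supplied by hand here instead of being computed from the Vink et al. (2001) prescription, and the example numbers are purely illustrative.

```python
def enhanced_mass_loss(mdot_0, omega_ratio, beta=0.43):
    """Eq. (1): Mdot_Omega = Mdot_0 / (1 - Omega/Omega_cr)**beta,
    with beta = 0.43 (Langer 1998); omega_ratio = Omega/Omega_cr < 1."""
    return mdot_0 / (1.0 - omega_ratio) ** beta

def split_wind(mdot_omega, f_w=0.9):
    """Eq. (2): a fraction f_W of the wind feeds the decretion disk,
    the remaining (1 - f_W) is lost as a spherical wind."""
    mdot_disk = f_w * mdot_omega
    mdot_spherical = (1.0 - f_w) * mdot_omega
    return mdot_disk, mdot_spherical

# Example: a Be star at 90% of critical rotation with Mdot_0 = 1e-9 Msun/yr.
mdot = enhanced_mass_loss(1e-9, 0.9)
print(split_wind(mdot))
```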
For the spherical component of the stellar wind, we use the Bondi & Hoyle (1944) formula to estimate the mass-accretion rate (see details in Hurley et al. (2002)). For the decretion disk, there is no generally accepted accretion model, because the accretion rate depends on the disk structure, the dynamical model, and the inclination between the orbital plane and the disk. It is well known that the Be disk is misaligned with the orbital plane in BeXBs, mainly because the binary system experiences an asymmetric supernova explosion when the NS is born. However, considering that the progenitors of BeWDs do not undergo a violent event like a supernova, we assume that the decretion disk is aligned with the orbital plane of the BeWD. Therefore, for simplicity, we assume that the WD in a BeWD can accrete all of the material supplied by the decretion disk. Of course, the value of \(f_{\rm W}\) can be changed to control the ratio of the accreted material to the matter lost by the Be star.
### Method of population synthesis
With the population synthesis method used in a series of previous papers (Lu et al., 2009, 2012, 2013; Zhu et al., 2015, 2019, 2021; Han et al., 2020), we simulate the formation and evolution of the BeWD population. The method involves several input parameters: the initial mass function (IMF) of the primaries, the mass-ratio distribution of the binaries, and the distribution of initial orbital separations. Following the previous papers, we use the IMF of Miller & Scalo (1979), a constant mass-ratio distribution, and a flat distribution of \(\log a_{\rm i}\) over \(1<\log(a_{\rm i}/R_{\odot})<6\), where \(a_{\rm i}\) is the initial orbital separation. We evolve \(10^{8}\) initial binary systems with initially circular orbits for each simulation. Many input parameters (the metallicity, the \(\alpha\) value for the CE, the accretion efficiency during mass transfer, the criterion for dynamical mass transfer, etc.) introduce uncertainties into binary population synthesis (BPS) (Han et al., 2020). For simplicity, we only consider the effects of the metallicity and of the CEE on the BeWD population in the present paper.
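A minimal sketch of the initial-parameter sampling is given below. The generating function used for the Miller & Scalo (1979) IMF is the Eggleton, Fitchett & Tout (1989) approximation commonly adopted in BPS codes; this specific form, the random seed and the small sample size are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def sample_primary_mass(n):
    """Primary masses (Msun) drawn from the Miller & Scalo (1979) IMF,
    using the Eggleton, Fitchett & Tout (1989) generating function
    (an assumed numerical form, not stated in the paper)."""
    x = rng.uniform(0.0, 1.0, n)
    return 0.19 * x / ((1.0 - x) ** 0.75 + 0.032 * (1.0 - x) ** 0.25)

def sample_initial_binaries(n):
    """Draw (M1, M2, a_i) with the distributions quoted in the text:
    constant mass-ratio distribution and a flat distribution in log a_i
    over 1 < log10(a_i/Rsun) < 6; orbits are initially circular."""
    m1 = sample_primary_mass(n)               # Msun
    q = rng.uniform(0.0, 1.0, n)              # flat mass-ratio distribution
    m2 = q * m1                               # Msun
    a_i = 10.0 ** rng.uniform(1.0, 6.0, n)    # Rsun
    return m1, m2, a_i

# Example: a small sample instead of the paper's 1e8 systems.
m1, m2, a_i = sample_initial_binaries(100_000)
```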
## 3 Results
As mentioned in the Introduction, the present paper focuses on the formation channels of BeWDs (especially the channel through BesdOBs) and on the destiny of BeWDs in which the WDs accrete material via the Be disk. We calculate \(10^{8}\) binary evolutionary sequences for different metallicities (\(Z=0.0001, 0.004, 0.008\) and 0.02) and combining parameters (\(\lambda\times\alpha_{\rm CE}=0.25, 0.5\) and 1.0). The effects of \(\lambda\times\alpha_{\rm CE}\) are only discussed in the next subsection. Unless otherwise specified, \(\lambda\times\alpha_{\rm CE}\) is taken as 0.5.
### Progenitors of BeWDs
Among all \(10^{8}\) initial binary systems, about 4.1% (\(Z=0.02\)) to 3.7% (\(Z=0.0001\)) evolve into BeWDs, and about 4.4% (\(Z=0.02\)) to 6.2% (\(Z=0.0001\)) become BesdOBs. About 33% (\(Z=0.0001\)) to 51% (\(Z=0.02\)) of BesdOBs evolve into BeWDs, which means that BesdOBs are the most important progenitors of BeWDs. The average lifetime of BeWDs in our simulations is between about 25 (\(Z=0.02\)) and 20 (\(Z=0.0001\)) Myr. Following Shao & Li (2014), we take a constant star formation rate of 5 M\({}_{\odot}\) yr\({}^{-1}\) in the Galaxy and estimate that there are about \(4.0\times 10^{5}\) BeWDs in the Galaxy, which is approximately consistent with the results of Shao & Li (2014). The above uncertainty originates from the metallicity: low metallicity results in a low opacity, so stars with low metallicity have smaller radii than those with high metallicity. Long orbital periods are unfavorable for the formation of BeWDs and BesdOBs, as shown by Figure 1, which gives the distributions of the initial primary masses and the initial orbital periods for the progenitors of BeWDs, BesdOBs, and those binaries in which BesdOBs evolve into BeWDs.
The initial masses of the primaries in the progenitors of BeWDs (i.e., the initial masses of the WDs' progenitors) are between about 2 and 12 M\({}_{\odot}\), and mainly lie between about 3 and 8 M\({}_{\odot}\). The main reason is that these primaries must transfer enough mass for their companions to be spun up into Be stars with masses higher than 3 M\({}_{\odot}\), while the primaries themselves must evolve into WDs. The initial orbital periods of the BeWD progenitors are mainly distributed over three zones separated by two gaps around \(P_{\rm orb}^{\rm i}\sim 10^{2}\) and \(10^{3}\) days. The binary systems with \(P_{\rm orb}^{\rm i}\gtrsim 10^{3}\) days undergo CEE when the primaries evolve onto the AGB, those with \(10^{2}\lesssim P_{\rm orb}^{\rm i}\lesssim 10^{3}\) days do so when the primaries evolve through the Hertzsprung gap or onto the FGB, while those with \(P_{\rm orb}^{\rm i}\lesssim 10^{2}\) days undergo stable mass transfer when the primaries fill their Roche lobes during the MS phase. The progenitors with \(P_{\rm orb}^{\rm i}\lesssim 10^{2}\) days first turn into BesdOBs and then evolve into BeWDs; another group of such progenitors has initial orbital periods ranging from hundreds to thousands of days. The detailed evolution is presented in the third subsection.
Besides the metallicity, CEE also has great effects on BeWDs. In this work, we calculate the effects of the combining parameter \(\lambda\times\alpha_{\rm CE}\) on BeWD and BesdOB binaries. The \(\lambda\times\alpha_{\rm CE}\) is taken as 0.25, 0.5 and 1.0 in the models with \(Z=0.02\). We find that the fractions of binary systems evolving into BeWDs are 3.5%, 4.1% and 4.6%, respectively. The larger \(\lambda\times\alpha_{\rm CE}\) is, the more easily the CE is ejected; that is, binary systems that have just formed a WD can more easily survive the CEE. A comparison of Figures 1 and 2 shows that CEE mainly affects the evolution of the binary systems with \(10^{2}\lesssim P_{\rm orb}^{\rm i}\lesssim 10^{3}\) days. As mentioned in the last paragraph, these binary systems can undergo CEE. When \(\lambda\times\alpha_{\rm CE}\) is small, the binary systems hardly survive because the fraction of the orbital energy used to eject the CE is too small. Therefore, these binary systems cannot evolve into BeWDs when \(\lambda\times\alpha_{\rm CE}=0.25\).
Compared with the BeWD progenitors, the progenitors of BesdOBs have a narrower distribution of initial orbital periods, because the primaries must lose their H-rich envelopes before He is exhausted in the core; they have a wider distribution of initial primary masses, because sdOBs can also evolve into NSs or even black holes. As Figure 3 shows, the initial masses of the secondaries have a similar distribution.
As the important progenitors of BeWDs, BesdOBs are also very interesting in their own right. Figure 4 shows the distributions of sdOB masses and Be star masses with the orbital periods in BesdOBs. In our simulations, the distribution of orbital periods has two peaks. One is at about 3 days; these BesdOBs have undergone CEE. The other is at about 40 days; most of these have undergone stable mass transfer via Roche-lobe overflow. In the Galaxy, there are about 11 known BesdOBs or candidates with measured orbital periods. As Figure 4 shows, these known BesdOBs and candidates are covered by the simulated BesdOBs with long orbital periods. Observationally, there is a lack of BesdOBs with orbital periods between about 2 and 10 days. One main reason is that such orbital periods are so short that the Be stars soon fill their Roche lobes and the BesdOBs undergo a second CEE. Another is that a Be disk can hardly form at such short orbital periods. Theoretically, a He star with a mass higher than about 1.7 M\({}_{\odot}\) finally evolves into a NS or a black hole (e. g., Hurley et al., 2000). Except for \(\chi\) Oph, the masses of the sdOBs in the 11 known BesdOBs are lower than about 1.7 M\({}_{\odot}\), and they have orbital periods longer than about 28 days; therefore, these BesdOBs will become BeWDs. In our simulations, about 25% of BesdOBs have massive He stars with a mass higher than 1.7 M\({}_{\odot}\). They can evolve into Be/X-ray binaries if the binaries
Figure 1: The initial primary masses vs. the initial orbital periods for the progenitors of BeWDs (top panels), BesdOBs (middle panels) and those binaries in which BesdOBs can evolve into BeWDs (bottom panels). Metallicities in different simulations are given in the top-middle zone of each panel.
Figure 2: Similar to Figure 1, but for the different \(\alpha_{\rm CE}\times\lambda\) during CEE.
can survive the supernova explosion. If the companion of the Be star in \(\chi\) Oph is indeed an sdOB star with a mass of about 3.8 M\({}_{\odot}\) (Harmanec, 1987), \(\chi\) Oph may evolve into a Be/X-ray binary.
Figure 5 gives the HR diagrams of the Be stars and sdOBs in BesdOBs. The Be stars in the known BesdOBs are covered very well by our simulations with high metallicity. However, most of the observed sdOBs are located in a region where the probability of forming BesdOBs is very low, especially HR 6819 and ALS 8775. The main reason is that the luminosity and effective temperature in our models depend only on the mass of the He star and the evolutionary time (Hurley et al., 2000), which results in a narrow region in the HR diagram. HR 6819 is a very intriguing object. Rivinius et al. (2020) suggested that HR 6819 is a triple system composed of a close inner binary, consisting of a B-type giant plus a black hole, and an outer Be star in a wide orbit. However, based on an orbital analysis, Bodensteiner et al. (2020) considered HR 6819 to be a BesdOB, which is supported by new high-angular-resolution observations in Frost et al. (2022). Based on a binary evolution model, Bodensteiner et al. (2020) suggested that the sdOB in HR 6819 has just evolved towards higher effective temperature after mass transfer via Roche-lobe overflow, and that HR 6819 is therefore very rare. Similarly, ALS 8775, also called LB-1, was considered to be a binary consisting of a B-type star in a 79-day orbit with an about 70 M\({}_{\odot}\) black hole (Liu et al., 2019). However, Shenar et al. (2020) suggested that ALS 8775 comprises a Be star and a stripped star, that is, it is a BesdOB. Compared with the sdOBs in other known BesdOBs, the sdOBs in HR 6819 and ALS 8775 have very low effective temperatures. As mentioned by Bodensteiner et al. (2020), these sdOBs have just lost their hydrogen-rich envelopes and are evolving towards higher effective temperatures. Their results are consistent with ours.
### BeWD population
All 6 known BeWDs are detected in the MCs. Considering that the metallicity within a galaxy is not uniformly distributed, we plot the BeWDs from all simulations with different metallicities. As illustrated in Figure 6, our results cover well the 4 BeWDs with known WD masses and orbital periods. The BeWDs with orbital
Figure 3: Similar to Figure 1, but for the initial primary masses vs. secondary masses.
periods shorter than about 300 days (such as SWIFT J011511.0-725611 and J004427.3734801) have undergone the BesdOB phase, while the remaining BeWDs (such as XMMU J052016.0-692505 and J010147-715550) experience CEE when their primaries evolve onto the asymptotic giant branch.
A similar result appears in Figure 7, which shows the distributions of the WD and Be star masses in BeWDs. Obviously, the mass distribution has two zones. One zone has large \(q\) (\(q=M_{\rm Be}/M_{\rm WD}\)); these BeWDs have undergone efficient mass transfer, and BesdOBs can be their progenitors. The other zone has low \(q\); these BeWDs have undergone inefficient mass transfer.
Observationally, it is difficult to find a BeWD. The optical properties of BeWDs are similar to those of binaries consisting of a Be star and a neutron star, and it is hard to distinguish them (Coe et al., 2020). Compared with neutron-star X-ray binaries, BeWDs emit in the soft X-ray range. Usually, if the mass-accretion rate is higher than the minimum stable-burning rate (\(\dot{M}_{\rm cr}\sim 10^{-7}\) M\({}_{\odot}\) yr\({}^{-1}\)), the accreting WD is a persistent soft X-ray source; otherwise it is a transient X-ray source during nova bursts. For a persistent soft X-ray source, or one during a nova burst, the X-ray luminosity can be estimated from the nuclear burning rate of the accreted material (e. g., Wolf et al., 2013; Chen et al., 2015). During the quiescent phase, the X-ray emission is mainly produced by the gravitational potential energy released by the accreted material. All known BeWDs are transient X-ray sources, and most of the time they are in the quiescent phase. Following Chen (2022), although soft X-ray emission can easily be absorbed by the interstellar medium, the X-ray luminosity of a BeWD can be estimated from the mass-accretion rate via
\[L_{\rm X}=\frac{GM_{\rm WD}}{R_{\rm WD}}\dot{M}, \tag{3}\]
where \(G\) is the gravitational constant, \(M_{\rm WD}\) and \(R_{\rm WD}\) are the mass and radius of the WD, and \(\dot{M}\) is the mass-accretion rate.
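The following sketch evaluates Eq. (3). The WD radius is not specified in the text, so we assume, for illustration, the Nauenberg (1972) mass-radius relation used in Hurley et al. (2000); the example accretion rate is arbitrary.

```python
import numpy as np

G = 6.674e-8      # gravitational constant (cgs)
MSUN = 1.989e33   # g
RSUN = 6.957e10   # cm
YR = 3.156e7      # s
M_CH = 1.44       # Chandrasekhar mass in Msun

def wd_radius_rsun(m_wd):
    """WD radius in Rsun from the Nauenberg (1972) mass-radius relation
    (as in Hurley et al. 2000); an assumption for this sketch."""
    return 0.0115 * np.sqrt((M_CH / m_wd) ** (2.0 / 3.0)
                            - (m_wd / M_CH) ** (2.0 / 3.0))

def lx_accretion(m_wd, mdot):
    """Eq. (3): L_X = G M_WD Mdot / R_WD, with m_wd in Msun and
    mdot in Msun/yr; returns erg/s."""
    r_wd = wd_radius_rsun(m_wd) * RSUN
    return G * (m_wd * MSUN) * (mdot * MSUN / YR) / r_wd

# Example: a 0.8 Msun WD accreting 1e-9 Msun/yr from the Be disk.
print(f"L_X ~ {lx_accretion(0.8, 1e-9):.2e} erg/s")
```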
Figure 8 gives the estimated X-ray luminosities for different values of \(f_{\rm W}\). For the models with \(f_{\rm W}=0\) (a spherical stellar wind, the right sub-figure of Figure 8), our results hardly cover the observational samples. Even for the standard models with \(f_{\rm W}=0.9\) (a stellar wind dominated by the decretion disk,
Figure 4: The sdOB masses vs. the orbital periods (left sub-figure) and the sdOB masses vs. the Be star masses (right sub-figure) in BesdOBs. The observations listed in Table 2 are shown by red points, and HR 6819 is shown by a green point (see text).
the left sub-figure of Figure 8), the known BeWDs appear in the top region. There are two possible reasons. One is that our models underestimate the mass-loss rate of Be stars: Carciofi et al. (2012) suggested that the mass-loss rate through the decretion disk can reach about \(10^{-10}-10^{-7}\) M\({}_{\odot}\) yr\({}^{-1}\) (see also Rimulo et al., 2018; Ghoreyshi et al., 2018), which is higher than the values observed in B stars (Puls et al., 2008). The other is that the X-ray luminosities of these BeWDs were measured during outburst. In BeWDs, outbursts are divided into two types. Type-I outbursts originate from thermonuclear runaway (TNR); usually, the X-ray luminosity produced by TNR in BeWDs is too faint to be detected. Type-II outbursts are triggered by accretion-disk instability. SWIFT J011511.0-725611 recently experienced a type-II outburst (Kennea et al., 2021). The X-ray luminosity during a type-II outburst can be higher than that during the quiescent phase by at least two orders of magnitude (Kahabka et al., 2006; Kennea et al., 2021). Therefore, the model with a stellar wind dominated by the decretion disk is preferred.
### BeWD destiny
Theoretically, the destiny of a BeWD depends not only on its orbital period and mass ratio (\(q=\frac{M_{\rm Be}}{M_{\rm WD}}\)), but also on the WD type (CO or ONe WD). In our simulations, the ratios of COWDs to ONeWDs in BeWDs are about 7/3 (\(Z=0.02\)) and 6/4 (\(Z=0.0001\)). This ratio is mainly determined by the initial mass ranges forming CO and ONe WDs: the former corresponds to initial masses in BeWDs between about 2 and 6 M\({}_{\odot}\), while the latter corresponds to initial masses between about 6 and 12 M\({}_{\odot}\).
Figure 9 shows the distributions of the orbital periods and \(q\) for BeCOWDs and BeONeWDs. There are 4 zones, labelled 'A', 'B', 'C' and 'D'; each zone represents a different evolutionary channel. The proportions of BeCOWDs produced by the A, B, C and D channels are about 20%, 35%, 25% and 20% in all simulations, except in the extremely metal-poor simulation (\(Z=0.0001\)), where they are about 5%, 50%, 35% and 10%. For BeONeWDs, the proportions are about 40%, 20%, 20% and 20% in the metal-rich models, while they are about 0, 50%, 30% and 25% in the extremely metal-poor simulation.
Figure 5: Similar to Figure 4, but for HR diagrams of Be stars (left sub-figure) and sdOBs (right sub-figure)
As mentioned in subsection 3.1, the BeWDs in the A and B zones have passed through the BesdOB phase. The progenitors of the BeCOWDs (or BeONeWDs) in the A zone have orbital periods between about 20 (30) and 200 (600) days. Their primaries fill their Roche lobes in the late Hertzsprung-gap phase or even in the giant phase. When they evolve onto the first giant branch, the progenitors experience CEE. If the binaries do not merge, the orbital periods shrink to a few per cent of their initial values, the primaries become sdOBs, and the secondaries are spun up to Be stars via mass accretion before the CEE or via tidal interaction after the CEE. The progenitors become BesdOBs and then evolve into the BeWDs with the shortest orbital periods. The progenitors of the BeCOWDs (or BeONeWDs) in the B zone have initial orbital periods shorter than 20 (30) days. They fill their Roche lobes during the main sequence and undergo stable and efficient mass transfer, so that the secondary masses come to exceed the primary masses. Even when the primaries evolve onto the first giant branch, the mass transfer is still stable, which lengthens the orbital periods. For the BeWDs in the C and D zones, the progenitors have wide orbits. The progenitors of the BeCOWDs (or BeONeWDs) in the C zone have orbital periods between about 200 (600) and 1000 (2000) days; their primaries fill their Roche lobes only when they evolve into the early AGB phase. Usually, these binaries undergo CEE. For the BeWDs in the D zone, the orbital periods of their progenitors are so long that the primaries cannot fill their Roche lobes; the secondaries are spun up by accreting the stellar wind. For the very metal-poor model (\(Z=0.0001\)), the A zone lacks BeWDs. The main reason is that very low metallicity results in very small stellar radii, so that the primaries cannot fill their Roche lobes.
#### 3.3.1 Merger of a WD and a non-degenerate star
Be stars or their successors in the BeWDs in the A, B and C zones of Figure 9 can fill their Roche lobes as they evolve. If \(q\) is higher than \(q_{\rm cr}\), CEE occurs. It is well known that CEE can produce two
Figure 6: The WD masses vs. the orbital periods for BeWDs (left sub-figure) and for the BeWDs which come from BesdOBs (right sub-figure). The red points are the known BeWDs listed in Table 1. Observationally, the orbital period of XMMU J052016.0-692505 is uncertain and has two possible values; the longer orbital period (1020 days) is shown by the green point.
Figure 8: Similar to Figure 6, but for the orbital periods vs. X-ray luminosities calculated by Eq. (3). The left and the right sub-figures represent the simulations of \(f_{\rm W}=0.9\) and 0, respectively.
Figure 7: Similar to Figure 6, but for the masses of the WDs and Be stars in BeWDs.
different outcomes: a merger if the CE cannot be ejected, or otherwise a close binary. The former usually occurs when the Be stars or their successors are on the main sequence or in the Hertzsprung gap, which produces a merger of a WD and an H-rich star. The latter usually forms a close binary consisting of a WD and a He star. As the He star evolves, it also fills its Roche lobe, and CEE may occur again, which produces a merger of a WD and a He star, or a double WD.
Figure 10 shows the mass distributions of the stars produced by the merger of a WD and a non-degenerate star. Here, we do not consider mass loss during the merger events. In our simulations, about 60% (\(Z=0.02\)) to 70% (\(Z=0.0001\)) of BeWDs finally merge into single stars, and about 80% (\(Z=0.02\)) to 90% (\(Z=0.0001\)) of these merger events involve a WD and an H-rich star. These mergers may observationally correspond to luminous red novae (Soker & Tylenda, 2003; Ivanova et al., 2013; Howitt et al., 2020). The remaining mergers, which involve a WD and a He-rich star, may form He giant stars (e. g., Hurley et al., 2002). Their progenitors have successfully experienced CEE, and the envelopes of the Be stars or their successors have been ejected. Therefore, the masses of these He giant stars are lower than about 2.0 M\({}_{\odot}\).
#### 3.3.2 Double white dwarf binaries
About 30%-40% of the successors of BeWDs do not merge during the CEEs; they usually evolve into double WDs. These represent the most likely gravitational wave sources (GWSs) to be detected by the \(LISA\) mission (e.g. Amaro-Seoane et al., 2017). Peters & Mathews (1963) gave the GW luminosity of a binary system. For a given orbital period, and assuming a sinusoidal waveform, the strain amplitude of the GW, \(h\), can be written as
\[h=5.0\times 10^{-22}\left(\frac{\mathcal{M}}{M_{\odot}}\right)^{5/3}\left(\frac{P_{\rm orb}}{1{\rm hour}}\right)^{-2/3}\left(\frac{d}{1{\rm kpc}}\right)^{-1}, \tag{4}\]
Figure 9: Distributions of the orbital periods vs. \(q\) (\(q=\frac{M_{\rm Be}}{M_{\rm WD}}\)) for BeCOWDs (left sub-figure) and BeONeWDs (right sub-figure). The letters 'A', 'B', 'C' and 'D' represent different evolutionary channels (see text).
where \(\mathcal{M}=(M_{1}M_{2})^{3/5}/(M_{1}+M_{2})^{1/5}\) is the chirp mass. Considering that the distances of most known double WDs in the Galaxy are shorter than 1 kpc (Korol et al., 2017; Kupfer et al., 2018; Burdge et al., 2019; Li et al., 2020), we take \(d=1\) kpc. Figure 11 gives the distribution of GWs from double WDs in the strain-frequency space. The GW frequencies potentially detectable by LISA mainly lie between about \(3\times 10^{-3}\) and \(10^{-2}\) Hz, which depends strongly on the LISA sensitivity.
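As an illustration, the sketch below evaluates the chirp mass and Eq. (4) for a hypothetical double WD; the component masses, orbital period and distance are arbitrary example values.

```python
def chirp_mass(m1, m2):
    """Chirp mass in Msun for component masses in Msun."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def strain_amplitude(m1, m2, p_orb_hr, d_kpc=1.0):
    """Eq. (4): h = 5.0e-22 (M_chirp/Msun)^(5/3) (P_orb/1 hr)^(-2/3) (d/1 kpc)^(-1)."""
    mc = chirp_mass(m1, m2)
    return 5.0e-22 * mc ** (5.0 / 3.0) * p_orb_hr ** (-2.0 / 3.0) / d_kpc

def gw_frequency(p_orb_hr):
    """Dominant GW frequency of a circular binary, f_GW = 2 / P_orb, in Hz."""
    return 2.0 / (p_orb_hr * 3600.0)

# Example: a 0.6 + 0.8 Msun double WD with a 20-minute orbit at 1 kpc.
p_hr = 20.0 / 60.0
print(gw_frequency(p_hr), strain_amplitude(0.6, 0.8, p_hr))
```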
### Supernova Ia progenitors
Following Lu et al. (2009), we also construct an aspherical stellar-wind model for BeWDs. Whether the mass of an accreting WD can increase efficiently depends on the mass-accretion rate (\(\dot{M}_{\rm a}\)) (e. g., Nomoto et al., 2007). If \(\dot{M}_{\rm a}\) is higher than a critical value (\(\dot{M}_{\rm cr}\)), the accreted hydrogen-rich material burns steadily; otherwise a TNR occurs on the surface of the accreting WD. In the former case, the accreted material is efficiently converted into WD matter after the burning. In the latter case, part of the accreted material is ejected during the TNR, and the WD matter is even eroded when \(\dot{M}_{\rm a}\) is lower than about \(\frac{1}{8}\dot{M}_{\rm cr}\) (Yaron et al., 2005; Lu et al., 2006).
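The three accretion regimes described above can be summarized by a simple classifier; \(\dot{M}_{\rm cr}\) is taken here as the illustrative \(10^{-7}\) M\({}_{\odot}\) yr\({}^{-1}\) quoted earlier, rather than being computed from the WD mass.

```python
def accretion_regime(mdot_a, mdot_cr=1e-7):
    """Classify the hydrogen-burning regime on an accreting WD using the
    thresholds quoted in the text (rates in Msun/yr)."""
    if mdot_a >= mdot_cr:
        return "steady burning: accreted material converted into WD matter"
    if mdot_a >= mdot_cr / 8.0:
        return "TNR (nova): part of the accreted material is ejected"
    return "TNR with erosion: the WD matter itself can be eroded"

print(accretion_regime(1e-9))  # typical rate quoted for Be-disk feeding
```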
Figure 12 shows the distribution of WD masses and mass-accretion rates in BeWDs. Obviously, the mass-accretion rates of the WDs in BeWDs are almost all lower than \(\dot{M}_{\rm cr}\), and the majority of them are lower than \(\frac{1}{8}\dot{M}_{\rm cr}\). Therefore, in our simulations there is no case in which the mass of a COWD grows to 1.35 M\({}_{\odot}\) or the mass of an ONeWD grows to 1.44 M\({}_{\odot}\) in a BeWD, and BeWDs can hardly produce SNe Ia.
## 4 Conclusions
Using the method of population synthesis, we investigate the formation and the destiny of BeWDs. The effects of the metallicity and of the combining parameter \(\lambda\times\alpha_{\rm CE}\) of CEE on the BeWD population
Figure 10: Mass distributions of the stars produced by the merger of a WD and a non-degenerate star. Different lines represent mergers of different types of WD and star, as indicated in the top-middle zone. The frequencies of mergers involving COWDs and ONeWDs are each normalized to 1.
Figure 11: Distribution of gravitational wave sources from double WDs in the strain-frequency space. Green lines give the LISA sensitivity.
Figure 12: Distributions of WD mass vs. the mass-accretion rate in BeWDs. Red lines represent the critical mass-accretion rate (\(\dot{M}_{\rm cr}\)) for the accreted hydrogen to burn steadily on the surface of the WD.
are discussed. For \(\lambda\times\alpha_{\rm CE}=0.5\), about 3.7% (\(Z=0.0001\)) to 4.1% (\(Z=0.02\)) of binary systems evolve into BeWDs; about 60% (\(Z=0.0001\)) to 70% (\(Z=0.02\)) of BeWDs contain a COWD, and 30%-40% of them contain an ONeWD; about 40% (\(Z=0.0001\)) to 45% (\(Z=0.02\)) of BeCOWDs have undergone CEE, about 35% (\(Z=0.0001\)) to 50% (\(Z=0.02\)) of them have experienced heavy mass transfer, and about 10% (\(Z=0.0001\)) to 20% (\(Z=0.02\)) of them exchange material via stellar winds; for BeONeWDs, the corresponding proportions are 50%-60%, 20%-30% and 20%-30%, respectively. Changing the combining parameter \(\lambda\times\alpha_{\rm CE}\) from 0.25 to 1.0 introduces an uncertainty of a factor of about 1.3 in the BeWD population; it mainly affects the BeWDs formed via the CEE of binary systems with \(10^{2}\lesssim P^{\rm i}_{\rm orb}\lesssim 10^{3}\) days.
About 30%-50% of BeWDs come from BesdOBs, which are thus important progenitors. Our results cover well the observed properties of the BesdOB population, including the rare sources HR 6819 and ALS 8775. BesdOBs mainly evolve into BeWDs with orbital periods shorter than about 300 days. About 60%-70% of BeWDs undergo a merger between a WD and a non-degenerate star, in which about 90% of the companions are H-rich stars and the rest are He stars. About 30%-40% of BeWDs turn into double WDs, which are potential GW sources for the LISA mission in a frequency band between about \(3\times 10^{-3}\) and \(10^{-2}\) Hz. Due to the low mass-accretion rates of the WDs, BeWDs can hardly become the progenitors of SNe Ia.
One should note that the uncertainties of the BPS in our work only result from the metallicity and the CEE. It is well known that many other input parameters (the accretion efficiency during mass transfer, the criterion for dynamical mass transfer, etc.) can also affect the results of BPS. If the effects of these parameters were considered as well, the uncertainties of the BPS would be larger.
###### Acknowledgements.
This work received the generous support of the Natural Science Foundation of Xinjiang (No. 2021D01C075), the National Natural Science Foundation of China (project Nos. 12163005, U2031204 and 11863005), and the science research grants from the China Manned Space Project (No. CMS-CSST-2021-A10).
# Velocity and confinement of edge plasmons in HgTe-based 2D topological insulators

Alexandre Gourmelon, Elric Frigerio, Hiroshi Kamata, Lukas Lunczer, Anne Denis, Pascal Morfin, Michael Rosticher, Jean-Marc Berroir, Gwendal Fève, Bernard Plaçais, Hartmut Buhmann, Laurens W. Molenkamp, Erwann Bocquillon

arXiv:2302.11281v1, published 2023-02-22, http://arxiv.org/abs/2302.11281v1
###### Abstract
High-frequency transport in the edge states of the quantum spin Hall (QSH) effect has to date rarely been explored, though it could cast light on the scattering mechanisms taking place therein. We here report on the measurement of the plasmon velocity in topological HgTe quantum wells both in the QSH and quantum Hall (QH) regimes, using harmonic GHz excitations and phase-resolved detection. We observe low plasmon velocities corresponding to large transverse widths, which we ascribe to the prominent influence of charge puddles forming in the vicinity of edge channels. Together with other recent works, it suggests that puddles play an essential role in the edge state physics and probably constitute a main hurdle on the way to clean and robust edge transport.
Since its experimental discovery [1] in 2007, the quantum spin Hall (QSH) effect has been intensively studied, as its helical edge states offer an exciting playground for spin-polarized edge transport and topological superconductivity, with possible applications in both spintronics and topological quantum computation. Prominent transport signatures have been observed in HgTe quantum wells (QWs) such as non-local and spin-polarized transport [2; 3; 4; 5], or the fractional Josephson effect in HgTe-based Josephson junctions [6; 7]. Alternatively, InAs/GaSb double QWs [8; 9; 10] or layered materials such as bismuthene [11] or WTe\({}_{2}\)[12] have also been successfully identified as QSH insulators.
In this context, the investigation of high-frequency transport in 2D topological insulators, such as the HgTe quantum wells studied here, is of high interest. The charge relaxation timescales of the QSH edge carriers have been measured [13] by microwave capacitance spectroscopy [14; 15], revealing that the edge states have a larger than predicted density of states, possibly due to neighboring puddles. It also suggests that the QSH effect could be enhanced in dynamical studies by exploiting the difference in transport or scattering timescales between topological and bulk carriers. We here explore another aspect, namely the velocity of plasmons propagating in the edge channels. In the quantum Hall effect of GaAs, InAs or graphene samples, the velocities of chiral edge magneto-plasmons have been widely studied, highlighting the role of intra- and inter-channel Coulomb interaction, of the confinement edge potential, of the screening of Coulomb interaction by nearby metallic gates, and of dissipation in the bulk [16; 17; 18; 19; 20; 21; 22; 23; 24; 25].
Here, we report on a systematic study of the velocities of plasmons in a HgTe quantum well, in the classical and quantum Hall regime (magnetic fields \(B\) up to \(8\,\mathrm{T}\)) of the conduction band, as well as in the topological gap where the quantum spin Hall effect takes place (at \(B=0\)). The measurements are performed in a dilution refrigerator at a temperature of \(T\simeq 20\,\mathrm{mK}\), for frequencies \(f\simeq 3-10\,\mathrm{GHz}\). The (phase) velocity is accessed via the phase shift generated by the delay of a plasmon excitation propagating between a local source and a probe contact in the HgTe QWs. Though phase and group velocities may differ, they are known to coincide in the low-energy limit, which we experimentally confirm (see Supplementary Online Material). The phase shift can be rather accurately measured, even over small distances for which time-resolved techniques [22; 26; 27] would be inoperable due to insufficient delay. This allows for a rather short propagation length \(l\) (ranging between \(3\) and \(7\,\mathrm{\SIUnitSymbolMicro m}\)), in order to approach the ballistic length which does not exceed \(1\,\mathrm{\SIUnitSymbolMicro m}\) in our device. Finally, the electron density \(n\) is tuned via a gate voltage \(V_{g}\) over a large range \(n\simeq 0.5\times 10^{11}-5\times 10^{11}\,\mathrm{cm}^{-2}\) in the conduction band. Our main observations can be summarized as follows. First, when the Fermi energy is in the conduction band and under the action of a perpendicular magnetic field \(B\), we observe a transition in the magnetic-field-dependent velocity, suggestive of the crossover between non-interacting edge states (Landauer-Buttiker picture, abbreviated as LB) and edge reconstruction under e-e interactions (Chklovskii-Shklovskii-Glazman regime [28; 29], denoted CSG). From the analysis of the velocity, we conclude that the edge states have a typical width \(w_{0}\) of several microns at low fields, probably set by the electro
static disorder, while the edge confinement itself occurs, as expected, on a typical scale \(l\sim 0.1\,\mathrm{\SIUnitSymbolMicro m}\) comparable to the distance \(d\) between the gate and the HgTe layer. Second, we confirm this interpretation by an analysis of the observed low velocities in the QSH gap of the device.
The article is organized in three sections. In the first section, we introduce the device geometry, and the preliminary characterization of the samples via DC magneto-transport measurements. The microwave measurement setup is briefly described in the second section, together with the post-acquisition calibration process, with application to raw data. Finally, we detail several experimental results in the QSH and QH regime, and present a plausible interpretation based on the presence of charge puddles in the band gap of the material.
## I Sample geometry and dc transport properties
Samples -The samples are fabricated from HgTe/Cd\({}_{0.68}\)Hg\({}_{0.32}\)Te QWs grown by molecular beam epitaxy. The thickness \(t\) of the QWs is \(8.5\,\mathrm{nm}\). For such a thickness, the band structure consists of light electrons in the conduction band and heavy holes in the valence band. A topological phase transition for thickness \(t>t_{c}\simeq 6.3\,\mathrm{nm}\) enforces the presence of QSH edge states in the gap of the QWs [30]. A gap of approx. \(26\,\mathrm{meV}\) is predicted by \(\mathbf{k}\cdot\mathbf{p}\) simulations of the band structure (estimated along the \(k_{x}=\pm k_{y}\) direction in which it is minimal). Additionally, the QW is protected by a Cd\({}_{0.68}\)Hg\({}_{0.32}\)Te capping layer of thickness \(50.5\,\mathrm{nm}\). The QWs are first characterized using standard Hall-bar measurements, yielding a mobility of \(1-2\times 10^{5}\,\mathrm{cm}^{2}\,\mathrm{V}^{-1}\,\mathrm{s}^{-1}\) (measured at a density \(n\simeq 2-3\times 10^{11}\,\mathrm{cm}^{-2}\) in the conduction band). Three devices have been investigated and have given similar results. Each device comprises a rectangular mesa defined via a wet-etching technique [5] to preserve the high crystalline quality and the high mobility of the epilayer. Low-resistance ohmic contacts are evaporated at either end of the mesa, for both DC characterization and RF measurements. Two gold finger gates (denoted RFg 1 and RFg 2 in Fig.1) are patterned with e-beam lithography, with a width \(\delta\simeq 800\,\mathrm{nm}\), and are used to locally and capacitively excite the QW underneath with high-frequency signals, while an additional gate for DC tuning of the electron density (DCg) covers the rest of the mesa. All gates are evaporated on top of a \(16\,\mathrm{nm}\)-thick HfO\({}_{2}\) insulating layer, grown by low-temperature atomic layer deposition (ALD) [5]. The main text focuses on one device where the global gate DCg is made of Au (denoted "Au sample"), as presented in Fig.1b. Another sample covered with a thin Pd global gate electrode (denoted "Pd sample") is also briefly discussed in the main text, with more data presented in the Supplementary Material. The results are very similar, though our observations point towards larger puddles in the electrostatic landscape of that device.
Characterization of the transport properties -The two-terminal resistance \(R_{2T}\) is measured at \(B=0\,\mathrm{T}\) as a function of the gate voltage \(V_{g}\) (applied simultaneously to all three gates to preserve a uniform electron density), and presented in Fig. 2a. It informs on the band structure and the position of the gap in the devices. Similarly to previous works, we identify the gap as a clear peak in \(R_{2T}\), separating the conduction and valence band regimes. However, the peak value of \(R_{2T}\) is much larger than the expected quantized value \(R_{K}/2=h/2e^{2}\simeq 12.9\,\mathrm{k}\Omega\), and lies around \(120\,\mathrm{k}\Omega\). The full in-situ characterization of the mobility and electron density from the magnetoresistance of the devices is rendered impractical by the two-terminal geometry imposed by the microwave measurements. Indeed, the two-terminal resistance involves both the Hall and longitudinal resistance, which are easily separated in a four-terminal geometry. Nevertheless, one observes in Fig.2b that \(R_{2T}\) clearly exhibits the quantum Hall plateaus. Assuming that these plateaus reach perfectly quantized values, we write \(R_{2T}=R_{K}/\nu+R_{c}\) at the center of each plateau of filling factor \(\nu\in\mathbb{N}^{*}\), where \(R_{c}\) is a contact resistance. While the contact resistance is estimated at \(R_{c}\simeq 100\,\Omega\) in the conduction band, it appears to be much higher in the valence band (\(20\,\mathrm{k}\Omega\)), presumably due to the formation of p-\(n\) junctions near the \(n\)-doped contacts. The ratio \(R_{K}/R_{2T}\) as a color plot in Fig.2b can then be used to fit the density by adjusting the set of lines \(B_{\nu}=\frac{eR_{K}}{\nu}\left(\frac{c}{e}V_{g}+n_{0}\right)\) which define the exact integer filling factors. We then obtain the density \(n(V_{g})=\frac{c}{e}V_{g}+n_{0}\), where \(c\simeq 1\,\mathrm{m}\mathrm{F}\,\mathrm{m}^{-2}\) is the gate capacitance per unit area (in agreement with the theoretical estimate given the gate layer stack), and \(n_{0}\simeq 4.7\times 10^{15}\,\mathrm{m}^{-2}\) is the density at \(V_{g}=0\).
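For reference, the density and filling-factor calibration described above can be written compactly as follows; the numerical values of \(c\) and \(n_{0}\) are those quoted in the text, and the example is purely illustrative.

```python
E = 1.602e-19    # elementary charge (C)
H = 6.626e-34    # Planck constant (J s)

C_GATE = 1e-3    # gate capacitance per unit area, c ~ 1 mF/m^2 (from the text)
N0 = 4.7e15      # electron density at Vg = 0, in m^-2 (from the text)

def density(vg):
    """Electron density n(Vg) = (c/e) Vg + n0, in m^-2."""
    return C_GATE / E * vg + N0

def filling_factor(vg, b):
    """Bulk filling factor nu = h n / (e B)."""
    return H * density(vg) / (E * b)

def b_nu(vg, nu):
    """Magnetic field B_nu at which the filling factor equals nu."""
    return H * density(vg) / (E * nu)

# Example: filling factor at Vg = 0 and B = 4 T.
print(filling_factor(0.0, 4.0))
```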
Figure 1: **Sample geometry:** a) Sketch of the device: The light blue part is the HgTe mesa while the yellow parts (fingergate electrodes RFg 1, 2 and ohmic contacts A, B) are made of gold. The red dashed region corresponds to the space covered by the top-gate DCg. A DC voltage \(V_{g}\) is applied to RFg 1, 2 and DCg to uniformly tune the electron density \(n\). b) Image taken with an optical microscope of the Au-gated sample, showing the different gates and contacts of the sample as sketched in a).
## II Microwave measurements and calibration
Microwave setup -The phase shift in the device under test (DUT) is measured using a standard heterodyne detection method. An RF sine wave of frequency \(f\) in the GHz regime is generated from an arbitrary waveform generator (AWG) and sent to the RFg of the sample through the microwave lines of the fridge. At the excitation finger gate RFg, the signal amplitude is typically \(1\,\mathrm{mV}\). After being emitted by the finger gates RFg, the signal is collected by the two contacts (A and B). The signal is amplified by cryogenic and room-temperature low-noise amplifiers before being sent to a heterodyne detection setup at room temperature. The signal is mixed with a local oscillator (LO), i.e. a sine wave generated by a signal generator detuned from the AWG output by 50 MHz. This mixing process converts the GHz signal coming from the sample to a 50 MHz signal which is then demodulated by a multi-channel fast acquisition card to obtain the in-phase (\(I\)) and in-quadrature (\(Q\)) parts of the signal \(I\cos(2\pi ft)+Q\sin(2\pi ft)\), for each contact A and B. With this setup, it is possible to simultaneously measure the signals at the two contacts A and B in the range of frequencies \(f\simeq 3-10\,\mathrm{GHz}\) set by the cryogenic isolators placed before the cryogenic amplifiers.
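The digital part of this heterodyne chain amounts to demodulating a 50 MHz intermediate-frequency record into its \(I\) and \(Q\) components. The sketch below illustrates this step on synthetic data; the sampling rate, record length, amplitude and noise level are arbitrary assumptions, not the parameters of the actual acquisition card.

```python
import numpy as np

FS = 1.0e9       # sampling rate of the acquisition card (assumed)
F_IF = 50.0e6    # intermediate frequency after mixing with the LO
N = 100_000      # number of samples per record (assumed)

def demodulate_iq(record, fs=FS, f_if=F_IF):
    """Return (I, Q) of a record containing a tone at the IF frequency,
    written as I cos(2 pi f t) + Q sin(2 pi f t)."""
    t = np.arange(len(record)) / fs
    i = 2.0 * np.mean(record * np.cos(2.0 * np.pi * f_if * t))
    q = 2.0 * np.mean(record * np.sin(2.0 * np.pi * f_if * t))
    return i, q

# Synthetic IF record with amplitude 1e-3 and a 30 degree phase shift.
t = np.arange(N) / FS
phase = np.deg2rad(30.0)
record = 1e-3 * np.cos(2.0 * np.pi * F_IF * t - phase) + 1e-4 * np.random.randn(N)
I, Q = demodulate_iq(record)
print(I, Q, np.hypot(I, Q), np.degrees(np.arctan2(Q, I)))
```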
Calibration of the raw data -The signal measured in the channels of the acquisition card needs further calibration and referencing: 1) The measured magnitude is offset by stray couplings on the chip and sample holder, which do not contain any physical information on the topological device. This parasitic contribution is measured in a situation where the DUT is known to be perfectly insulating, and then subtracted. 2) The phase is also affected by the propagation in the cables, and cannot be directly used for computing the plasmon velocities. A phase reference needs to be defined from a situation where currents propagate at a very high velocity in the DUT (much larger than the edge plasmon velocities). We describe in this paragraph how we carry out these two steps, and how we control the validity of the underlying assumptions.
We first concentrate on the calibration of the amplitude and the subtraction of stray couplings. Such couplings are ubiquitous in microwave measurements, and can be as strong as or even stronger than the physical signal through the DUT, in particular in high-impedance devices such as the ones considered here. Given the geometry of the device, reversing the magnetic field direction or the polarity of the carriers (from \(n\)- to \(p\)-regime) reverses the direction of the chiral edge states and thus nullifies the edge state signal measured in one of the two contacts. As an example, we take the situation of Fig.3. There, the data measured on contact \(A\) at \(f=4\,\mathrm{GHz}\) are shown in the Nyquist plane (\(I\), \(Q\)) of the in-phase and in-quadrature parts of the signal. For \(B<0\), the current emitted by the finger gate RFg flows to contact A, and the measured signal then depends on the filling factor \(\nu\). As a result, in Fig.3a, the data points for \(B<0\) span a wide zone (colored data points following the color bar). In contrast, for \(B>0\), the data points of all filling factors are concentrated in a small area (see Fig.3b), showing that no current flows from the RFg to contact \(A\), and the data
Figure 2: **DC transport properties of the sample:** a) Two-terminal resistance \(R_{2T}\) as a function of the gate voltage \(V_{g}\), exhibiting a peak signaling the gap (indicated by the dashed lines), and the conduction and valence bands on either sides of this peak. b) 2D color map of the ratio \(R_{K}/R_{2T}\) as a function of gate voltage \(V_{g}\) and magnetic field \(B\). The different filling factors \(\nu\) are labelled, and the white dotted lines are the lines \(B_{\nu}\) used to fit the carrier density \(n\) as a function of gate voltage \(V_{g}\) (see main text). The contours between the different QH plateaus are highlighted as dashed black lines. The color scale is intentionally saturated at a maximum value \(R_{K}/R_{2T}=10\), in order to distinguish more clearly the first QH plateaus.
points then indicate the coordinates of the stray coupling in the \((I,Q)\) plane. Instead of reversing the field direction \(B\rightarrow-B\), it is faster and equally accurate to reverse the polarity of the carriers and drive the sample into the \(\nu=-1\) state (pink-colored data points in Fig.3), allowing us to subtract a reference vector \(\mathbf{R}_{0}=(I_{0},Q_{0})\) indicated by the red dot in Fig.3a. We can thus measure and subtract the stray coupling with an estimated accuracy of a few %.
We now explain how we reference the phase. Any signal passing through the DUT acquires a phase \(\phi_{0}+\phi\) with \(\phi\) inversely proportional to \(v\). Thus, we define the phase reference \(\phi_{0}\) in a situation where plasmons are considered infinitely fast (\(\phi\to 0\)). To this end, we consider that 2D plasmons of the conduction band (in the absence of magnetic fields) propagate at a very large velocity (often reported \(v_{\mathrm{max}}\gtrsim 2\times 10^{7}\,\mathrm{m}\,\mathrm{s}^{-1}\) in similar semi-conducting systems). In Fig.3, this corresponds to a rotation of angle \(\phi_{0}\) of the phase. Though the phase reference is rather roughly defined, it is precise enough for the study of all velocities \(v\ll v_{\mathrm{max}}\). The velocities discussed later in this article validate a posteriori this approach, in line also with previous measurements in the QH regime of GaAs 2DEG [22].
These two calibration steps have been successfully conducted in numerous data sets. In such cases, the chirality of the QH edge channels manifests itself as a strong asymmetry with respect to either the magnetic field direction, the choice of the contact (A or B) or the choice of the finger gate (RFg1 or RFg2) (see Supplementary Material for additional data and chirality maps). The phase also winds in a unique direction (clockwise). In Fig.4, we show the resulting calibrated amplitude \(M\) of the microwave signal, its phase \(\phi\), and the velocity \(v\) calculated from the phase as \(v=\frac{2\pi fL}{\phi}\), where \(L\) is the propagation length between finger gate and contact. The amplitude is close to zero on one half of the plane (here for \(B>0\)), while it is strong on the other half (here for \(B<0\)), and gradually decays with increasing field. The different filling factors are clearly visible and agree well with those determined from the DC magnetoresistance. In the regions where the amplitude \(M\) is sufficiently large, the phase can be unwrapped, allowing for the computation of the velocity in the same area.
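The calibration chain, from raw \((I,Q)\) data to the velocity, can be summarized by the following sketch; the stray-coupling vector, phase reference and example numbers are placeholders for the measured quantities.

```python
import numpy as np

def calibrate(i_raw, q_raw, r0, phi0, f, length):
    """Subtract the stray-coupling vector R0 = (I0, Q0), remove the phase
    reference phi0, then compute the amplitude M, phase phi and plasmon
    velocity v = 2 pi f L / phi (f in Hz, length in m; arrays allowed)."""
    z = (np.asarray(i_raw, float) - r0[0]) + 1j * (np.asarray(q_raw, float) - r0[1])
    z *= np.exp(-1j * phi0)                  # rotate to the phase reference
    m = np.abs(z)
    phi = np.unwrap(np.angle(z))             # unwrap along the sweep axis
    with np.errstate(divide="ignore"):
        v = 2.0 * np.pi * f * length / phi   # phase delay -> phase velocity
    return m, phi, v

# Example with made-up numbers: f = 4 GHz, L = 5 um propagation length.
m, phi, v = calibrate([2e-4, 1e-4], [1e-4, 2e-4], (5e-5, 5e-5), 0.1, 4e9, 5e-6)
```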
However, some samples and data sets have resisted such an analysis, and exhibit asymmetric but not fully chiral behavior, or do not allow us to define an adequate phase reference. In agreement with our findings described later, we attribute these phenomena to strong disorder in some samples, allowing for propagation of the signal opposite to the expected propagation direction. We present problematic data sets in the Supplementary Material.
After calibrating the electron density \(n\), and the amplitude \(M\) and phase \(\phi\) of the microwave data, we now explore the variations of the velocity \(v\) in the Hall regime as a function of \(n\) and \(B\), but also of the filling factor \(\nu=\frac{hn}{eB}\).
Figure 3: **Calibration of the raw data:** In the Nyquist plane (\(I\),\(Q\)), the same set of data points is represented in both panels, as light dots. a) Focusing on \(B<0\), data points corresponding to filling factor \(\nu=-1\) (\(p\)-type transport) are colored in pink, and those for \(\nu\in[0,10]\) (\(n\)-type transport) are colored according to the filling factor \(\nu\) (as measured from the DC two-terminal resistance \(R_{2T}\)). Data points for \(\nu>0\) occupy a large fraction of the total data set, indicating that the ac current flows from the finger gate RFg to the contact (A in this case) with a magnitude that depends on \(\nu\). b) Focusing now on \(B>0\), we use the same color coding for filling factors. The data points then occupy a very small fraction of the phase space. This indicates that the measured signal is independent of \(\nu\) and is dominated by the stray coupling. Additionally, one can reverse the polarity of the carriers and drive the sample into the \(\nu=-1\) state (pink-colored data points). The parasitic stray coupling \(\mathbf{R}_{0}\) is indicated by a red dot. The red cross indicates the signal taken for \(V_{g}=0\,\mathrm{V}\) and \(B=0\), which is used to determine the phase reference \(\phi_{0}\).
## III Results - plasmon velocities
In this section, we analyze the measured velocity and discuss several noteworthy observations. When a perpendicular magnetic field is applied, a clear transition is observed between the low- and high-field regimes, which we attribute to the crossover from the LB non-interacting regime to the CSG regime [28; 29] at high fields, where e-e interactions are prominent. A careful study of both regimes then yields information on the role of puddles and edge confinement in the device, which is relevant for both the classical and quantum Hall regimes, but also indicative of the physics of QSH edge states. Though the data are not as clear, we also confirm these observations in the gap of the quantum well at zero magnetic field, i.e. when QSH edge states dominate transport.
Plasmon confinement in the quantum Hall effect -We first turn to the study of plasmon velocities in the quantum Hall regime, i.e. when a perpendicular magnetic field is applied to the sample. In gated samples, the velocity of the edge magneto-plasmons can be simply written as
\[v=\frac{ned}{\epsilon Bw}=\frac{\sigma_{xy}}{C_{\text{QH}}} \tag{1}\]
where \(w\) is the transverse width of the edge plasmon, \(\sigma_{xy}=ne/B\) the Hall conductance, and \(C_{\text{QH}}=\epsilon w/d\) the capacitance between gate and plasmon per unit length. This equation can be obtained from a microscopic derivation [31; 32]. It is also a constitutive relation of a transmission line model for edge states [33], connecting the line impedance \(1/\sigma_{xy}\) and the velocity \(v\) with the capacitance \(C_{\text{QH}}\).
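In practice, Eq. (1) is inverted to extract the transverse width \(w\) from the measured velocity, as in the sketch below; the effective dielectric constant of the gate stack is an assumed value, since the paper does not quote one explicitly.

```python
E = 1.602e-19      # elementary charge (C)
EPS0 = 8.854e-12   # vacuum permittivity (F/m)

def plasmon_width(v, n, b, d=50e-9, eps_r=15.0):
    """Invert Eq. (1), w = n e d / (eps B v), to get the transverse plasmon
    width from the measured velocity. d ~ 50 nm is the gate distance from
    the text; eps_r is an assumed effective dielectric constant."""
    return n * E * d / (eps_r * EPS0 * b * v)

def gate_capacitance_per_length(w, d=50e-9, eps_r=15.0):
    """C_QH = eps w / d, gate-plasmon capacitance per unit length (F/m)."""
    return eps_r * EPS0 * w / d

# Example: n = 3e15 m^-2, B = 1 T, v = 1e5 m/s gives w of order a micron.
print(plasmon_width(1e5, 3e15, 1.0))
```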
Through Eq.(1), the velocity \(v\) provides insights into the confinement of plasmons on a width \(w\) near the edges of the sample. In this context, the role of e-e interaction and screening in the progressive formation of edge states has been well understood since pioneering works in the 1990s [28; 34], and has recently been numerically revisited [29].
In all measured devices, we observed two different behaviors depending on the strength of the magnetic field. At low field, the velocity is proportional to \(n\) and to \(1/B\) (as illustrated in Fig.5a for three different electron densities), in agreement with the Landauer-Buttiker model. In this model, a large number of edge states are uniquely defined by the edge confinement profile, while screening and reconstruction from e-e repulsion are irrelevant. This allows us to define the \(n\)-independent width \(w_{0}\) of the plasmon in this regime, and we find \(w_{0}\simeq 1.2\,\mathrm{\SIUnitSymbolMicro m}\) in the Au-gated sample (\(w_{0}\simeq 4.6\,\mathrm{\SIUnitSymbolMicro m}\) for the Pd-gated sample).
This transverse width is much greater than the distance to the gate \(d\simeq 50\,\mathrm{nm}\), which controls the typical confinement length of the edge states, or than the magnetic length \(l_{B}=\sqrt{\frac{\hbar}{eB}}\simeq 80\,\mathrm{nm}\) at \(B=100\,\mathrm{mT}\). It indicates that the edge states are broadened, for example by shallow potential fluctuations and puddles. Such very large values of \(w_{0}\) have also been recently reported in Ref.[27], and similarly attributed to charge puddles. They result in an increased capacitance \(C_{\text{QH}}\) accounting for the gate-puddle coupling, an increased transverse width \(w_{0}\) and, equivalently, a reduced velocity \(v\), irrespective of the edge confinement depletion length \(l\simeq d\).
At higher fields (\(B>B_{c}\simeq$2\,\mathrm{T}$\) in Fig.5a), the velocity \(v\) strongly departs from this simple law, and shows strong oscillations. This crossover may be attributed to a reduced number of edge states, forming compressible and incompressible stripes under the influence of strong e-e interactions (CSG regime). We find that the crossover field \(B_{c}\) between both regimes is approximately compatible with the heuristic law [29]\(B_{c}\propto n^{2/3}\) (see Supplementary Material). Such oscillations have already been observed in GaAs quantum wells [22] and originate from the transverse compression and decompression of plasmons when a new incompressible stripe nucleates in the bulk of the material at integer filling factors, and is progressively pushed towards the edges of the sample as \(\nu\) increases (see Fig.5c). It is worth noting that oscillations of the velocity have also been observed in ungated graphene [26] and
Figure 4: **Calibrated amplitude, phase and velocity:** Colormaps of the amplitude \(M\) (a), phase \(\phi\) (b) and velocity \(v\) (c) as function of the gate voltage \(V_{g}\) applied on DC\({}_{g}\) and the magnetic field \(B\), obtained from the raw data presented in Fig. 3. The white shadings indicate regions where the signal amplitude \(M\) is too small, so that \(\phi\) and \(v\) are not reliably computed.
InAs quantum wells [35], with opposite behavior (minimal widths for integer filling factors), and are then ascribed to another mechanism, namely enhanced dissipation due to a conducting bulk.
Therefore, we continue the analysis by plotting the width \(w\) obtained from Eq. 1 as a function of the filling factor \(\nu\) (see Fig.5b). At high filling factors \(\nu\gg 15\) (i.e. low magnetic fields), \(w\) slowly converges towards its saturation value \(w_{0}\). For low filling factors, we observe that \(w\) is maximal (i.e. the velocity \(v\) reaches its minima) at integer filling factors. The oscillations are most clearly visible at high densities \(n>3\times 10^{11}\,\mathrm{cm}^{-2}\), when screening is strong and thus when the electrostatic disorder is less influential. In contrast, the oscillations are washed out at low densities. The oscillations of \(v\) and \(w\) are also visible, though much fainter, in the Pd sample (see Supplementary Material), as can be expected in a more disordered sample.
As shown in Ref.[22], the oscillations of \(w\) allow for reconstructing the edge density profile. We define the local density as \(x\mapsto n_{e}(x)=nf(x)\), ranging from \(n_{e}(x=0)=0\) at the quantum well edge to \(n_{e}(x)=n\) deep in the bulk of the material, as depicted in Fig.5c. The reconstruction is based on the following principles. The plasmon width \(w\) is essentially defined by the position of the innermost edge state (compressible stripe), located at a position \(x_{\mathrm{QH}}\) such that the local filling factor \(\nu_{e}(x_{\mathrm{QH}})=\frac{hn_{e}(x_{\mathrm{QH}})}{eB}=\lfloor\nu\rfloor\), i.e. the largest integer smaller than or equal to the bulk filling factor \(\nu\). As \(\nu\) varies, \(w\) spans a large range of values from \(\sim 0\) (strongly confined plasmons) to \(w\simeq w_{0}\) (loosely confined plasmons), reflecting the variations of \(x_{\mathrm{QH}}\), thus yielding an implicit equation connecting \(w\), \(B\) and \(n_{e}\). Accounting for a broadening \(w_{p}\) of the transverse width due to puddles, we find that the edge profile function \(x\mapsto f(x)\) can be reconstructed using the implicit equation (see Supplementary Material)
\[f(w-w_{p}/2)=1-\frac{1}{2\nu} \tag{2}\]
Figure 5: **Velocity and plasmon transverse width in the Hall regime** a) Linecuts of the velocity \(v\) as function of magnetic field \(B\) for three values of the density \(n\). The grey dashed line shows fits to the law \(v\propto B^{-1}\), valid at low fields. In the high field region, \(v\) exhibits strong oscillations, which become more pronounced as the density \(n\) increases. b) Transverse width \(w\) as a function of filling factor \(\nu\). For large \(\nu\), i.e. low fields, \(w\) is approximately constant and independent of \(n\). For low \(\nu\), the width \(w\) oscillates, showing minimum for integer filling factors \(\nu\in\mathbb{Z}\). c) Sketch of the edge density profile \(n_{e}(x)\), as a function of the distance \(x\) from the edge: \(n_{e}(x)\) saturates at \(n_{e}(x)=n\) in the bulk of the material, and decreases to \(n(x)=0\) at the edge. The blue shades indicate the compressible stripes while the white stripes are the incompressible ones. The bare plasmon width is given by the position of the innermost Landau level \(x_{\mathrm{QH}}\), and is further increased by \(w_{p}/2\) due to puddles. d) Normalized reconstructed edge profile \(n_{e}(x)\) obtained by plotting \(1-1/2\nu\) as a function of \(w\) for all data triplets \((n,B,w)\). The obtained profiles are shown as colored dots for various values of the bulk density \(n\). The dashed lines represents the heuristic edge profile \(f(x)\) for two extreme admissible values of the depletion depth \(l=60\) and \(150\,\mathrm{nm}\).
The results are presented in Fig.5d. For all triplets \((n,B,w)\), we plot \(1-1/2\nu\) as a function of the measured width \(w\). The data points describe the reconstructed edge profile, which is found to be mostly independent of the bulk density \(n\). We then fit the reconstructed profile with the heuristic function \(f(x)=\sqrt{\frac{x}{x+l}}\) used in Ref.[22] to obtain an estimate of the edge depletion length \(l\). We find good agreement with \(l\simeq 60-150\,\)nm, and a puddle broadening \(w_{p}\simeq 1.2\,\)um (almost identical for both Au and Pd samples, see Supplementary Material for more data sets). In particular, the depletion occurs on a scale \(l\) of the order of 1 to 3 times the distance \(d\) between the quantum well and the gate, as anticipated from the electrostatic potential created by the gate. Besides, the characteristic lengths \(l\) and \(w_{0}\) differ by more than one order of magnitude, while they should both be of the order of \(d\) for a clean edge potential.
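A sketch of this reconstruction and of the fit to the heuristic profile is given below; the synthetic data are generated from an assumed depletion length only to illustrate the procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def edge_profile(x, l):
    """Heuristic edge profile f(x) = sqrt(x / (x + l)) used in the text."""
    return np.sqrt(x / (x + l))

def reconstruct_profile(nu, w, w_p=1.2e-6):
    """Eq. (2): each triplet (n, B, w) with bulk filling factor nu gives one
    point of the normalized edge profile, f evaluated at x = w - w_p/2.
    w_p ~ 1.2 um is the puddle broadening quoted in the text."""
    x = np.asarray(w) - w_p / 2.0
    f = 1.0 - 1.0 / (2.0 * np.asarray(nu))
    return x, f

# Synthetic example: generate fake (nu, w) pairs from an assumed l and refit.
l_true = 100e-9
nu = np.linspace(2, 20, 50)
f = 1.0 - 1.0 / (2.0 * nu)
x = l_true * f**2 / (1.0 - f**2)     # invert f(x) for the synthetic data
w = x + 0.6e-6                       # add w_p/2 back to mimic measured widths
x_rec, f_rec = reconstruct_profile(nu, w)
l_fit, _ = curve_fit(edge_profile, x_rec, f_rec, p0=[50e-9])
print(f"fitted depletion length l = {l_fit[0]*1e9:.1f} nm")
```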
Plasmons at zero magnetic field -We now analyze the measurements at \(B=0\) when the gate voltage \(V_{g}\) is tuned to adjust the Fermi level in the gap of the material. Given the insulating bulk, and the much faster response times of edge states compared to bulk states [13], we argue and assume in the following that the phase response is dominated by edge transport, and that the velocity is that of the QSH edge channels. The following analysis supports this assumption.
The amplitude of the signal is rather weak in this regime, and consequently the phase measurements are more scattered. Nonetheless, we reliably observe (in all samples and configurations) small velocities (see Fig.6), on the order of \(v=v_{\rm QSH}\simeq 10\times 10^{4}\,\)m s\({}^{-1}\) for the Au sample (\(2\times 10^{4}\,\)m s\({}^{-1}\) for the Pd sample). We point out that these values are significantly smaller than those predicted by the band structure [36; 37], which are only slightly smaller than the Fermi velocity in the conduction band \(v_{\rm F}^{\rm CB}\simeq 1\times 10^{6}\,\)m s\({}^{-1}\).
As the gate voltage \(V_{g}\) is driven towards the valence band (\(V_{g}\lesssim-0.75\,\)V), the velocity is found to increase again (\(v\gtrsim 2\times 10^{5}\,\)m s\({}^{-1}\)). This is in line with the expected Fermi velocity in the valence band \(v_{\rm F}^{\rm VB}\simeq 2\times 10^{5}\,\)m s\({}^{-1}\) (though this estimation is made difficult by the camelback structure of the valence band, and its strong variations with parameters such as the quantum well thickness \(t\)).
Discussion -QH and QSH edge states have different origins, namely the formation of the Landau level spectrum for QH edge states vs the topological band inversion of HgTe for the QSH ones. However, their exact properties could both be affected by electrostatic disorder, and therefore may be correlated with one another. Though the following considerations are more speculative, we put forward examples of such relations.
The velocities in these two regimes cannot be directly compared (\(v\propto B^{-1}\) in the QH case). However, focusing first on the Au-gated sample, we observe that the two capacitances are very close to each other, with [38] \(C_{\rm QH}=\epsilon w_{0}/d\simeq 1.4\,\)nF m\({}^{-1}\) and \(C_{\rm QSH}=4e^{2}/hv_{\rm QSH}\simeq 1.5\,\)nF m\({}^{-1}\). This value is also fully compatible with the density of states previously measured in the QSH edge state [13] (with a gold-gated device, and accounting for a factor 3 in the distance \(d\) between the two devices). Moreover, the ratios between the two samples, \(\frac{w_{0}({\rm Pd})}{w_{0}({\rm Au})}\sim\frac{v_{\rm QSH}({\rm Au})}{v_{\rm QSH}({\rm Pd})}\sim 3.8\), further corroborate that large values of \(w_{0}\) and slow velocities in the QSH regime both originate from the electrostatic disorder, and yield \(C_{\rm QH}\simeq C_{\rm QSH}\simeq 5.4\,\)nF m\({}^{-1}\) in the Pd sample. We note that the characteristic puddle broadening length \(w_{p}\) is observed to be identical in both samples, for an unclear reason. We however stress that the edge profile reconstruction relies on various crude approximations, and probes a high magnetic field regime, while \(w_{0}\) and \(v_{\rm QSH}\) are obtained at low (or zero) magnetic fields.
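A quick numerical check of these figures can be done with the quantum-capacitance formula quoted above; the velocities used are the rounded values given in the text, so the result is indicative only.

```python
e, h = 1.602e-19, 6.626e-34          # elementary charge, Planck constant (SI units)

# Au-gated sample: quantum capacitance of the QSH edge, C_QSH = 4 e^2 / (h v_QSH)
v_qsh_au = 1.0e5                     # m/s, value quoted for the Au sample
C_qsh_au = 4 * e**2 / (h * v_qsh_au)
print(f"C_QSH(Au) ~ {C_qsh_au * 1e9:.2f} nF/m")   # ~1.5 nF/m, close to C_QH(Au) ~ 1.4 nF/m

# Pd-gated sample: scaling C_QH(Au) by the common ratio ~3.8 between the two samples
print(f"C(Pd) ~ {1.4 * 3.8:.1f} nF/m")            # ~5.3 nF/m, consistent with the quoted 5.4 nF/m
```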
These simple comparisons should not be overinterpreted, especially since they connect different regimes (QH and QSH). Nonetheless, they suggest that all measured quantities reflect shallow fluctuations of the electrostatic potential yielding puddles to which the different types of edge states couple. They could play an important role in understanding the causes of scattering in the edge states.
## IV Summary and outlook
Puddles play a minor part in archetypical studies of the quantum Hall effect in GaAs hetero-structures, thanks to larger gaps, optimized electrostatic disorder, and the natural protection of QH edge states against scattering [39]. However, their role has recently been stressed in the quantum spin Hall effect [13; 27; 40; 41] or the quantum anomalous Hall effect [42], where the characteristic gap scales are much smaller.
Figure 6: **Velocity measured in the QSH regime, at \(B\simeq 0\)**: Velocity \(v\) as a function of the gate voltage \(V_{g}\) applied to DCg, for three values of the magnetic field \(B\) close to \(B=0\). The gap region estimated from the resistance \(R_{2T}\) is indicated by vertical dashed lines.
In this context, our analysis of plasmon velocities in the classical and quantum Hall regime (\(B\neq 0\)) and in the QSH gap (\(B=0\)) examines the interplay of puddles with high-frequency edge channel transport in HgTe quantum wells. It consistently points towards the picture of edge states coupled to puddles that form due to electrostatic disorder. Though the steep edge confinement takes place over a distance \(l\sim d\), the quantum Hall edge states spread at low fields over a width \(w_{0}\) of order 1-4\(\,\mathrm{\SIUnitSymbolMicro m}\). In addition, we find that the velocity \(v_{\mathrm{QSH}}\) in the QSH edge state regime is strongly reduced compared to the anticipated Fermi velocity of the edge channels, in agreement with recent measurements of the edge density of states in similar quantum wells.
This body of work suggests that, ahead of e-e interactions [43] or other mechanisms, puddles play a prominent role in the physics of topological edge states, and constitute a serious hurdle to investigating the topological physics of pristine edge states. We hope that progress in the growth and lithography of existing materials, together with the development of new platforms with enhanced gaps [44] and lower electrostatic disorder, will help overcome this obstacle.
###### Acknowledgements.
The authors warmly thank S. Shamim and N. Kumada for insightful discussions, and W. Beugeling for technical support with \(\mathbf{k}\cdot\mathbf{p}\) simulations. This work has been supported by the ERC, under contract ERC-2017-StG "CASTLES" and ERC-2017-Adv "4TOPS", the DFG (SFB 1170 and Leibniz Program), Germany's Excellence Strategy (Cluster of Excellence Matter and Light for Quantum Computing ML4Q, EXC 2004/1 - 390534769, and the Wurzburg-Dresden Cluster of Excellence on Complexity and Topology in Quantum Matter, EXC 2147, 39085490) and the Bavarian Ministry of Education (ENB Graduate school on 'Topological Insulators', and the Institute for Topological Insulators), and finally by the JST, PRESTO Grant Number JPMJPR20L2, Japan.
## Data availability
The data sets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.
## Author contributions
A.G. and E.F. performed the measurements and the data analysis, under the supervision of H.K. and E.B. A.G. fabricated the samples, with help from H.K., and based on MBE layers grown by L.L. H.K. and E.B. supervised the project. All authors participated in the analysis of the results and in the writing of the manuscript.
**- Supplementary Online Material -**
## V Microwave measurement setup
Our sample is placed at the mixing chamber stage of a cryogenic dilution fridge, at a base temperature of about \(20\,\mathrm{mK}\). The experimental setup of the RF measurements is depicted in Fig.7 and described below. A sine wave of frequency \(f\) in the GHz range is generated from a Keysight M8196A arbitrary waveform generator (AWG) and sent to the finger gate RFg of the sample through the RF lines of the fridge. We set the signal amplitude to \(1\,\mathrm{V}\) (maximum amplitude allowed by the AWG), and the signal is then attenuated by \(\sim-52\,\mathrm{dB}\) thanks to the attenuators placed along the fridge RF lines. At the sample level, the signal has an amplitude lower than \(2\,\mathrm{mV}\) given the finite attenuation of the RF lines (in addition to the fixed attenuators).
After being emitted by RFg, the signal is collected by the two ohmic contacts (A and B), and then preamplified by a LNF-LNC0.3_14B (Low Noise Factory) cryogenic amplifier with a \(0.3-14\,\mathrm{GHz}\) bandwidth (\(+37\,\mathrm{dB}\)) before being sent to a heterodyne detection system at room temperature. At the output of the fridge RF lines, the signal coming from each contact is first amplified by a Mini-Circuit ZVA-183WX-S+ amplifier (\(+26\,\mathrm{dB}\) gain from \(700\,\mathrm{MHz}\) to \(18\,\mathrm{GHz}\)) before being mixed with a Local Oscillator (LO) using a Marki Microwave double-balanced mixer ML1-0220. This LO is a sine wave generated by an Agilent Technologies E8247C PSG CW signal generator (PSG), detuned from the AWG output by \(50\,\mathrm{MHz}\). This mixing process converts the GHz signal coming from the sample to a \(50\,\mathrm{MHz}\) signal, making digital acquisition possible. This is done, after filtering and amplifying (\(+32\,\mathrm{dB}\)) as shown in Fig.7, by an ADQ-14 acquisition card from Teledyne SP Devices. The acquisition card demodulates the input signal with a \(50\,\mathrm{MHz}\) sine wave and then captures the "in-phase" (\(I\)) and "in-quadrature" (\(Q\)) parts of the signal.
In order to have phase-resolved measurements, a trigger signal synchronized with the AWG signal must be sent to the acquisition card. This trigger is a \(50\,\mathrm{MHz}\) signal generated by mixing a sine wave from the AWG, with the same frequency as the signal sent to the sample, with the PSG sine wave. All the instrument clocks (AWG, PSG and acquisition card) are then synchronized, using the PSG \(10\,\mathrm{MHz}\) clock signal as the reference for the AWG and the acquisition card.
Thanks to this heterodyne setup, we are able to measure in parallel the signals coming from contact A and contact B in a frequency range spanning from \(3.2\,\mathrm{GHz}\) to \(12\,\mathrm{GHz}\). The minimum of this range (\(3.2\,\mathrm{GHz}\)) corresponds to the minimal working frequency of our detection system; in particular, it is the lowest frequency of the bandwidth of the circulators (labelled "Circ." in Fig.7). The maximum of the frequency range (\(12\,\mathrm{GHz}\)) corresponds to the maximal frequency of the bandwidth of the mixers. In addition, no clear signal from the sample is measured at this high frequency, suggesting that it corresponds to a regime largely dominated by parasitic coupling. Overall, we have obtained the clearest data sets at \(4\,\mathrm{GHz}\); they are shown in the main text. More data sets at different frequencies are discussed in section VII.2.
## VI Additional data sets
### Geometry of the Pd sample
We first briefly discuss the geometry of the Pd sample, the motivation for using a thin Pd gate, and the differences between the two geometries. Both samples are very similar, and comprise two RF gates, RFg 1 and RFg 2, on either side of the sample, which are capacitively coupled to the edge of the HgTe QW. Also, in both cases, the Fermi energy is tuned from the valence band to the conduction band thanks to a voltage bias applied to a DC gate (DCg). As seen in Fig.8c and 8d, the DCg covers the whole HgTe QW mesa between the two ohmic contacts while being separated from the RFg part by a \(\sim 0.4\,\mathrm{\SIUnitSymbolMicro m}\)-wide gap, in order to minimize the cross-talk between the two gates. This cross-talk has been a particular concern since it contributes solely to the stray parasitic coupling between the RFg and the contact. This stray capacitance can be seen as the series combination of the capacitive coupling \(\delta C_{1}\) between the RFg and the DCg, and the capacitive coupling \(\delta C_{2}\) between the DCg and the contact. In order to minimize this parasitic signal, we have tested the two different geometries for the DCg sketched in Fig.8.
* The first idea is to take advantage of a resistive gate. A resistive gate would add a resistance \(\delta R\) between the two stray capacitances \(\delta C_{1}\) and \(\delta C_{2}\) and thereby dissipate the RF signal passing through the DCg. In order to achieve such a resistive DCg, it is fabricated from a thin layer of Pd. Indeed, this material is known to have a relatively high resistivity for a metal when the layer is thin enough. We have measured that our \(\sim 2.5\,\mathrm{nm}\)-thick Pd layer has a sheet resistance of around \(\sim 300\,\Omega/\square\), while a \(200\,\mathrm{nm}\)-thick Au gate has a sheet resistance of \(\sim 7\,\Omega/\square\). We then estimate a reduction of the parasitic signal of around \(60\%\) compared to a pure Au gate.
* The second idea is to ground the DCg in the GHz regime, by enhancing the capacitive coupling \(\delta C_{s}\) between the DCg and the ground plane of the CPW. This is achieved by extending the DCg to overlap the \(4\) RFg CPW grounds over an area of \(5\times 5\,\mathrm{\SIUnitSymbolMicro m}^{2}\), as shown in Fig.8d. In this case we have estimated that \(\delta C_{s}\sim 100\,\mathrm{fF}\).
Both samples in fact showed very similar results, and there is in particular no sign that the Pd gate shows reduced screening. This is likely due to the insufficient sheet resistance of the Pd layer, which remains far from that of the ZnO layers (\(\sim 1\times 10^{5}\,\Omega/\square\)) used in Ref.[35].
### Crossover between low- and high-field regimes
According to Refs. [31; 32], the plasmon velocity \(v\) follows a \(B^{-1}\) power law, \(v=\frac{ned}{\epsilon wB}\), if one assumes the other parameters \(n\), \(w\) to be constant. To confirm this, we plot \(v\) as a function of \(B\) on a log-log scale for different carrier densities \(n\) in the conduction band, as presented in Fig.9. These plots clearly show two power-law regimes (with negative exponents) in \(B\), as represented by the blue and red dashed lines:
* at low magnetic field (\(B\ll B_{c}\)), the velocity follows the expected inverse law in \(B\): \(v=\frac{\alpha_{1}}{B}\), where \(\alpha_{1}\) is a fitting parameter.
* at high magnetic field (\(B\gg B_{c}\)), we observe in some parameter range a good agreement with negative power laws of different exponents (\(-1/3\) for the Pd DCg and \(-2/3\) for the Au DCg), though we do not have a model describing such a behavior. Note also that for the Au DCg sample, we have only considered the minima of the oscillations of \(v\).
For each sample, we plot the fitting parameter \(\alpha_{1}\) as a function of the density \(n\). This is presented in Fig.10. As depicted in this figure by a black dashed line, \(\alpha_{1}\) is linear in the density \(n\) over a wide range of densities, which allows us to unambiguously identify a constant (independent of \(n\)) transverse width \(w_{0}\) in this regime (low magnetic field \(B\ll B_{c}\)). From this linear fit, one can extract the constant width \(w_{0}\) and find \(w_{0}\simeq 4.6\,\mathrm{\SIUnitSymbolMicro m}\) for the Pd DCg and \(w_{0}\simeq 1.2\,\mathrm{\SIUnitSymbolMicro m}\) for the Au DCg.
One possible scenario to explain the transition between a low- and a high-field regime is the transition between a Landauer-Büttiker (LB) regime[45; 46] and a Chklovskii-Shklovskii-Glazman (CSG) regime[28] of the QH edge channels, more recently revisited numerically by Armagnat _et al._[29]. Though this is not central to our argumentation, we explore this possibility in more detail in this section. At low magnetic field the edge channels are narrow, only spreading over a characteristic width given by the magnetic length \(l_{B}=\sqrt{\hbar/eB}\). In this regime, the electronic interactions are neglected and the electrostatics is dominated by the transverse confining potential \(U(x)\) of the QW (the coordinate \(x\) describing the position transverse to the propagation direction). This LB description is no longer valid at high magnetic field. Instead, a more appropriate description is given by the CSG picture, in which electronic interactions are now considered and play a significant role in the electrostatics of the system[28]. The consequence is that the edge states acquire a finite width, constituting compressible stripes, separated by insulating regions called incompressible stripes. The transition between these two regimes happens when the magnetic length \(l_{B}\) becomes comparable to the typical width of one edge channel, \(a\sim d/\nu\). According to this statement, the crossover magnetic field \(B_{c}\) should verify \(B_{c}\propto n^{\frac{2}{3}}\). In Fig.11, we have plotted the critical field \(B_{c}\) as a function of the bulk electron density \(n\), and compare it to this power law. Though the agreement is not very good, the increase of \(B_{c}\) with the carrier density is captured, suggesting a similar mechanism for the transition in our sample.
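For completeness, the scaling quoted above follows directly from equating the magnetic length with the typical channel width \(a\sim d/\nu=deB_{c}/(hn)\):
\[l_{B}\simeq a\;\Longrightarrow\;\sqrt{\frac{\hbar}{eB_{c}}}\simeq\frac{deB_{c}}{hn}\;\Longrightarrow\;B_{c}^{3}\propto\frac{n^{2}}{d^{2}}\;\Longrightarrow\;B_{c}\propto n^{\frac{2}{3}}.\]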
### Velocity oscillations in the Pd and Au samples
We show in this section full colormaps of the RF signal amplitude \(M\) and velocity \(v\) measured on both samples with Au and Pd gates. In Fig.12, one observes that as expected both \(M\) and \(v\) increase when \(\nu\) increases. In the high density region \(n>2\times 10^{11}\,\mathrm{cm}^{-2}\), oscillations in the velocity \(v\) become visible, especially for the Au-gated sample. The features align very well with the integer values of \(\nu\), supporting the claims of the main text.
### Edge potential reconstruction in the Pd and Au samples
According to the theory developed for screened plasmons[31; 32; 22], the plasmon width \(w\) is fixed by the innermost incompressible strip, localized at the transverse position \(x_{\mathrm{QH}}\). As stated in the main text, we assume that the edge state is further broadened by puddles, and write \(w=w_{p}/2+x_{\mathrm{QH}}\). The plasmon width oscillates between a maximum \(w\simeq w_{0}\) for bulk filling factor just exceeding an integer value (i.e. with an edge state nucleating in the bulk) \(\nu\simeq\lfloor\nu\rfloor^{+}\), and a minimum \(w\simeq 0\) when the bulk filling factor is slightly below the next integer number \(\nu\simeq\lceil\nu\rceil^{-}\).
The local densities at the position \(x_{\mathrm{QH}}\) of the innermost edge channel at zero magnetic field and at finite filling factor are actually equal[29], namely \(n(x_{\mathrm{QH}},B)=n(x_{\mathrm{QH}},0)=n(w-w_{p}/2)\). As the position of the edge state \(x_{\mathrm{QH}}\) is related to the local filling factor \(\nu_{e}(x_{\mathrm{QH}})=\frac{hn_{e}(x_{\mathrm{QH}})}{eB}\) being an integer, i.e. \(\nu_{e}(x_{\mathrm{QH}})=\lfloor\nu\rfloor\), one can connect the bulk density \(n\), the local edge density \(n_{e}(x_{\mathrm{QH}})\) and the magnetic field \(B\), so that one can map \(n_{e}(x)\) as the plasmon width \(w\) varies and obtain:
\[n_{e}(x_{\mathrm{QH}})=n_{e}(w-w_{p}/2)=\frac{e|B|}{h}\lfloor\nu\rfloor \tag{3}\]
At half filling \(\nu=\lfloor\nu\rfloor+1/2\), this further simplifies to:
\[n_{e}(x_{\mathrm{QH}})=n_{e}(w-w_{p}/2)=\frac{e|B|}{h}\left(\nu-\frac{1}{2} \right)=n-\frac{e|B|}{2h} \tag{4}\]
To account for disorder and smearing effects, we assume that, on average, Eq.(4) can be generalized to all filling factors \(\nu\), using the approximation that the plasmon width \(w\) at \(\nu=\lfloor\nu\rfloor+1/2\) is the mean value of the width over the whole filling factor interval [\(\lfloor\nu\rfloor\), \(\lfloor\nu\rfloor+1\)[ in which the number of edge states is fixed at \(\lfloor\nu\rfloor\).
In Fig.13a and 13b we have plotted the quantity \(n-\frac{e|B|}{2h}\) as a function of the plasmon width \(w\) for the Pd DCg and Au DCg samples respectively and for different bulk electron densities \(n\), as well as the normalized quantities \(\left(n-\frac{e|B|}{2h}\right)/n=1-1/2\nu\), where one can observe that the curves for the different densities are superimposed. This confirms that the shape of the carrier density profile \(n_{e}(x)\), and in particular its characteristic depletion length \(l\), does not depend much on the bulk carrier density \(n\). For comparison, one can fit the data in Fig.13 with a heuristic edge function given by Kumada _et al._[22]:
\[f(x)=\frac{n_{e}(x)}{n}=\sqrt{\frac{x}{x+2l}} \tag{5}\]
Fits to this function yield a characteristic length \(l\sim 60-150\,\mathrm{nm}\) which is on the order of \(d\sim 66\,\mathrm{nm}\), and a plasmon broadening length \(w_{p}\simeq 1.2\,\mathrm{\SIUnitSymbolMicro m}\) for both samples.
### Velocity in the QSH regime
We here focus on the velocity \(v\) in the QSH regime (i.e. at \(B=0\) near the gap), plotted as a function of gate voltage (Fig.14). It exhibits a minimum in the gap, down to \(\sim 2.5\times 10^{4}\,\mathrm{m}\,\mathrm{s}^{-1}\) for the Pd DCg and \(\sim 1\times 10^{5}\,\mathrm{m}\,\mathrm{s}^{-1}\) for the Au DCg. As mentioned in the main text, these velocities are much lower than the ones predicted by \(\mathbf{k}\cdot\mathbf{p}\) calculations of the band structure, which yield plasmon velocities lower than but comparable to the Fermi velocity \(v_{F}^{\mathrm{CB}}\simeq 1\times 10^{6}\,\mathrm{m}\,\mathrm{s}^{-1}\) in the conduction band.
### Issues in the calibration
In this section, we discuss problems encountered when calibrating data sets in other samples or experimental runs, as well as a plausible origin. We recall that we calibrate the data by i) subtracting a parasitic stray coupling, measured in a situation where the edge states do not contribute to the signal (reversed magnetic field or carrier polarity), and ii) subtracting a phase reference taken when plasmons propagate quasi-infinitely fast, here in the limit of metallic 2D plasmons (at \(B=0\) and high densities \(n>4\times 10^{11}\,\mathrm{cm}^{-2}\)). We have for example observed that some data sets show the following issues when we apply the calibration procedure described in the main text:
* Near \(B=0\), sweeping \(V_{g}\) from positive towards negative values, the phase does not wind around the origin in a given direction (clockwise) but winds in the counter-clockwise direction in some range of \(V_{g}\). As a consequence, the velocity \(v\) shows divergences and sign changes near \(B=0\). When \(B\) increases, the phase winding is normal and, despite the ill-defined calibration, the results described in the main text for large \(B\) can be verified in these data sets as well.
* Some samples show no strong chirality: the signal amplitude \(M\) does not cancel out in any of the two directions of the magnetic field.
Though we do not have a precise understanding of the situation, we point out that the typical transverse widths \(w_{0},w_{p}\) of the edge states are comparable to the distance \(L\) between the finger gates \(RF_{g}\) and the contacts A and B. Assuming \(w_{0},w_{p}\) determine the size of typical puddles, disorder could result in complex percolation paths connecting \(\mathrm{RF}_{g}\) and the contacts A and B regardless of the chirality imposed by the magnetic field, making the calibration impossible in particularly disordered samples.
## VII Frequency dependence: study in the Pd sample
### Calibrated data for different frequencies
We here analyze the magnitude of the signal for different frequencies. As mentioned in the main text, the signal magnitude \(M\) has a strong asymmetry in magnetic field \(B\): it has a vanishing value for \(B>0\) and is non zero for \(B<0\). This feature is the signature of the chirality of the QH edge states. To illustrate this further, we define the asymmetry function of the magnitude \(\chi(V_{g},B)\) in the \((V_{g},B)\) plane as:
\[\chi(V_{g},B)=\frac{M(V_{g},B)-M(V_{g},-B)}{M(V_{g},B)+M(V_{g},-B)}. \tag{6}\]
This function varies between extremal values \(\pm 1\) that are reached when \(M\) is strictly zero for one \(B\) polarity only, and vanishes when the signal magnitude is equal for positive and negative \(B\).
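The sketch below shows how such an asymmetry map can be computed from a measured magnitude map; it is only an illustrative implementation, which assumes that the magnetic-field axis contains a value close to \(-B\) for every \(B\) and that \(M\) never vanishes for both field polarities at once.

```python
import numpy as np

def asymmetry(M, B):
    """Asymmetry function chi(V_g, B) of Eq. (6).

    M : 2D array of signal magnitudes, indexed as M[i_Vg, i_B]
    B : 1D array of magnetic-field values along the second axis of M
    """
    chi = np.zeros_like(M, dtype=float)
    for j, b in enumerate(B):
        j_neg = np.argmin(np.abs(B + b))              # index of the field closest to -B
        chi[:, j] = (M[:, j] - M[:, j_neg]) / (M[:, j] + M[:, j_neg])
    return chi
```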
In Fig.16, we present both the RF fan chart of \(M\) and color plots of \(\chi\) for frequencies \(f=3.2\,\mathrm{GHz}\), \(4\,\mathrm{GHz}\), \(5\,\mathrm{GHz}\), \(6\,\mathrm{GHz}\) and \(9.45\,\mathrm{GHz}\). Though the maximum amplitude of \(M\) is strongly frequency-dependent, the color plots look relatively similar. One can nevertheless clearly observe that the amplitude measured in the QH regime (relative to the peak value at \(B=0\)) decreases with frequency, and that the asymmetry between \(B\) and \(-B\) weakens. Now looking at the color plots of \(\chi\), we observe that the signal is strongly asymmetric (\(\chi(V_{g},B)\simeq\pm 1\) for \(B\neq 0\)) for each frequency, in particular in the conduction band (\(V_{g}\gtrsim-0.6\,\mathrm{V}\)). For the lowest frequencies, \(f=3.2\,\mathrm{GHz}\) to \(5\,\mathrm{GHz}\), the asymmetry function varies abruptly from \(\chi\simeq+1\) to \(\chi\simeq-1\) when sweeping the magnetic field from negative (\(B<0\)) to positive (\(B>0\)) values. In the valence band, and more particularly in the gap, the asymmetry function shows a weaker chirality than in the conduction band. Nevertheless, one can also see that the asymmetry function switches sign when passing from the conduction band (\(V_{g}\gtrsim-0.6\,\mathrm{V}\)) to the valence band (\(V_{g}\lesssim-0.9\,\mathrm{V}\)), indicating an inversion of the chirality. For higher frequencies, \(f=6\,\mathrm{GHz}\) and \(9.45\,\mathrm{GHz}\), even if \(\chi\) still shows a strong asymmetry, the transition at \(B=0\) is smoother and the asymmetry function takes lower extremal absolute values (\(|\chi|\lesssim 1\)) than for lower frequencies. This suggests that further increasing the frequency degrades the observed chirality in the RF response. This might be explained by cross-talk between opposite edges of the sample, for which the QH states have opposite directions [47].
### Phase vs group velocity
The phase velocity is defined as the velocity of a plane of constant phase and is expressed as the ratio between the angular frequency \(\omega=2\pi f\) and the wavenumber \(k\), \(v_{\phi}=v=\frac{\omega}{k}\), while the group velocity, at which energy is transported, is given by \(v_{g}=\frac{\partial\omega}{\partial k}\). We expect, in the studied limit of small frequencies, that the dispersion relation \(\omega(k)\) is linear, such that group and phase velocities are equal, \(v_{\phi}=v_{g}\), and the edge plasmons are non-dispersive. In the Pd sample, we have measured the phase shift fan chart for four different frequencies: 3.2 GHz, 4 GHz, 5 GHz and 6 GHz. For each point in the \((V_{g},B)\) phase space, we have fitted the relation between the angular frequency \(\omega=2\pi f\) and the wavenumber \(k=\phi/L\) with a linear function. Examples are given in Fig.17. The group velocity is then extracted from the slope of the linear fit. One can see that, in the conduction band, the four points are relatively well fitted by a linear dispersion (represented as a black dashed line on the figure). Finally, in Fig.18, the group and phase velocities are plotted as a function of the magnetic field \(B\) for fixed values of the gate voltage \(V_{g}\), in the conduction band. The two velocities are found to match well, confirming that the relation \(v_{\phi}=v_{g}\) holds in our measurements.
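A minimal sketch of this extraction is given below. The propagation length \(L\) is a placeholder value, and the phase data are assumed to be already calibrated as described above.

```python
import numpy as np

L = 10e-6   # propagation length between the RF gate and the contact (placeholder value, in m)

def phase_and_group_velocity(f, phi):
    """Phase and group velocities from phase shifts measured at several frequencies.

    f   : array of drive frequencies (Hz)
    phi : array of accumulated phase shifts (rad) at one point of the (V_g, B) plane
    """
    omega = 2 * np.pi * np.asarray(f)
    k = np.asarray(phi) / L
    v_phase = omega / k                      # one value per frequency
    slope, _ = np.polyfit(k, omega, 1)       # linear fit omega = v_g * k + const
    return v_phase, slope                    # the slope is the group velocity d(omega)/dk
```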
|
2309.02831 | The multiplicative semigroup of a Dedekind domain | In 1995 Grillet defined the concept of a stratified semigroup and a
stratified semigroup with zero. The present authors extended that idea to
include semigroups with a more general base and proved, amongst other things,
that finite semigroups in which the H-classes contain idempotents, are
semilattices of stratified extensions of completely simple semigroups, and
every strict stratified extension of a Clifford semigroup is a semilattice of
stratified extensions of groups. We continue this work here by considering the
multiplicative semigroup of Dedekind domains and show in particular that
quotients of such rings have a multiplicative structure that is a (finite)
Boolean algebra of stratified extensions of groups. | James Renshaw, William Warhurst | 2023-09-06T08:27:50Z | http://arxiv.org/abs/2309.02831v1 | # The multiplicative semigroup of a Dedekind domain
###### Abstract
In 1995 Grillet defined the concept of a stratified semigroup and a stratified semigroup with zero. The present authors extended that idea to include semigroups with a more general base and proved, amongst other things, that finite semigroups in which the \(\mathcal{H}-\)classes contain idempotents, are semilattices of stratified extensions of completely simple semigroups, and every strict stratified extension of a Clifford semigroup is a semilattice of stratified extensions of groups. We continue this work here by considering the multiplicative semigroup of Dedekind domains and show in particular that quotients of such rings have a multiplicative structure that is a (finite) boolean algebra of stratified extensions of groups.
**Keywords** Semigroup, stratified extension, semilattice, Dedekind domain.
**Mathematics Subject Classification** 2020: 20M10.
## 1 Introduction and preliminaries
In [7], the authors introduce stratified extensions of semigroups as a generalisation of the work of Grillet in [4]. They define the _base_ of a semigroup \(S\) to be the subset \(\mathrm{Base}(S)=\bigcap_{m>0}S^{m}\) and note that \(S\) is a _stratified semigroup_ as defined by Grillet if \(\mathrm{Base}(S)=\{0\}\) or \(\mathrm{Base}(S)\) is empty. A semigroup \(S\) is then called a _stratified extension_ of \(\mathrm{Base}(S)\) if \(\mathrm{Base}(S)\neq\emptyset\). The name signifies the fact that in this case \(S\) is an ideal extension of \(\mathrm{Base}(S)\) by a stratified semigroup with zero.
After a few preliminaries, in Section 2 we consider the multiplicative structure of commutative rings in a more general way and describe the \(\mathcal{J}-\)classes in terms of certain annihilators. We then show that the multiplicative semigroup can be viewed as a semilattice of semigroups. In Section 3 we specialise to Dedekind domains and show that the subsemigroups of the semilattice are stratified extensions of groups. In Section 4, we consider quotients of Dedekind domains and demonstrate, by using prime factorisations of ideals, that the multiplicative structure is a finite Boolean algebra of stratified extensions of groups, and give a 'recipe' for constructing both the semilattice and the stratified subsemigroups. Section 5 then presents some interesting examples.
If \(S\) is a stratified extension of \(\operatorname{Base}(S)\), the _layers_ of \(S\) are defined to be the sets \(S_{m}=S^{m}\setminus S^{m+1}\), \(m\geq 1\). This definition makes sense for any semigroup, of course, but we are only interested in the case when \(\operatorname{Base}(S)=\cap_{m\geq 1}S^{m}\neq\emptyset\). Every element of \(S\) lies either in the base of \(S\) or in exactly one layer of \(S\), and if \(s\in S_{m}\) then \(m\) is the _depth_ of \(s\). If \(S\) has finitely many layers then the number of layers is called the _height_ of \(S\). The layer \(S_{1}\) generates every element of \(S\setminus\operatorname{Base}(S)\) and is contained in any generating set of \(S\).
Since \(\operatorname{Base}(S)\subseteq S^{m}\) for any \(m\in\mathbb{N}\), we have an alternative characterisation for the elements of \(\operatorname{Base}(S)\). An element \(s\in S\) lies in \(\operatorname{Base}(S)\) if and only if, for every \(m\in\mathbb{N}\), \(s\) can be factored into a product of \(m\) elements, i.e. \(s=a_{1}a_{2}\ldots a_{m}\) for some \(a_{i}\in S\). This characterisation allows us to deduce some immediate properties of \(\operatorname{Base}(S)\) as a subsemigroup of \(S\).
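To make the notions of base and layers concrete, the short computation below determines the chain \(S\supseteq S^{2}\supseteq S^{3}\supseteq\ldots\) for a small finite semigroup; the example (the subsemigroup of \((\mathbb{Z}_{16},\times)\) generated by \(2\)) is ours and is purely illustrative.

```python
from itertools import product

def power_chain(S, mult):
    """Chain S, S^2, S^3, ... of a finite semigroup; it stabilises at Base(S)."""
    chain = [frozenset(S)]
    while True:
        nxt = frozenset(mult(a, b) for a, b in product(chain[-1], chain[0]))
        if nxt == chain[-1]:
            return chain
        chain.append(nxt)

# Toy example: the subsemigroup {2, 4, 8, 0} of (Z_16, *) generated by 2.
mult = lambda a, b: (a * b) % 16
chain = power_chain({0, 2, 4, 8}, mult)

base = chain[-1]
layers = [sorted(chain[i] - chain[i + 1]) for i in range(len(chain) - 1)]
print("Base:  ", sorted(base))   # [0]  -- a stratified semigroup with zero
print("Layers:", layers)         # [[2], [4], [8]] -- the depth of 2^k is k, and the height is 3
```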
**Lemma 1.1** ([7, Corollary 2.2]): _Let \(S\) be a semigroup. \(\operatorname{Reg}(S)\subseteq\operatorname{Base}(S)\) and if \(S\) is regular then \(\operatorname{Base}(S)=S\)._
**Lemma 1.2**: _Let \(S,T\) be semigroups and \(f:S\to T\) a morphism. Then for all \(i\in\mathbb{N}\), \(S^{i}\subseteq f^{-1}(T^{i})\) and so in particular, \(\operatorname{Base}(S)\subseteq f^{-1}(\operatorname{Base}(T))\)._
For the basic concepts in semigroup theory we refer the reader to [5]. In particular, we say that \(S\) is a _semilattice of semigroups_\(S_{\alpha},\alpha\in Y\), and write \(S=\mathcal{S}[Y,S_{\alpha}]\), if \(Y\) is a semilattice, \(S=\cup_{\alpha\in Y}S_{\alpha}\) and for all \(\alpha,\beta\in Y\), \(S_{\alpha}S_{\beta}\subseteq S_{\alpha\beta}\). Notice that if \(Y\) is a semilattice and if \(\phi:S\to Y\) is an onto morphism, then \(S\) is a semilattice of semigroups \(S=\mathcal{S}[Y,\phi^{-1}(\alpha)]\).
For more details of basic definitions and results in ring theory, we refer the reader to [1] and [2]. An ideal \(I\) of a ring \(R\) is _prime_ if \(I\) is a proper ideal and for all \(a,b\in R\), if \(ab\in I\) then either \(a\in I\) or \(b\in I\). A _domain_ is a ring with no non-zero zero-divisors and a _Dedekind domain_ is a commutative domain in which every non-zero proper ideal can be factored into a product of prime ideals. If \(I,J\unlhd R\) are ideals of \(R\) then we say that \(I\)_divides_\(J\) and write \(I|J\) if and only if there exists \(H\unlhd R\) with \(J=IH\). Then \(R\) is a Dedekind domain if and only if
\[\text{for all }I,J\unlhd R,J\subseteq I\text{ if and only if }I|J.\]
A _principal ideal domain_ is a commutative domain in which every ideal is principal. A Dedekind domain is a principal ideal domain if and only if it is a unique factorisation domain. If \(R\) is a Dedekind domain and \(\{0\}\neq I\unlhd R\) is a non-zero ideal of \(R\) then \(R/I\) is a principal ideal ring. A Dedekind domain is Noetherian and as such every non-zero, non-unit element can be factorised into a product of irreducible elements. The following elementary properties of ideals will be used implicitly in some of what follows.
**Lemma 1.3**: _Let \(I,J\) be ideals of \(R\). Then_
1. \(IJ\subseteq I\cap J\)_._
2. \(I\cup J\subseteq I+J\)__
3. \(I\subseteq J\iff I+J=J\)_._
4. \(IJ+J=J\)_._
## 2 Rings as semilattices of semigroups
Let \(R\) be a commutative ring with unity. We will show that the multiplicative semigroup of \(R\) is a semilattice of semigroups, and investigate this structure further in Section 3. Note that, as \(R\) is commutative, Green's relations \({\cal H}={\cal R}={\cal L}={\cal D}={\cal J}\) coincide on the multiplicative semigroup of \(R\). Recall that for any ring \(R\), the quotient \(R/(0)\) is naturally isomorphic to \(R\) itself via the map \(x\mapsto x+(0)\). Some of the results below take place within \(R/(0)\) and we could make use of this isomorphism to recast them in \(R\) instead. However, we have chosen not to do this explicitly.
Let \(D\) be the set of all ideals of \(R\). It is easy to see that, under the usual addition and multiplication of ideals, \(D\) forms a semiring with additive identity \((0)\) and multiplicative identity \((1)=R\). Let \(\delta:R\to D\) be given by \(\delta(x)=(x)\). This is clearly a morphism of multiplicative monoids.
**Proposition 2.1**: _The kernel of \(\delta\) is Green's \({\cal J}-\)relation and hence \({\cal J}\) is a congruence on \(R\)._
**Proof.** Let \(x,y\in R\) such that \(\delta(x)=\delta(y)\). Then the principal ideals \(RxR\) and \(RyR\) are equal, so \(x{\cal J}y\). Conversely if \(x{\cal J}y\) then \(\delta(x)=RxR=RyR=\delta(y)\).
Let \(x\in R\) and let
\[\overline{x}={\rm Ann}(x)=\{y\in R\mid xy=0\}\]
be the annihilator of \(x\), which is clearly an ideal of \(R\). Let \(R_{\overline{x}}=R/\overline{x}\) and let \(U_{\overline{x}}\) be the group of units of this quotient. For \(y\in R\), we denote by \([y]_{\overline{x}}\) the coset \(y+\overline{x}\) and consider the set \(xU_{\overline{x}}\). We will see that this set is essentially the \({\cal J}-\)class of \(R\) containing \(x\). Note that \(x\overline{x}=\{xy\mid y\in R,xy=0\}=\{0\}=(0)\).
**Lemma 2.2**: _Let \(x\in R\) and let \(V_{x}=\{u\in R\mid\exists v\in R,xuv=x\}\). Then_
1. \(xU_{\overline{x}}\subseteq xR_{\overline{x}}\subseteq R/(0)\)_,_
2. _For_ \([u]_{\overline{x}},[v]_{\overline{x}}\in R_{\overline{x}}\)_,_ \(x[u]_{\overline{x}}=x[v]_{\overline{x}}\) _if and only if_ \([u]_{\overline{x}}=[v]_{\overline{x}}\)_,_
3. \(V_{x}\) _is a submonoid of_ \(R\) _and_ \(u\in V_{x}\) _if and only if_ \([u]_{\overline{x}}\in U_{\overline{x}}\)_._
**Proof.** :
1. For any \(u\in R\) we have \(x[u]_{\overline{x}}=x(u+\overline{x})=xu+x\overline{x}=xu+(0)\in R/(0)\) and so \(xR_{\overline{x}}\subseteq R/(0)\).
2. Let \(x[u]_{\overline{x}}=x[v]_{\overline{x}}\). Then \(xu-xv\in(0)\) and so \(x(u-v)=0\). Hence \(u-v\in\overline{x}\) and so \([u]_{\overline{x}}=[v]_{\overline{x}}\). The converse is obvious.
3. That \(V_{x}\) is a submonoid of \(R\) is fairly clear. Suppose that \([u]_{\overline{x}}\in U_{\overline{x}}\) so that there exists \([v]_{\overline{x}}\in U_{\overline{x}}\) such that \([u]_{\overline{x}}[v]_{\overline{x}}=[1]_{\overline{x}}\). Then \(uv-1\in\overline{x}\) and so \(xuv=x\). Conversely, if \(u,v\in R\) such that \(xuv=x\) then \(x[u]_{\overline{x}}[v]_{\overline{x}}=xuv+(0)=x+(0)=x[1]_{\overline{x}}\) and so from part (2) it follows that \([u]_{\overline{x}}\in U_{\overline{x}}\).
**Theorem 2.3**: _Let \(x,y\in R\). Then \(x{\cal J}y\) if and only if \(y+(0)\in xU_{\overline{x}}\). Consequently the sets \(xU_{\overline{x}}\) are the \({\cal J}-\)classes of \(R/(0)\)._
**Proof.** : Suppose \(y+(0)\in xU_{\overline{x}}\). Then \(y+(0)=x[u]_{\overline{x}}=xu+(0)\) for some \(u\in V_{x}\), and so \(y=xu\). Since \([u]_{\overline{x}}\) is a unit, there exists \(v\in V_{x}\) such that \(xuv=x\) and so \(yv+(0)=xuv+(0)=x+(0)\) and hence \(x=yv\) and so \(x{\cal J}y\).
Now let \(x{\cal J}y\). Then there exists \(u\in R\) such that \(xu=y\). Then \(y+(0)=xu+(0)=x[u]_{\overline{x}}\). Further, there exists \(v\in R\) such that \(x=yv\). Then \(x=xuv\) and so from Lemma 2.2(3), \([u]_{\overline{x}}\in U_{\overline{x}}\) and \(y+(0)\in xU_{\overline{x}}\).
From this and the fact that \({\cal J}=\ker(\delta)\) we immediately deduce
**Corollary 2.4**: _Let \(x,y\in R\). Then_
\[x{\cal J}y\mbox{ if and only if }xU_{\overline{x}}=yU_{\overline{y}}\mbox{ if and only if }(x)=(y).\]
Notice that \(x{\cal J}_{R}y\) if and only if \((x+(0))\,{\cal J}_{R/(0)}\,(y+(0))\) and that the sets \(xU_{\overline{x}}\) are the \({\cal J}-\)classes of \(R/(0)\). Consequently, if \(x_{1}+(0),x_{2}+(0)\in xU_{\overline{x}}\) then
\[x_{1}\;{\cal J}_{R}\;x_{2}\;{\cal J}_{R}\;x\]
and so since \({\cal J}_{R}=\ker\delta\)
\[(x_{1})=(x_{2})=(x).\]
Note also that since \(u\in V_{x}\) if and only if \([u]_{\overline{x}}\in U_{\overline{x}}\) then \(y\in xV_{x}\) if and only if \(y+(0)\in xU_{\overline{x}}\). It follows that \(xV_{x}\) is the image of \(xU_{\overline{x}}\) under the natural isomorphism from \(R/(0)\) to \(R\) and hence is the \({\cal J}\)-class of \(R\) containing \(x\). Our reason for working with \(xU_{\overline{x}}\) rather than \(xV_{x}\) is due to Lemma 2.2(2): if \(u,v\in V_{x}\) and \(xu=xv\) then it is not necessarily the case that \(u=v\). For example, if \(e\in R\) is a non-unit idempotent then \(1,e\in V_{e}\) and \(e1=ee=e\) but \(e\neq 1\). This cancellation property is required for the following theorem.
**Theorem 2.5**: _Let \(R\) be a ring and let \(x\in R\). The set \(xU_{\overline{x}}\) is a subsemigroup of \(R/(0)\) if and only if \(\delta(x)\) is an idempotent. It is in fact a subgroup which is isomorphic to \(U_{\overline{x}}\)._
**Proof.** Let \(x_{1}+(0),x_{2}+(0)\in xU_{\overline{x}}\). Then from above, \(x_{1}{\cal J}x_{2}{\cal J}x\) and so since \({\cal J}\) is a congruence, it follows that \(x_{1}x_{2}{\cal J}x^{2}\) and hence from Corollary 2.4, \(x_{1}x_{2}+(0)\in x^{2}U_{\overline{x^{2}}}\). But if \(xU_{\overline{x}}\) is a subsemigroup of \(R/(0)\) then \(x_{1}x_{2}+(0)\in xU_{\overline{x}}\) and so \(xU_{\overline{x}}\cap x^{2}U_{\overline{x^{2}}}\neq\emptyset\). Consequently \(xU_{\overline{x}}=x^{2}U_{\overline{x^{2}}}\) and so \(\delta(x)=(x)=(x^{2})=\delta(x)^{2}\) by Corollary 2.4.
Conversely, if \((x)=(x^{2})\) then \(x{\cal J}x^{2}\) and so in particular \(x+(0)\) and \(x^{2}+(0)\) belong to the same \({\cal H}\)-class of \(R/(0)\), \(xU_{\overline{x}}\), and hence this \({\cal H}\)-class is a group.
If \((x)=(x^{2})\) then \(x=x^{2}k\) for some \(k\in R\) and so \(x\in V_{x}\). Now define \(\phi:xU_{\overline{x}}\to U_{\overline{x}}\) by \(\phi(x[u]_{\overline{x}})=[xu]_{\overline{x}}\). By Lemma 2.2(3) this map is well-defined and it is clearly onto. In addition
\[\phi(x[u]_{\overline{x}}x[v]_{\overline{x}})=\phi(x^{2}[uv]_{ \overline{x}})=\phi(x[xuv]_{\overline{x}})\] \[=[x^{2}uv]_{\overline{x}}=([xu]_{\overline{x}})\,([xv]_{\overline{ x}})\] \[=\phi(x[u]_{\overline{x}})\phi(x[v]_{\overline{x}}),\]
and so \(\phi\) is a morphism. Finally, if \(\phi(x[u]_{\overline{x}})=\phi(x[v]_{\overline{x}})\) then \([xu]_{\overline{x}}=[xv]_{\overline{x}}\) and so \([x]_{\overline{x}}[u]_{\overline{x}}=[x]_{\overline{x}}[v]_{\overline{x}}\). Hence \([u]_{\overline{x}}=[v]_{\overline{x}}\) since \([x]_{\overline{x}}\in U_{\overline{x}}\). Therefore \(x[u]_{\overline{x}}=x[v]_{\overline{x}}\) as required.
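As a concrete illustration of Theorem 2.5 (the example is ours, not taken from the text), consider \(R=\mathbb{Z}_{12}\) and \(x=4\): here \(\overline{x}=\operatorname{Ann}(4)=(3)\), \(R_{\overline{x}}\cong\mathbb{Z}_{3}\), and \(4U_{\overline{4}}=\{4,8\}\) is a two-element group with identity \(4\), whereas the \(\mathcal{J}\)-class of \(2\) is not even a subsemigroup since \((2)\neq(2)^{2}\). The small check below verifies this numerically.

```python
n = 12
R = range(n)

def Jclass(x):
    """J-class of x in the multiplicative semigroup of Z_n: all y with (x) = (y)."""
    ideal = {x * u % n for u in R}                      # the principal ideal (x)
    return sorted(y for y in R if {y * u % n for u in R} == ideal)

print(Jclass(4))   # [4, 8]: a group with identity 4, isomorphic to U(Z_12 / Ann(4)) = U(Z_3)
print(Jclass(2))   # [2, 10]: not a subsemigroup, e.g. 2 * 10 = 8 is not in it
```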
For any \(I\in D\), let \(E_{I}=\{J\in E(D)\mid J\subseteq I\}\). This set is non-empty since \((0)\in E(D)\) and \((0)\subseteq I\) for any \(I\in D\). We claim that
\[\varepsilon(I)=\sum_{J\in E_{I}}J\]
is the greatest element of \(E_{I}\) with respect to subset inclusion. It is easy to see that \(\varepsilon(I)\in D\) and for every \(J\in E_{I}\), \(J\subseteq\varepsilon(I)\). It remains to show that \(\varepsilon(I)\in E(D)\) and \(\varepsilon(I)\subseteq I\).
A general element of \(\varepsilon(I)\) has the form \(e_{1}+\ldots+e_{n}\) where each \(e_{i}\) lies in some ideal \(J_{i}\in E_{I}\). Hence every element of \(\varepsilon(I)\) is an element of \(I\) and so \(\varepsilon(I)\subseteq I\). As \(J_{i}\in E(D)\), \(e_{i}\in J_{i}J_{i}\). Then
\[e_{1}+\cdots+e_{n}\in J_{1}J_{1}+\ldots+J_{n}J_{n}\subseteq(J_{1}+\ldots+J_{n })(J_{1}+\ldots+J_{n}).\]
Since each \(J_{i}\in E_{I}\), \(J_{1}+\ldots+J_{n}\subseteq\varepsilon(I)\) so \(\varepsilon(I)\subseteq\varepsilon(I)\varepsilon(I)\). The reverse inclusion holds for any ideal and so \(\varepsilon(I)\in E(D)\) as required.
This construction clearly describes a well-defined map \(\varepsilon:D\to E(D)\). For each \(e\in E(D)\) let
\[D_{e}=\varepsilon^{-1}(e).\]
**Proposition 2.6**: _The multiplicative semigroup of \(D\) is a semilattice of semigroups \({\cal S}[E(D);D_{e}]\)._
**Proof.** As \(R\) is commutative, \(D\) is also commutative and hence \(E(D)\) is a semilattice. If \(I\in E(D)\) then clearly \(I\) is the greatest element of \(E_{I}\) so \(\varepsilon(I)=I\) and \(\varepsilon\) is a surjection onto the semilattice \(E(D)\). It remains to show that \(\varepsilon\) is a homomorphism. Let \(I,J\in D\) and \(K\in E(D)\). If \(K\in E_{I}\cap E_{J}\) then \(K\subseteq I\) and \(K\subseteq J\). Then \(K=KK\subseteq IJ\) so \(K\in E_{IJ}\). Conversely, if \(K\in E_{IJ}\) then \(K\subseteq IJ\). But \(IJ\subseteq I\) so \(K\subseteq I\) and \(K\in E_{I}\). In a similar way, \(K\in E_{J}\) and hence \(K\in E_{I}\cap E_{J}\) and \(E_{IJ}=E_{I}\cap E_{J}\).
It is easily seen that for all \(L\in D\), \(E_{L}=E_{\varepsilon(L)}\) and hence
\[E_{\varepsilon(IJ)}=E_{IJ}=E_{I}\cap E_{J}=E_{\varepsilon(I)}\cap E_{ \varepsilon(J)}=E_{\varepsilon(I)\varepsilon(J)}.\]
As \(E(D)\) is a semilattice \(\varepsilon(I)\varepsilon(J)\in E(D)\) and so it follows that \(\varepsilon(IJ)=\varepsilon(I)\varepsilon(J)\) as required.
For each \(e\in E(D)\) let \(R_{e}=(\varepsilon\delta)^{-1}(e)\).
**Theorem 2.7**: _The multiplicative semigroup of \(R\) is a semilattice of semigroups \({\cal S}[{\rm Im}(\varepsilon\delta);R_{e}]\)._
**Proof.** It is clear that \(\varepsilon\delta\) is a surjective homomorphism from \(R\) onto its image. Since \({\rm Im}(\varepsilon\delta)\) is a subsemigroup of the semilattice \(E(D)\), it is also a semilattice.
We now want to consider the nature of the semigroups \(D_{e}\) and \(R_{e}\) for a specific type of ring.
## 3 Dedekind domains
Let \(R\) be a Dedekind domain. We wish in the next section to consider quotients of Dedekind domains but we first make some observations about Dedekind domains in general. While the semigroup structure of these rings is not too complex, it is interesting in its own right. We will show that \(R\) is a semilattice of stratified extensions of groups. More specifically, \(R\) is a semilattice of two semigroups; its group of units and a stratified extension of the trivial group. Recall that \(D\) is the collection of all ideals of \(R\) and that \(\varepsilon\delta(x)\) is the largest idempotent ideal contained in \((x)\).
**Proposition 3.1**: _The idempotents of \(D\) are \(R\) and \((0)\)._
**Proof.** As \(R\) is a Dedekind domain, every non-zero proper ideal \(I\) of \(R\) can be factorised uniquely as a product of prime ideals, so \(I=X_{1}\ldots X_{n}\) for some prime ideals \(X_{i}\unlhd R\). If \(I\in E(D)\) then \(I=I^{2}=X_{1}\ldots X_{n}X_{1}\ldots X_{n}\) is another factorisation of \(I\) into prime ideals. This contradicts the uniqueness of the factorisation, and so \(I\) cannot be idempotent. Hence there are no idempotent non-zero proper ideals of \(R\) and so \(E(D)=\{R,(0)\}\).
Clearly for any ideal \(I\) of \(R\) we have \((0)\subseteq I\subseteq R\) and so \(\varepsilon(I)=R\) if \(I=R\) and \(\varepsilon(I)=(0)\) otherwise. Note also that \(R=\varepsilon\delta(1)\) and \((0)=\varepsilon\delta(0)\) so \(\varepsilon\delta\) is surjective, and in addition \(D_{R}\) is the trivial semigroup and hence is vacuously a stratified extension of a group.
**Proposition 3.2**: _The subsemigroup \(D_{(0)}\) is a stratified extension of the trivial group \(\{(0)\}\)._
**Proof.** Let \(I\) be a non-zero proper ideal of \(R\) so \(I\in D_{(0)}\). Suppose \(I\) factors uniquely as a product of \(n\) prime ideals, \(I=X_{1}\ldots X_{n}\). Each \(X_{i}\) is a non-zero proper ideal of \(R\) so lies in \(D_{(0)}\) and hence \(I\in{D_{(0)}}^{n}\). If \(I\in{D_{(0)}}^{n+1}\) then \(I=Y_{1}\ldots Y_{n+1}\) for some \(Y_{i}\in D_{(0)}\). Since \(I\) is non-zero, clearly each \(Y_{i}\) is non-zero and so factors as a product of prime ideals. But then \(I\) can be written as a product of at least \(n+1\) prime ideals, contradicting the uniqueness of the previous factorisation. Hence \(I\not\in{D_{(0)}}^{n+1}\) and so \(I\in{D_{(0)}}^{n}\setminus{D_{(0)}}^{n+1}\). As this holds for every non-zero ideal of \(R\), we have \({\rm Base}(D_{(0)})=\{(0)\}\) and hence \(D\) is a stratified extension of the trivial group.
**Theorem 3.3**: _Let \(R\) be a Dedekind domain. Then \(R\) is a semilattice of stratified extensions of groups._
**Proof.** Since \(\varepsilon\delta\) is a surjection, by Theorem 2.7, \(R\) is a semilattice of semigroups \({\cal S}[E(D);R_{e}]\). Clearly \(\delta(x)=R\) if and only if \(x\) is a unit of \(R\), and so \(R_{R}\) is exactly the group of units of \(R\). For \(R_{(0)}\), by Lemma 1.2, \({\rm Base}(R_{(0)})\subseteq\delta^{-1}({\rm Base}(D_{(0)}))=\delta^{-1}((0))= \{0\}\). Since \(0\) is idempotent we have \(0\in{\rm Base}(R_{(0)})\) and so \({\rm Base}(R_{(0)})=\{0\}\) and hence \(R_{(0)}\) is a stratified extension of the trivial group.
Note that \({\rm Base}(R_{(0)})=\delta^{-1}({\rm Base}(D_{(0)}))\). In general the layers within the stratified structure of \(D_{(0)}\) and within \(R_{(0)}\) will not be the same. However,
**Proposition 3.4**: _Let \(R\) be a Dedekind domain and let \(i>1\). Then \({R_{(0)}}^{i}=\delta^{-1}({D_{(0)}}^{i})\) if and only if \(R\) is a principal ideal domain._
**Proof.** Note that by Lemma 1.2, \({R_{(0)}}^{i}\subseteq\delta^{-1}({D_{(0)}}^{i})\) is always true for any commutative ring \(R\).
Let \(R\) be a principal ideal domain and let \(x\in\delta^{-1}({D_{(0)}}^{i})\). Then \(\delta(x)=(x)\) can be factorised as a product of \(i\) principal ideals \((x)=(x_{1})\ldots(x_{i})=(x_{1}\ldots x_{i})\) with each \((x_{j})\in D_{(0)}\). Hence for \(1\leq j\leq i\), \(x_{j}\in R_{(0)}\) and so \(x_{1}\ldots x_{i}\in R_{(0)}\). Since \(\delta(x)=\delta(x_{1}\ldots x_{i})\) we have \(x{\cal J}x_{1}\ldots x_{i}\) and so \(x=x_{1}\ldots x_{i}u\) for some \(u\in R\). Since
\[\varepsilon\delta(x_{i}u)=\varepsilon\delta(x_{i})\varepsilon\delta(u)=(0) \varepsilon\delta(u)=(0)\]
then \(x_{i}u\in R_{(0)}\) and it follows that \(x\in{R_{(0)}}^{i}\).
For the converse, note that since \(R\) is a Dedekind domain it is Noetherian and hence every non-zero, non-unit element can be factorised into a product of irreducible elements. It follows that every irreducible element of \(R\) is prime if and only if \(R\) is a unique factorisation domain and hence a principal ideal domain. Hence, if \(R\) is not a principal ideal domain, there exists some \(x\in R_{(0)}\) such that \(x\) is irreducible but not prime (recall that \(R_{R}\) consists of units which are not irreducible). Since \(x\) is irreducible it cannot be written as a product of two non-unit elements of \(R\) and hence \(x\not\in{R_{(0)}}^{2}\). Since \(x\) is not prime, \((x)\) is not a prime ideal and so has a unique factorisation as a product of prime ideals \(X_{1}\ldots X_{n}\) for some \(n>1\). In particular, \((x)\in{D_{(0)}}^{2}\) and so \({R_{(0)}}^{2}\neq\delta^{-1}({D_{(0)}}^{2})\). It is then an easy matter to extend this for all \(i\geq 2\).
Notice then that, when \(R\) is a principal ideal domain, the \(i\)-th layer of \(R_{(0)}\) is the preimage of the \(i\)-th layer of \(D_{(0)}\).
**Corollary 3.5**: _An element \(x\in R\) is prime if and only if \((x)\) lies in the first layer of \(D_{(0)}\). Additionally, if \(x\in R\) is prime then \(x\) lies in the first layer of \(R_{(0)}\). The converse holds only when \(R\) is a PID._
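For \(R=\mathbb{Z}\) (a principal ideal domain) this gives a very concrete picture: the depth of a non-zero non-unit integer in \(R_{(0)}\) is simply the number of its prime factors counted with multiplicity. The snippet below (our illustration, using SymPy's factorisation) computes this depth.

```python
from sympy import factorint

def depth_in_R0(x):
    """Depth (layer index) in R_(0) of a non-zero non-unit integer, for R = Z.

    By Propositions 3.2 and 3.4, this is the number of prime factors of x
    counted with multiplicity.
    """
    return sum(factorint(abs(x)).values())

print(depth_in_R0(7))    # 1: prime, so it lies in the first layer
print(depth_in_R0(12))   # 3: 12 = 2 * 2 * 3, so it lies in the third layer
```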
## 4 Quotients of Dedekind domains
Let \(S\) be a Dedekind domain, \(A\unlhd S\) and let \(R=S/A\). We will demonstrate that \(R\) is a semilattice of stratified extensions of groups. Note that when \(A=(0),R\cong S\) and this case has effectively been considered in Section 3. When \(A=S\) then \(R=\{0\}\) and this situation is trivial. Hence we shall assume in what follows that \(S\neq A\neq(0)\).
Let \(D_{A}\) be the set of ideals of \(S\) containing \(A\) and define an operation \(*\) on \(D_{A}\) such that \(X*Y=XY+A\). Then
\[(X*Y)*Z=(XY+A)*Z=(XY+A)Z+A=XYZ+AZ+A=XYZ+A\]
and similarly \(X*(Y*Z)=XYZ+A\) and so \(*\) is associative. Note that for every \(I\in D_{A}\), \(I+A=I\).
The following is well known (see for example [1, Third Isomorphism Theorem, Page 303]), but as the result is normally presented as an isomorphism \(D_{A}\to D\), we feel the proof is useful to present here.
**Lemma 4.1**: _The map \(\Phi:D\to D_{A}\) given by \(\Phi(I)=\bigcup_{X\in I}X\) is an isomorphism._
**Proof.** Note that \(x+A\in I\) if and only if \(x\in\Phi(I)\).
We first show \(\Phi\) is well defined. Let \(x,y\in\Phi(I)\). Then \(x+A,y+A\in I\) so \(x+y+A\in I\) and hence \(x+y\in\Phi(I)\). Similarly for any \(z\in S\), \(z+A\in R\) so \(xz+A\in I\) and \(xz\in\Phi(I)\) and hence \(\Phi(I)\) is an ideal of \(S\). Since \(0+A\in I\), \(A\subseteq\Phi(I)\) and so \(\Phi(I)\in D_{A}\).
To see that \(\Phi\) is injective, if \(\Phi(I)=\Phi(J)\) then we have
\[x+A\in I\Leftrightarrow x\in\Phi(I)\Leftrightarrow x\in\Phi(J)\Leftrightarrow x +A\in J\]
so \(I=J\). For surjectivity, let \(I\) be an ideal of \(S\) containing \(A\). Then \(J=\{x+A\mid x\in I\}\) is clearly an ideal of \(R\) and \(\Phi(J)=I\).
Finally we show that \(\Phi\) is a homomorphism. If \(x\in\Phi(I)*\Phi(J)=\Phi(I)\Phi(J)+A\) then \(x=x_{1}y_{1}+\ldots+x_{n}y_{n}+a\) where \(a\in A\), \(x_{i}+A\in I\) and \(y_{i}+A\in J\) for each \(i\in\{1,\ldots,n\}\). Then \(x_{1}y_{1}+\ldots+x_{n}y_{n}+A\in IJ\) so \(x_{1}y_{1}+\ldots+x_{n}y_{n}\in\Phi(IJ)\) and \(x_{1}y_{1}+\ldots+x_{n}y_{n}+a\in\Phi(IJ)+A=\Phi(IJ)\). Hence \(\Phi(I)*\Phi(J)\subseteq\Phi(IJ)\). For the reverse inclusion let \(x\in\Phi(IJ)\) so \(x+A\in IJ\) and \(x+A=(x_{1}+A)(y_{1}+A)+\ldots+(x_{n}+A)(y_{n}+A)=x_{1}y_{1}+\ldots+x_{n}y_{n}+A\). Then \(x-(x_{1}y_{1}+\ldots+x_{n}y_{n})\in A\) so \(x=x_{1}y_{1}+\ldots+x_{n}y_{n}+a\) for some \(a\in A\). Hence \(x\in\Phi(I)\Phi(J)+A=\Phi(I)*\Phi(J)\). Therefore \(\Phi(I)*\Phi(J)=\Phi(IJ)\) and \(\Phi\) is a homomorphism.
Notice that if \(K\in D_{A}\) then \(\Phi^{-1}(K)=K/A\).
Since \(S\) is a Dedekind domain, every nonzero proper ideal factors into a product of prime ideals. Hence for all \(I\in D,I\neq R\), \(\Phi(I)=P_{1}P_{2}\ldots P_{n}\) for some prime ideals \(P_{i}\) of \(S\). Then \(\Phi(I)=\Phi(I)+A=P_{1}\ldots P_{n}+A\) since \(A\subseteq\Phi(I)\). For each \(1\leq i\leq n\), \(A\subseteq\Phi(I)\subseteq P_{i}\) so \(P_{i}\in D_{A}\). Hence \(\Phi(I)=P_{1}\ldots P_{n}+A=P_{1}*\ldots*P_{n}=\Phi(X_{1})*\ldots*\Phi(X_{n})\) where \(X_{i}=\Phi^{-1}(P_{i})=P_{i}/A\) and so \(I=X_{1}\ldots X_{n}\). Note that this is not necessarily a unique factorisation, as for example (4) as an ideal of \({\mathbb{Z}}_{12}\) can be written as (2)(2) or as (2)(2)(2). It is however a factorisation into prime ideals.
**Lemma 4.2**: _The ideal \(I\) is a prime ideal of \(R\) if and only if \(\Phi(I)\) is a prime ideal of \(S\)._
**Proof.** Suppose \(\Phi(I)\) is a prime ideal of \(S\) and let \((x+A)(y+A)=xy+A\in I\). Then \(xy\in\Phi(I)\) and so without loss of generality \(x\in\Phi(I)\). Hence \(x+A\in I\) and so \(I\) is a prime ideal. Conversely, suppose \(I\) is a prime ideal of \(R\) and let \(xy\in\Phi(I)\). Then \(xy+A\in I\) so without loss of generality \(x+A\in I\) and hence \(x\in\Phi(I)\) so \(\Phi(I)\) is prime.
The following lemma shows that the factorisation of \(I\) into \(\Phi^{-1}(P_{1})\ldots\Phi^{-1}(P_{n})\) is a _minimal prime factorisation_, in the sense that any other prime factorisation of \(I\) must include each of these factors.
**Lemma 4.3**: _Let \(I\in D\) be such that \(\Phi(I)\) has a unique prime factorisation \(P_{1}\ldots P_{n}\). If \(X_{1}\ldots X_{m}\) is a prime factorisation of \(I\) then \(m\geq n\) and, up to reordering factors, \(X_{i}=\Phi^{-1}(P_{i})\) for \(i\in\{1,\ldots,n\}\)._
**Proof.** By definition,
\[\Phi(I)=\Phi(X_{1})*\ldots*\Phi(X_{m})=\Phi(X_{1})\ldots\Phi(X_{m})+A\]
so \(\Phi(X_{1})\ldots\Phi(X_{m})\subseteq\Phi(I)\) and hence \(\Phi(I)\) divides \(\Phi(X_{1})\ldots\Phi(X_{m})\) as \(S\) is a Dedekind domain. Then \(P_{1}\ldots P_{n}Q=\Phi(X_{1})\ldots\Phi(X_{m})\) for some ideal \(Q\) of \(S\) so, by uniqueness of prime factorisations in \(S\), we have \(m\geq n\) and, reordering if necessary, \(P_{i}=\Phi(X_{i})\) for each \(i\in\{1,\ldots,n\}\). Applying \(\Phi^{-1}\) to each equality then gives the desired result.
Suppose \(A\) has prime factorisation \(P_{1}^{e_{1}}\ldots P_{n}^{e_{n}}\) (\(e_{i}>0\)). By definition, any ideal \(\Phi(I)\in D_{A}\) has \(A\subseteq\Phi(I)\) and so \(\Phi(I)\) divides \(A\) and hence \(\Phi(I)=P_{1}^{f_{1}}\ldots P_{n}^{f_{n}}\) where \(0\leq f_{i}\leq e_{i}\). In particular, if \(P\) is a prime ideal of \(S\) then \(A\subseteq P\) if and only if \(P=P_{i}\) for some \(i\in\{1,\ldots,n\}\). Let
\[A_{i}=\Phi^{-1}(P_{i})\]
for each \(i\in\{1,\ldots,n\}\). Then \(A_{1},\ldots,A_{n}\) are precisely the prime ideals of \(R\) and any \(I\in D\) has minimal prime factorisation \(A_{1}^{f_{1}}\ldots A_{n}^{f_{n}}\), for some \(f_{i}\geq 0\). Notice that the primes \(A_{1},\ldots,A_{n}\) are unique with respect to this construction, by Lemma 4.3.
Note here that we adopt the convention \(P_{1}^{0},\ldots,P_{n}^{0}=S\) and \(A_{1}^{0},\ldots,A_{n}^{0}=R\), i.e. that the empty powers of primes are the identity elements of \(D_{A}\) and \(D\) respectively.
**Lemma 4.4**: _Let \(I\in D\). If \(I\) has prime factorisation \(A_{1}^{g_{1}}\ldots A_{n}^{g_{n}}\) then the minimal prime factorisation of \(I\) is given by \(A_{1}^{f_{1}}\ldots A_{n}^{f_{n}}\) where \(f_{i}=\min(e_{i},g_{i})\). Hence a prime factorisation is minimal if and only if \(0\leq g_{i}\leq e_{i}\) for all \(i\in\{1,\ldots,n\}\)._
**Proof.** Let \(I=A_{1}^{g_{1}}\ldots A_{n}^{g_{n}}\). Then
\[\Phi(I) =P_{1}^{g_{1}}*\ldots*P_{n}^{g_{n}}\] \[=P_{1}^{g_{1}}\ldots P_{n}^{g_{n}}+A\] \[=P_{1}^{g_{1}}\ldots P_{n}^{g_{n}}+P_{1}^{e_{1}}\ldots P_{n}^{e_{ n}}\] \[=P_{1}^{\min(g_{1},e_{1})}\ldots P_{n}^{\min(g_{n},e_{n})}\]
is the unique prime factorisation of \(\Phi(I)\) so \(A_{1}^{\min(g_{1},e_{1})}\ldots A_{n}^{\min(g_{n},e_{n})}\) is the minimal prime factorisation of \(I\).
**Corollary 4.5**: _Let \(I=A_{1}^{i_{1}}\ldots A_{n}^{i_{n}}\) and \(J=A_{1}^{j_{1}}\ldots A_{n}^{j_{n}}\) be minimal prime factorisations of \(I,J\in D\). The minimal prime factorisation of \(IJ\) is_
\[A_{1}^{\min(i_{1}+j_{1},e_{1})}\ldots A_{n}^{\min(i_{n}+j_{n},e_{n})}.\]
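In concrete terms, Corollary 4.5 says that products in \(D\) are computed by adding exponents and capping them at the \(e_{i}\). A small sketch of this rule (the example in \(\mathbb{Z}/(2^{3}3^{2})\) is ours) is given below.

```python
def multiply(I, J, e):
    """Minimal prime factorisation of IJ from those of I and J (Corollary 4.5).

    I, J, e are lists of exponents with respect to the primes A_1, ..., A_n,
    where e records the exponents e_1, ..., e_n appearing in A = P_1^e_1 ... P_n^e_n.
    """
    return [min(i + j, ek) for i, j, ek in zip(I, J, e)]

# In R = Z/(2^3 * 3^2) = Z_72:  (12) = A_1^2 A_2  and  (6) = A_1 A_2, and their
# product is A_1^3 A_2^2 = (0), as expected since 12 * 6 = 72 = 0 in Z_72.
print(multiply([2, 1], [1, 1], [3, 2]))   # [3, 2]
```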
We can now apply our methods from Section 2, and in particular Proposition 2.6, to find the semilattice structure of \(D\).
**Lemma 4.6**: _Let \(I\in D\) with minimal prime factorisation \(A_{1}^{f_{1}}\ldots A_{n}^{f_{n}}\). Then \(I\in E(D)\) if and only if \(f_{i}\in\{0,e_{i}\}\) for all \(i\in\{1,\ldots,n\}\)._
**Proof.** Let \(I\in D\) have minimal prime factorisation \(A_{1}^{f_{1}}\ldots A_{n}^{f_{n}}\) so \(I^{2}\) has minimal prime factorisation \(A_{1}^{\min(2f_{1},e_{1})}\ldots A_{n}^{\min(2f_{n},e_{n})}\). If \(f_{i}=0\) then \(\min(2f_{i},e_{i})=0=f_{i}\) and if \(f_{i}=e_{i}\) then \(\min(2f_{i},e_{i})=e_{i}=f_{i}\) so \(I^{2}=A_{1}^{f_{1}}\ldots A_{n}^{f_{n}}=I\).
Conversely, if \(I^{2}=I\) then \(\min(2f_{i},e_{i})=f_{i}\) for all \(i\in\{1,\ldots,n\}\). Then if \(f_{i}\leq e_{i}/2\) we have \(2f_{i}=f_{i}\) so \(f_{i}=0\) and if \(f_{i}>e_{i}/2\) we have \(f_{i}=e_{i}\). Hence \(f_{i}\in\{0,e_{i}\}\) for all \(i\in\{1,\ldots,n\}\).
Let \(N=\{1,\ldots,n\}\). The previous lemma shows that an idempotent \(e\) is entirely determined by which \(A_{i}\) have a non-zero power \(f_{i}\), and hence we have a bijection \(\Lambda:E(D)\to{\cal P}(N)\) given by \(\Lambda(e)=\{i\in N|f_{i}=e_{i}\}\). If \({\cal P}(N)\) is equipped with the operation of union of sets it then becomes a semilattice and \(\Lambda\) can easily be seen to be an order isomorphism.
Note that for every \(I\in D\) there exists \(e\in E(D)\) such that \(I\) has minimal prime factorisation \(\prod_{i\in\Lambda(e)}A_{i}^{f_{i}}\) where \(0<f_{i}\leq e_{i}\) for all \(i\in\Lambda(e)\). In fact \(I\in D_{e}\) if and only if its minimal prime factorisation can be written in this way. To see this, it is sufficient to observe that for any prime ideal \(A_{i}\) we have \(\varepsilon(A_{i})=A_{i}^{e_{i}}\) as \(\varepsilon\) is a homomorphism.
**Proposition 4.7**: _Let \(D_{e}\) be a subsemigroup of \(D\) for some \(e=\prod_{j\in\Lambda(e)}A_{j}^{e_{j}}\in E(D)\). Let \(I\in D_{e}\) with minimal prime factorisation \(\prod_{j\in\Lambda(e)}A_{j}^{f_{j}}\) for \(0<f_{j}\leq e_{j}\) and suppose that \(I\neq e\). Then for each \(i\geq 1\), \(I\in{D_{e}}^{i}\) if and only if \(\min\{f_{j}|f_{j}\neq e_{j}\}\geq i\)._
Note that \(\{f_{j}|f_{j}\neq e_{j}\}\) is non-empty since \(I\neq e\). Since \(e\) is idempotent, \(e\in{D_{e}}^{i}\) for all \(i\in{\mathbb{N}}\).
**Proof.** By definition, \(\prod_{j\in\Lambda(e)}A_{j}\) divides every element of \(D_{e}\) so \(\prod_{j\in\Lambda(e)}A_{j}^{i}\) divides every element of \({D_{e}}^{i}\). Then for every \(I\in{D_{e}}^{i}\) there exists a prime factorisation \(\prod_{j\in\Lambda(e)}A_{j}^{g_{j}}\) with \(g_{j}\geq i\). By Lemma 4.4 the minimal prime factorisation of \(I\), \(\prod_{j\in\Lambda(e)}A_{j}^{f_{j}}\), has \(f_{j}=\min(e_{j},g_{j})\) so we have \(f_{j}=g_{j}\geq i\) for every \(f_{j}\neq e_{j}\) and hence \(\min\{f_{j}|f_{j}\neq e_{j}\}\geq i\).
For the converse, suppose \(I\) has minimal prime factorisation \(\prod_{j\in\Lambda(e)}A_{j}^{f_{j}}\) such that \(\min\{f_{j}|f_{j}\neq e_{j}\}\geq i\). Then for each \(j\in\Lambda(e)\) either \(f_{j}=e_{j}\) or \(i\leq f_{j}<e_{j}\). Let \(g_{j}=\max(f_{j},i)\) and \(J=\prod_{j\in\Lambda(e)}A_{j}^{g_{j}}\). Then \(J=\left(\prod_{j\in\Lambda(e)}A_{j}\right)^{i-1}\left(\prod_{j\in\Lambda(e)}A_{j}^{g_{j}-(i-1)}\right)\) so, as \(g_{j}-(i-1)>0\), \(J\in{D_{e}}^{i}\). If \(i\leq f_{j}<e_{j}\) then \(g_{j}=f_{j}<e_{j}\). Otherwise, \(g_{j}\geq f_{j}=e_{j}\). In either case, \(\min(g_{j},e_{j})=f_{j}\) and hence \(I\) and \(J\) have the same minimal prime factorisation, so \(I=J\) and \(I\in{D_{e}}^{i}\).
**Corollary 4.8**: _The ideal \(I=\prod_{j\in\Lambda(e)}A_{j}^{f_{j}}\) with \(0<f_{j}\leq e_{j}\) lies in the \(i\)-th layer of \(D_{e}\), \({D_{e}}^{i}\setminus{D_{e}}^{i+1}\), if and only if \(\min\{f_{j}|f_{j}\neq e_{j}\}=i\)._
**Corollary 4.9**: _For \(e\in E(D)\), \({\rm Base}(D_{e})=\{e\}\) and the subsemigroup \(D_{e}\) is a stratified semigroup with zero._
We can now easily prove the main theorem.
**Theorem 4.10**: _Let \(S\) be a Dedekind domain, \(A\unlhd S\) be an ideal of \(S\) and let \(R=S/A\). The multiplicative semigroup of \(R\) is a semilattice \({\cal S}[E(D);R_{e}]\) of stratified extensions of groups._
**Proof.** If \(A=\{0\}\) then the result follows from Theorem 3.3, while if \(A=S\) the result is trivial. Henceforth, assume that \(\{0\}\neq A\neq S\).
That \(R\) is the given semilattice follows immediately from Theorem 2.7 and the observation that as every ideal of a quotient of a Dedekind domain is principal, the map \(\varepsilon\delta\) is a surjection.
Let \(e\in E(D)\) and consider \(\mbox{Base}(R_{e})\). By Lemma 1.2, \(\mbox{Base}(R_{e})\subseteq\delta^{-1}(\mbox{Base}(D_{e}))=\delta^{-1}(\{e\})\). Since every ideal of \(R\) is principal, there exists some \(x\in R\) such that \(\delta(x)=(x)=e\) and hence \(\mbox{Base}(R_{e})\subseteq\delta^{-1}(\{e\})=xU_{\overline{x}}\). By Theorem 2.5, \(xU_{\overline{x}}\) is a group and hence by Lemma 1.1, \(xU_{\overline{x}}\subseteq\mbox{Base}(R_{e})\) and so \(\mbox{Base}(R_{e})=xU_{\overline{x}}\) and \(R_{e}\) is a stratified extension of a group.
It is clear from Lemma 1.2 that \(R_{e}^{\ i}\subseteq\delta^{-1}(D_{e}^{\ i})\). In fact, we have equality.
**Proposition 4.11**: _Let \(e=\prod_{j\in\Lambda(e)}A_{j}^{e_{j}}\in E(D)\). Then \(R_{e}^{\ i}=\delta^{-1}(D_{e}^{\ i})\)._
**Proof.** It remains to show that \(\delta^{-1}(D_{e}^{\ i})\subseteq R_{e}^{\ i}\). Let \(x\in R_{e}\) be such that \(\delta(x)\in D_{e}^{\ i}\). Then \(\delta(x)\) has minimal prime factorisation \(\prod_{j\in\Lambda(e)}A_{j}^{f_{j}}\) where \(\min\{f_{j}|f_{j}\neq e_{j}\}\geq i\). Let \(g_{j}=\max(f_{j},i)\). Then \(\prod_{j\in\Lambda(e)}A_{j}^{g_{j}}\) is a prime factorisation of \(\delta(x)\) with \(g_{j}\geq i\) for every \(j\in\Lambda(e)\).
Since \(S\) is a Dedekind domain, every ideal of \(R\) is principal so there exists some \(a_{j}\in R\) such that \(\delta(a_{j})=(a_{j})=A_{j}\) for every \(j\in\Lambda(e)\). Let \(y=\prod_{j\in\Lambda(e)}a_{j}^{g_{j}}\). Clearly \(\delta(y)=\delta(x)\) and so \(x{\cal J}y\) and hence \(x=yu\) for some \(u\in R\). Then
\[x=\left(\prod_{j\in\Lambda(e)}a_{j}\right)^{i-1}\left(u\prod_{j\in\Lambda(e)} a_{j}^{g_{j}-(i-1)}\right).\]
Clearly \(\varepsilon\delta(\prod_{j\in\Lambda(e)}a_{j})=e\), so if \(\varepsilon\delta(u\prod_{j\in\Lambda(e)}a_{j}^{g_{j}-(i-1)})=e\) then \(x\in R_{e}^{\ i}\) as required. Suppose otherwise, so \(\varepsilon\delta(u\prod_{j\in\Lambda(e)}a_{j}^{g_{j}-(i-1)})=f\) for some \(f\in E(D)\) with \(f\neq e\). As each \(g_{j}-(i-1)>0\), \(\varepsilon\delta(\prod_{j\in\Lambda(e)}a_{j}^{g_{j}-(i-1)})=e\) and so \(e\) divides \(f\) and hence \(ef=f\). But then \(\varepsilon\delta(x)=e^{i-1}f=f\), a contradiction. Hence \(\varepsilon\delta(u\prod_{j\in\Lambda(e)}a_{j}^{g_{j}-(i-1)})=e\) and so \(x\in R_{e}^{\ i}\) and \(\delta^{-1}(D_{e}^{\ i})\subseteq R_{e}^{\ i}\).
**Corollary 4.12**: _Let \(x\in R_{e}\). Then \(x\) lies in the \(i\)-th layer of \(R_{e}\) if and only if \(\delta(x)\) lies in the \(i\)-th layer of \(D_{e}\)._
**Corollary 4.13**: _Let \(A\trianglelefteq S\) be a non-zero proper ideal with prime factorisation \(P_{1}^{e_{1}}\ldots P_{n}^{e_{n}}\) and \(A_{i}=P_{i}/A\). Then_
1. \(R_{e}\) _is a group if and only if_ \(e=\prod_{i\in\Lambda(e)}A_{i}\)_._
2. \(R\) _is a semilattice of groups if and only if_ \(e_{1}=\ldots=e_{n}=1\)_._
3. \(R\) _is a semilattice of groups if and only if_ \(R_{(0)}\) _is a group._
4. \(E(D)\) _is a chain if and only if_ \(n=1\)_, in which case it is the two element semilattice._
An interesting consequence of this construction is that \(S/A\) is a field if and only if \(A\) is prime.
**Proposition 4.14**: _Let \(R\) be a quotient of a Dedekind domain. Then \(R\) is a strong semilattice of semigroups if and only if it is a semilattice of groups._
**Proof.** It is well known (see, for example, [5, Theorem 4.2.1]) that a semilattice of groups is a strong semilattice. For the converse, suppose \(R\) is a strong semilattice of semigroups. For any \(e\in E(D)\) we have \(R\geq e\) and so there exists a morphism \(\phi_{R,e}:R_{R}\to R_{e}\) such that \(xy=\phi_{R,e}(x)y\) for any \(x\in R_{R}\) and \(y\in R_{e}\). Since \(1\in R_{R}\) we have \(x=1x=\phi_{R,e}(1)x\) for every \(x\in R_{e}\). Then \(x\in R_{e}{}^{2}\) so \(R_{e}{}^{2}=R_{e}\) and hence \(R_{e}\) is a group. As this holds for every \(e\in E(D)\), \(R\) is a semilattice of groups.
We can summarise the construction of the semilattice of semigroups with this short 'recipe'. First we note that we can reduce the amount of calculation required by making use of the following result.
**Proposition 4.15**: _Let \(S\) be a Dedekind domain, \(A\) an ideal of \(S\), and \(R=S/A\). For all \(x\in R\), \(xV_{x}=xU\) where \(U\) is the group of units of \(R\)._
**Proof.** Note that when \(A=S\) the result is trivial and when \(A=(0)\) we have \(V_{x}=U\) by cancellativity as \(R\) is a domain. We assume henceforth that \(A\) is a non-zero proper ideal.
It is readily apparent that \(U\subseteq V_{x}\) for all \(x\in R\) and hence \(xU\subseteq xV_{x}\). For the reverse inclusion, we note that it is well known that \(R\) is then a principal ideal ring and so, by [6], if \((a)=(b)\) then \(a=bu\) for some \(u\in U\). It is easy to see that if \(y\in xV_{x}\) then \((x)=(y)\) and so \(y\in xU\) as required.
Let \(S\) be a Dedekind domain and \(A\unlhd S\). If \(A=S\) then \(R=S/A\) is the trivial ring. If \(A=(0)\) then \(R\cong S\) and so by Theorem 3.3 we have a semilattice of two semigroups. One is \(R_{R}\), the group of units, while the other is \(R_{(0)}\), a stratified extension of the trivial group. In the latter case, the elements of layer \(i\) are precisely those which can be factorised as a product of \(i\) irreducible elements.
Otherwise, let \((0)\neq A\neq S\) be a proper non-zero ideal of \(S\) and let \(A=P_{1}^{e_{1}}\ldots P_{n}^{e_{n}}\) be the unique factorisation of \(A\) into a product of prime ideals of \(S\). Then the semilattice is order isomorphic to \({\cal P}(N)\) and each subset \(K\subseteq N\) is associated with an idempotent \(e\in E(D)\). The subsemigroup \(R_{e}\) is then a stratified extension of a group where the group is \(\mbox{Base}(R_{e})=\delta^{-1}(e)\).
To calculate \(R_{e}\) and \(\mbox{Base}(R_{e})\) in a practical setting, we proceed as follows. First, the two easy cases are when \(K=\emptyset\), in which case \(e=(1+A)\) and \(\mbox{Base}(R_{e})\) is the group of units of \(R\), while if \(K=N\) then \(e=(0+A)\) and the group consists of only the zero of \(R\). Suppose now that \(\emptyset\subset K\subset N\) and let \(A_{i}=\Phi^{-1}(P_{i})=P_{i}/A\). Then \(e=\prod_{i\in K}A_{i}^{e_{i}}\) and since \(R\) is a principal ideal ring, if \(e=(x+A)\) then \(\mbox{Base}(R_{e})=\{xv+A\mid v+A\in V_{x+A}\}\), and so by Proposition 4.15, \(\mbox{Base}(R_{e})=(x+A)R_{R}\). To determine the stratified structure of \(R_{e}\), note that if \(e_{i}=1\) for all \(i\in K\) then \(R_{e}\) is a group
and so there are no layers. Suppose instead that at least one of the \(e_{i}>1\). If, for a given subset \(K\) and a collection \(f_{i}\), \(i\in K\), with \(0<f_{i}\leq e_{i}\),

\[\prod_{i\in K}A_{i}^{f_{i}}=(y+A)\]

and \(j=\min\{f_{i}\mid f_{i}\neq e_{i},i\in K\}\), then \(\{yv+A\mid v+A\in V_{y+A}\}=(y+A)R_{R}\) is a subset of the \(j\)-th layer; moreover, the \(j\)-th layer is the union of all such subsets. Note that if \(A_{i}=(a_{i}+A)\) then \(y+A=\prod_{i\in K}(a_{i}^{f_{i}}+A)\).
## 5 Examples
In this short section we illustrate the above theory by considering a number of examples of Dedekind domains and examining the semilattice and stratified structure of the multiplicative semigroup of both the domain and of a typical quotient of the domain.
At the more trivial end of the spectrum, suppose that \(S=F\), a field. Then every non-zero element is a unit, so we have \(S_{(0)}=\{0\}\) and \(S_{S}=F^{\times}\). In other words, the multiplicative semigroup of a field is simply a group with zero as expected.
### The integers
As a more interesting example, let \(S=\mathbb{Z}\), the ring of integers. For any \(n\in\mathbb{Z}\), the sets \(nU_{\overline{n}}\) are (isomorphic to) \(\{n,-n\}\) and the units in \(\mathbb{Z}\) are of course \(\pm 1\) and so \(S_{S}\) is the two element group. We know from Theorem 3.3 that \(S_{(0)}\) is a stratified extension of the trivial group and the layered structure of \(S_{(0)}\) is then easy to establish. The first layer of \(S_{(0)}\) consists of every prime integer \(p\). The second layer contains all products \(pq\) of exactly \(2\) (not necessarily distinct) primes \(p\) and \(q\), and in general, layer \(n\) consists of all products of exactly \(n\) (not necessarily distinct) primes.
Given that \(\mathbb{Z}\) is a principal ideal domain, then all ideals of \(\mathbb{Z}\) are of the form \((n)\) for some \(n\in\mathbb{Z}\). If \(R=S/(n)\) then of course \(R=\mathbb{Z}_{n}\) the ring of integers modulo \(n\). To reduce pedantry we will assume that \(\mathbb{Z}_{n}=\{1,\ldots,n\}\). We know from Theorem 4.10 that \(R\) is a semilattice of stratified extensions of groups, \(\mathcal{S}[E(D);R_{e}]\) and that \(E(D)\cong\mathcal{P}(K)\) where \(K=\{1,\ldots,k\}\) and where \(n=p_{1}^{e_{1}}\ldots p_{k}^{e_{k}}\) is the prime factorisation of \(n\) in \(\mathbb{Z}\).
First, note that if \((e)\in E(D)\) then we can assume, without loss of generality, that \(e=\prod_{i\in I}p_{i}^{e_{i}}\in\mathbb{Z}_{n}\) where \(I=\Lambda((e))\in\mathcal{P}(K)\). The base of \(R_{e}\) is \(\mathrm{Base}(R_{e})=eU_{\overline{e}}\) and using Theorem 2.5 and Lemma 2.2, we deduce that \(\mathbb{Z}_{n}/(\overline{e})\cong\mathbb{Z}_{n/e}\) and that
\[eU_{\overline{e}}\cong U_{n/e}\]
where \(U_{n/e}\) is the group of units in \(\mathbb{Z}_{n/e}\). If \((e)\) is square-free then \(R_{e}\cong U_{n/e}\); otherwise \(R_{e}\) is a stratified extension of \(U_{n/e}\) with height \(m=\max\{e_{j}|j\in\Lambda(e)\}-1\). In this case, the structure of the individual layers of \(R_{e}\) is more complicated to describe in general, but essentially if \(x\) is in the \(i\)-th layer of \(R_{e}\), \(1\leq i\leq m\), then
\[x=\prod_{j\in\Lambda(e)}p_{j}^{g_{j}}u\]
where \(u\in U_{n}\) and \(0<g_{j}\leq e_{j}\) and \(\min\{g_{j}|g_{j}\neq e_{j}\}=i\). Note that \(\{g_{j}|g_{j}\neq e_{j}\}\neq\emptyset\) as otherwise \(x\in\operatorname{Base}(R_{e})\).
As an example, if \(n=12=2^{2}\times 3\), then
\[E(D)=\{(12),(4),(3),(1)\}\]
and we have four subsemigroups
\(R_{(12)}=\{6,12\}\) where \(\operatorname{Base}(R_{(12)})=\{12\}\).
\(R_{(4)}=\{2,4,8,10\}\) where \(\operatorname{Base}(R_{(4)})=\{4,8\}\) and \(\{2,10\}\) forms layer 1.
\(R_{(3)}=\{3,9\}\) which is a group.
\(R_{(1)}=\{1,5,7,11\}\) which is the group of units mod 12.
The semilattice structure is that of the four-element Boolean lattice \({\cal P}(\{1,2\})\): under the bijection \(\Lambda\), the idempotents \((1)\) and \((12)\) correspond to \(\emptyset\) and \(\{1,2\}\) respectively, while \((4)\) and \((3)\) correspond to the two singletons and are incomparable.
Notice that the semilattice will always be a finite Boolean algebra and the stratification structure is wholly dependent on the prime power factorisation of \(n\).
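This can be checked mechanically. The following is a minimal brute-force Python sketch (our own illustration; the names `decompose` and `prime_factorisation` are not from the text) that follows the recipe at the end of Section 4: it groups the elements of \(\mathbb{Z}_{n}\) by the idempotent ideal \(\varepsilon\delta(x)\) and, within each component, separates the base from the layers. Running it with \(n=12\) reproduces the four subsemigroups listed above.

```python
# Brute-force decomposition of the multiplicative semigroup of Z_n, following the recipe above.
# Each component is labelled by the integer generating the corresponding idempotent ideal.
from math import gcd
from collections import defaultdict

def prime_factorisation(m):
    """Trial-division factorisation of m, returned as a dict {prime: exponent}."""
    factors, p = {}, 2
    while p * p <= m:
        while m % p == 0:
            factors[p] = factors.get(p, 0) + 1
            m //= p
        p += 1
    if m > 1:
        factors[m] = factors.get(m, 0) + 1
    return factors

def decompose(n):
    n_factors = prime_factorisation(n)            # n = p_1^{e_1} ... p_k^{e_k}
    components = defaultdict(lambda: {"base": [], "layers": defaultdict(list)})
    for x in range(1, n + 1):                     # residues 1, ..., n (n plays the role of 0)
        g = gcd(x, n)                             # delta(x) = (g)
        support = [p for p in n_factors if g % p == 0]
        e = 1
        for p in support:                         # epsilon(delta(x)) = prod_{p | g} p^{e_p}
            e *= p ** n_factors[p]
        f = {p: prime_factorisation(g).get(p, 0) for p in support}
        non_full = [f[p] for p in support if f[p] != n_factors[p]]
        if not non_full:                          # every exponent is maximal: x lies in the base
            components[e]["base"].append(x)
        else:                                     # otherwise x lies in layer min{f_p : f_p != e_p}
            components[e]["layers"][min(non_full)].append(x)
    return components

for e, part in sorted(decompose(12).items()):
    print(f"R_({e}): base = {part['base']}, layers = {dict(part['layers'])}")
```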
### The \(p\)-adic integers
Let \(S\) be the \(p\)-adic integers. There are a number of ways to view \(p\)-adic numbers but we consider \(S\) to consist of formal sums
\[S=\left\{\sum_{i\geq 0}a_{i}p^{i}\mid 0\leq a_{i}\leq p-1\right\}\]
with arithmetic performed in the usual formal manner. For more detail we refer the reader to [3]. The expression \(\sum_{i\geq 0}a_{i}p^{i}\) is also known as the \(p\)-_adic expansion_ of the relevant number.
It is easy to demonstrate that the units in \(S\) are the elements where \(a_{0}\neq 0\) in the \(p\)-adic expansion and that non-zero non-unit elements have the form \(p^{k}u\) where \(u\) is a unit of \(S\) and \(k\in\mathbb{N}\). It is well-known that \(S\) forms a principal ideal domain and so from Theorem 3.3 we deduce that \(S\) is a (2-element) semilattice of stratified extensions of groups. \(S_{(1)}\) is the group of units and \(\operatorname{Base}(S_{(0)})=\{0\}\). It follows from the definition of \(S\) that the proper non-zero ideals are those of the form \((p^{k})\) for \(k\in\mathbb{N}\), and so clearly \(D_{(0)}\) is isomorphic to the infinite monogenic semigroup with zero. Since \(S\) is a principal ideal domain, it follows from Proposition 3.4 that \(S_{(0)}^{\phantom{(0)}i}=\delta^{-1}(D_{(0)}^{\phantom{(0)}i})\) for
all \(i\in\mathbb{N}\) and so the \(i\)-th layer of \(S_{(0)}\) consists of exactly the elements of the form \(p^{i}u\) where \(u\) is a unit.
Every non-zero proper ideal \(A\unlhd S\) has the form \((p^{k})=(p)^{k}\) for some \(k\in\mathbb{N}\). This means that \(S/A\) is isomorphic to the ring of integers modulo \(p^{k}\). Clearly \((p)\) is a prime ideal and so \(R=S/A\) is a 2-element semilattice of stratified extensions of groups, consisting of the group of units \(R_{(1+A)}\) and the semigroup \(R_{(0+A)}\). The latter is a stratified semigroup with zero and \(k-1\) non-zero layers. For each \(1\leq i\leq k-1\) the \(i\)-th layer consists of elements of the form \(p^{i}u+A\) where \(u\) is a unit of \(S\).
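As a concrete illustration, take \(p=2\) and \(k=3\), so that \(R\) may be identified with \(\mathbb{Z}_{8}\): the group of units is \(R_{(1+A)}=\{1,3,5,7\}\), while \(R_{(0+A)}=\{0,2,4,6\}\) has base \(\{0\}\), first layer \(\{2,6\}=2R_{(1+A)}\) and second layer \(\{4\}=4R_{(1+A)}\).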
### Rings of algebraic integers
We now consider rings consisting of algebraic integers and, as a specific example, we shall consider the ring \(S=\mathbb{Z}[\sqrt{-5}]\). It is well known that rings of this nature are Dedekind domains but are not always principal ideal domains. In fact, \(2+\sqrt{-5}\) is an example of an element which can easily be shown to be irreducible but not prime. If \(A\) is an ideal of \(\mathbb{Z}[\sqrt{-5}]\) define a 'norm' on \(S/A\) by \(N(z+A)=(z+A)(\overline{z}+A)=z\overline{z}+A\), where \(\overline{z}\) is the conjugate of \(z\). It is easy to check that \(N\) is multiplicative and that \(z+A\) is a unit in \(S/A\) if \(N(z+A)=\pm 1+A\).
From section 4, \(S\) is a 2-element semilattice of the group of units, \(S_{(1)}=\{1,-1\}\), and a stratified semigroup with 0, \(S_{(0)}\). Since
\[2+\sqrt{-5}=9\times(-2)+(-1+4\sqrt{-5})\times(-\sqrt{-5}),\]
\[(-1+4\sqrt{-5})=(2+\sqrt{-5})^{2}\mbox{ and }9=(2-\sqrt{-5})(2+\sqrt{-5}),\]
it follows that \((3,2+\sqrt{-5})^{2}=(9,-1+4\sqrt{-5})=(2+\sqrt{-5})\) and so although \(2+\sqrt{-5}\) is in the first layer of \(S_{(0)}\) (being irreducible), \((2+\sqrt{-5})\) is not in the first layer of \(D_{(0)}\). The layer structure of \(S_{(0)}\) is not so easy to determine, as clearly \(a+b\sqrt{-5}\) is in the \(i\)-th layer of \(S_{(0)}\) if and only if it can be written as a product of \(i\) irreducible elements.
However determining the structure of a quotient of \(S\) is slightly easier, as we need only factorise a single ideal of \(S\) into a product of prime ideals. As an illustrative example, let us consider
\[A=(10,5+5\sqrt{-5})=(2,1+\sqrt{-5})(5,\sqrt{-5})^{2}\]
and let \(R=S/A\). It is easy to show that \((2,1+\sqrt{-5})\) and \((5,\sqrt{-5})\) are both prime ideals of \(\mathbb{Z}[\sqrt{-5}]\). In fact
\[(2,1+\sqrt{-5})=\{a+b\sqrt{-5}\mid a\equiv b\mbox{ mod }2\}\mbox{ and }(5,\sqrt{-5})=\{5a+b\sqrt{-5}\mid a,b\in\mathbb{Z}\},\]
while
\[A=(10,5+5\sqrt{-5})=\{5a+5b\sqrt{-5}\mid a\equiv b\mbox{ mod }2\}.\]
It is easy to check that the ring \(R\) has cardinality 50. In what follows, we shall frequently simplify the notation by working modulo \(A\) and write the element \(a+b\sqrt{-5}+A\) of \(R\) as simply \(a+b\sqrt{-5}\). We shall also assume a particular set of residues by taking \(0\leq a\leq 9\) and \(0\leq b\leq 4\).
Note from the comments preceding Proposition 4.7 that \(|E(D)|=|{\cal P}(\{1,2\})|\) and it can then be easily verified that \((5,\sqrt{-5})^{2}/A=(5)\) and that \((2,1+\sqrt{-5})/A=(6)\) and so
\[E(D)=\{(0),(1),(5),(6)\}.\]
It follows that \(R_{(1)}\) and \(R_{(6)}\) are groups and \(R_{(0)}\) and \(R_{(5)}\) are stratified extensions of groups, each with a height of 1. For each \(e\in E(D)\), the group \({\rm Base}(R_{e})\) is equal to \(xU_{\overline{x}}\) where \((x)=e\) and hence isomorphic to \(U_{\overline{x}}\).
In practical terms, \(R_{(1)}=R_{R}\) is the group of units of \(R\), and using norms we can deduce that \(|R_{R}|=20\) and in fact
\[R_{R}=\{a+b\sqrt{-5}\mid a\not\equiv b\ {\rm mod}\ 2,a\not\equiv 0\ {\rm mod}\ 5\}.\]
For \(R_{(5)}\), it follows that \({\rm Base}(R_{(5)})=\{5v\mid v\in R_{R}\}=5R_{R}=\{5\}\). To find the elements in layer 1 of \(R_{(5)}\) we note that \((5,\sqrt{-5})/A=(\sqrt{-5})\) and so layer 1 is
\[\delta^{-1}((\sqrt{-5}))=\{\sqrt{-5}\,v\mid v\in R_{R}\}=\{\sqrt{-5},3\sqrt{-5},7\sqrt{-5},9\sqrt{-5}\},\]

where, in the chosen residues, \(7\sqrt{-5}=5+2\sqrt{-5}\) and \(9\sqrt{-5}=5+4\sqrt{-5}\).
For \(R_{(6)}\), it follows that
\[{\rm Base}(R_{(6)})=6R_{R}=\{a+b\sqrt{-5}\mid a\equiv b\ {\rm mod}\ 2,a\not \equiv 0\ {\rm mod}\ 5\}.\]
Note that \(|{\rm Base}(R_{(6)})|=20\) also.
Finally, layer 1 of \(R_{(0)}\) can be calculated in the same way as for \(R_{(5)}\), using the fact that \(\big((2,1+\sqrt{-5})/A\big)\big((5,\sqrt{-5})/A\big)=(5+\sqrt{-5})\). It then follows easily that the first layer of \(R_{(0)}\) is
\[(5+\sqrt{-5})R_{R}=\{2\sqrt{-5},4\sqrt{-5},5+\sqrt{-5},5+3\sqrt{-5}\}.\]
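The arithmetic above can also be verified by brute force. The following minimal Python sketch (our own illustration; the helper names are not from the text) stores the element \(a+b\sqrt{-5}+A\) as the pair \((a,b)\) in the chosen residues and recomputes the unit group and the sets displayed above.

```python
# Brute-force check of the example R = Z[sqrt(-5)]/A with A = (10, 5+5*sqrt(-5)).
# An element a + b*sqrt(-5) + A is stored as the pair (a, b) with 0 <= a <= 9, 0 <= b <= 4.

def reduce_mod_A(x, y):
    """Canonical residue of x + y*sqrt(-5) modulo A = {5a + 5b*sqrt(-5) : a = b (mod 2)}."""
    b = y % 5
    t = (y - b) // 5                                   # y = b + 5t
    a = x % 10 if t % 2 == 0 else (x - 5) % 10         # shift x by 5s with s = t (mod 2)
    return (a, b)

def mul(u, v):
    (a, b), (c, d) = u, v                              # (a+b*s)(c+d*s) = (ac-5bd) + (ad+bc)*s
    return reduce_mod_A(a * c - 5 * b * d, a * d + b * c)

R = [(a, b) for a in range(10) for b in range(5)]
units = [u for u in R if any(mul(u, v) == (1, 0) for v in R)]

print(len(R), len(units))                              # 50 elements, 20 units
print(sorted({mul((5, 0), u) for u in units}))         # Base(R_(5)) = {5}
print(sorted({mul((0, 1), u) for u in units}))         # layer 1 of R_(5): sqrt(-5)*R_R
print(sorted({mul((5, 1), u) for u in units}))         # layer 1 of R_(0): (5+sqrt(-5))*R_R
print(len({mul((6, 0), u) for u in units}))            # |Base(R_(6))| = 20
```

In the canonical residues the second printed set appears as \(\{\sqrt{-5},3\sqrt{-5},5+2\sqrt{-5},5+4\sqrt{-5}\}\), in agreement with the list above.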
### Integers revisited
For a final example we return to a less complicated ring in order to demonstrate a more complicated layer structure. Let \(S={\mathbb{Z}}\), \(A=(6000)=(2)^{4}(3)(5)^{3}\) and \(R=S/A\). Working modulo \(A\), let \(e\in E(D)\) be the ideal \((2000)=(2)^{4}(5)^{3}\). Then \(R_{e}\) is a stratified extension of a group with height 3. Applying our previous results, we see that \({\rm Base}(R_{e})=\delta^{-1}((2000))=\{2000u\mid u\in R_{R}\}=2000R_{R}\).
For the layers, note that layer 1 of \(D_{e}\) consists of \((2)(5)\), \((2)^{2}(5)\), \((2)^{3}(5)\), \((2)^{4}(5)\), \((2)(5)^{2}\) and \((2)(5)^{3}\). Layer 1 of \(R_{e}\) is hence the union of \(10R_{R}\), \(20R_{R}\), \(40R_{R}\), \(80R_{R}\), \(50R_{R}\) and \(250R_{R}\).
Proceeding in a similar fashion, layer 2 of \(R_{e}\) is the union of \(100R_{R}\), \(200R_{R}\), \(400R_{R}\) and \(500R_{R}\), while layer 3 is simply the set \(1000R_{R}\).
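As a sanity check, the brute-force sketch for \(\mathbb{Z}_{n}\) given at the end of the subsection on the integers reproduces this decomposition when run with \(n=6000\): the component labelled \(2000\) has base \(\{2000,4000\}=2000R_{R}\), and its layers \(1\), \(2\) and \(3\) agree with the unions displayed above.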
|
2310.16933 | Covariance Operator Estimation: Sparsity, Lengthscale, and Ensemble
Kalman Filters | This paper investigates covariance operator estimation via thresholding. For
Gaussian random fields with approximately sparse covariance operators, we
establish non-asymptotic bounds on the estimation error in terms of the
sparsity level of the covariance and the expected supremum of the field. We
prove that thresholded estimators enjoy an exponential improvement in sample
complexity compared with the standard sample covariance estimator if the field
has a small correlation lengthscale. As an application of the theory, we study
thresholded estimation of covariance operators within ensemble Kalman filters. | Omar Al-Ghattas, Jiaheng Chen, Daniel Sanz-Alonso, Nathan Waniorek | 2023-10-25T18:58:12Z | http://arxiv.org/abs/2310.16933v2 | # Covariance Operator Estimation: Sparsity, Lengthscale, and Ensemble Kalman Filters
###### Abstract
This paper investigates covariance operator estimation via thresholding. For Gaussian random fields with approximately sparse covariance operators, we establish non-asymptotic bounds on the estimation error in terms of the sparsity level of the covariance and the expected supremum of the field. We prove that thresholded estimators enjoy an exponential improvement in sample complexity compared with the standard sample covariance estimator if the field has a small correlation lengthscale. As an application of the theory, we study thresholded estimation of covariance operators within ensemble Kalman filters.
Covariance operator estimation; thresholding; small lengthscale regime; ensemble Kalman filters
## 1 Introduction
This paper studies thresholded estimation of the covariance operator of a Gaussian random field. Under a sparsity assumption on the covariance model, we bound the estimation error in terms of the sparsity level and the expected supremum of the field. Using this bound, we then analyze covariance operator estimation in the interesting regime where the correlation lengthscale is small, and show that the thresholded covariance estimator achieves an exponential improvement in sample complexity compared with the standard sample covariance estimator. As an application of the theory, we demonstrate the advantage of using thresholded covariance estimators within ensemble Kalman filters.
The first contribution of this paper is to lift the theory of covariance estimation from finite to infinite dimension. In the finite-dimensional setting, a rich body of work [3, 6, 13, 14, 15, 18, 23, 51, 53] shows that, exploiting various forms of sparsity, it is possible to consistently estimate the covariance matrix of a vector \(u\) with \(N\sim\log\bigl{(}\dim(u)\bigr{)}\) samples. The sparsity of the covariance matrix --along with the use of thresholded, tapered, or banded estimators that exploit this structure-- facilitates an exponential improvement in sample complexity relative to the unstructured case, where \(N\sim\dim(u)\) samples are needed [4, 29, 49]. In this work we investigate the setting in which \(u\) is an infinite-dimensional random field with an approximately sparse covariance model. Specifically, we generalize notions of approximate sparsity often employed in the finite-dimensional covariance estimation literature [7, 15]. We show that the statistical error of thresholded estimators can be bounded in terms of the expected supremum of the field and the sparsity level, the latter of which quantifies the rate of spatial decay of correlations of the random field. Our analysis not only lifts existing theory from finite to infinite dimension, but also provides non-asymptotic moment bounds not yet available in finite dimension.
The second contribution of this paper is to showcase the benefit of thresholding in the challenging regime where the correlation lengthscale of the field is small relative to the size of the physical domain. Fields with small correlation lengthscale are ubiquitous in applications. For instance, they arise naturally in climate science and numerical weather forecasting, where global forecasts need to account for the effect of local processes with a small correlation lengthscale, such as cloud formation or propagation of gravitational waves. We show that thresholded estimators achieve an exponential improvement in sample complexity: For a field with lengthscale \(\lambda\) in \(d\)-dimensional physical space, the standard sample covariance requires \(N\sim\lambda^{-d}\) samples, while thresholded estimators only require
\(N\sim\log(\lambda^{-d})\). Therefore, our theory suggests that the parameter \(\lambda^{-d}\) plays the same role in infinite dimension as \(\dim(u)\) in the classical finite-dimensional setting. To analyze thresholded estimators in the small lengthscale regime, we use our general non-asymptotic moment bounds and the sharp scaling of sparsity level and expected supremum with lengthscale.
The third contribution of this paper is to demonstrate the advantage of using thresholded covariance estimators within ensemble Kalman filters [24]. Our interest in covariance operator estimation was motivated by the widespread use of localization techniques within ensemble Kalman methods in inverse problems and data assimilation, see e.g. [19, 25, 30, 31, 47]. Many inverse problems in medical imaging and the geophysical sciences are most naturally formulated in function space [9, 10, 44]; likewise, data assimilation is primarily concerned with sequential estimation of spatial fields, e.g. temperature or precipitation [16, 34]. Theoretical insight for these applications calls for sparse covariance estimation theory in function space, which has not been the focus in the literature. Perhaps partly for this reason, the empirical success of localization techniques in ensemble Kalman methods is poorly understood, with few exceptions that study localization in finite dimension [3, 46]. The work [41] studies the behavior of ensemble Kalman methods under mesh discretization, but it does not consider localization. In this paper, we use our novel non-asymptotic covariance estimation theory to obtain a sufficient sample size to approximate an idealized mean-field ensemble Kalman filter using a localized ensemble Kalman update. In finite dimension, [3] studies the ensemble approximation of mean-field algorithms for inverse problems and [2] conducts a multi-step analysis of ensemble Kalman filters without localization.
The paper is organized as follows. We first state and discuss our three main results in the following section. Then, the next three sections contain the proof of these theorems, along with further auxiliary results of independent interest. We close with conclusions, discussion, and future directions.
**Notation** Given two positive sequences \(\{a_{n}\}\) and \(\{b_{n}\}\), the relation \(a_{n}\lesssim b_{n}\) denotes that \(a_{n}\leq cb_{n}\) for some constant \(c>0\). If the constant \(c\) depends on some quantity \(\tau\), then we write \(a\lesssim_{\tau}b\). If both \(a_{n}\lesssim b_{n}\) and \(b_{n}\lesssim a_{n}\) hold simultaneously, then we write \(a_{n}\asymp b_{n}\). For a finite-dimensional vector \(a\), \(|a|\) denotes its Euclidean norm. For an operator \(\mathcal{A}\), \(\|\mathcal{A}\|\) denotes its operator norm, \(\mathcal{A}^{*}\) its adjoint, and \(\operatorname{Tr}(\mathcal{A})\) its trace.
## 2 Main Results
This section states and discusses the main results of the paper. In Subsection 2.1 we analyze the thresholded sample covariance estimator in a general setting, and establish moment bounds in Theorem 2.2. In Subsection 2.2 we consider a small lengthscale regime, and show in Theorem 2.5 that the thresholded estimator significantly improves upon the standard sample covariance estimator. Finally, in Subsection 2.3 we apply our new covariance estimation theory to demonstrate the advantage of using thresholded covariance estimators within ensemble Kalman filters.
### 2.1 Thresholded Estimation of Covariance Operators
Let \(u,u_{1},u_{2},\ldots,u_{N}\) be _i.i.d._ centered almost surely continuous Gaussian random functions on \(D=[0,1]^{d}\) taking values in \(\mathbb{R}\) with covariance function (kernel) \(k:D\times D\to\mathbb{R}\) and covariance operator \(\mathcal{C}:L^{2}(D)\to L^{2}(D)\), so that, for \(x,x^{\prime}\in D\) and \(\psi\in L^{2}(D)\),
\[k(x,x^{\prime}):=\mathbb{E}\left[u(x)u(x^{\prime})\right],\qquad(\mathcal{C} \psi)(\cdot):=\int_{D}k\left(\cdot,x^{\prime}\right)\psi(x^{\prime})\,dx^{ \prime}.\]
The sample covariance function \(\widehat{k}(x,x^{\prime})\) and sample covariance operator \(\widehat{C}\) are defined analogously by
\[\widehat{k}(x,x^{\prime}):=\frac{1}{N}\sum_{n=1}^{N}u_{n}(x)u_{n}(x^{\prime}), \qquad(\widehat{C}\,\psi)(\cdot):=\int_{D}\widehat{k}(\cdot,x^{\prime})\psi(x^ {\prime})\,dx^{\prime}.\]
We introduce the thresholded sample covariance estimators with thresholding parameter \(\rho_{N}\)
\[\widehat{k}_{\rho_{N}}(x,x^{\prime}):=\widehat{k}(x,x^{\prime})\mathbf{1}_{ \{|\widehat{k}(x,x^{\prime})|\geq\rho_{N}\}}(x,x^{\prime}),\qquad(\widehat{C} _{\rho_{N}}\,\psi)(\cdot):=\int_{D}\widehat{k}_{\rho_{N}}(\cdot,x^{\prime}) \psi(x^{\prime})\,dx^{\prime},\]
where \(\mathbf{1}_{A}\) denotes the indicator function of the set \(A\). Our first main result, Theorem 2.2 below, relies on the following general assumption:
**Assumption 2.1**.: \(u,u_{1},u_{2},\ldots,u_{N}\) are _i.i.d._ centered almost surely continuous Gaussian random functions on \(D=[0,1]^{d}\) taking values in \(\mathbb{R}\) with covariance function \(k\). Moreover, the following holds:
* \(\sup_{x\in D}\mathbb{E}\left[u(x)^{2}\right]=1\).
* For some \(q\in(0,1)\) and \(R_{q}>0\), \(\sup_{x\in D}\left(\int_{D}|k(x,x^{\prime})|^{q}\,dx^{\prime}\right)^{\frac{1 }{q}}\leq R_{q}\).
Assumption 2.1 (i) normalizes the fields to have unit maximum marginal variance over \(D\), whereas Assumption 2.1 (ii) generalizes standard notions of sparsity in finite dimension --defined in terms of the row-wise maximum \(\ell_{q}\)-"norm" of the target covariance matrix [7, 15]-- to our infinite-dimensional setting. Our first main result establishes moment bounds on the deviation of the thresholded covariance estimator from its target in terms of the approximate sparsity level \(R_{q}\) and the expected supremum of the field, the latter of which determines the scaling of \(\rho_{N}\). We prove Theorem 2.2 and several auxiliary results of independent interest in Section 3.
**Theorem 2.2**.: _Suppose that Assumption 2.1 holds. Let \(1\leq c_{0}\leq\sqrt{N}\) and set_
\[\rho_{N} :=c_{0}\Bigg{[}\frac{1}{N}\vee\frac{1}{\sqrt{N}}\mathbb{E}\Big{[} \sup_{x\in D}u(x)\Big{]}\vee\frac{1}{N}\Big{(}\mathbb{E}\Big{[}\sup_{x\in D}u (x)\Big{]}\Big{)}^{2}\Bigg{]}, \tag{2.1}\] \[\widehat{\rho}_{N} :=c_{0}\Bigg{[}\frac{1}{N}\vee\frac{1}{\sqrt{N}}\Big{(}\frac{1}{ N}\sum_{n=1}^{N}\sup_{x\in D}u_{n}(x)\Big{)}\vee\frac{1}{N}\Big{(}\frac{1}{N} \sum_{n=1}^{N}\sup_{x\in D}u_{n}(x)\Big{)}^{2}\Bigg{]}. \tag{2.2}\]
_Then, for any \(p\geq 1\),_
\[\left[\mathbb{E}\|\widehat{C}_{\widehat{\rho}_{N}}-C\|^{p}\right]^{\frac{1}{p}}\lesssim_{p}R_{q}^{q}\rho_{N}^{1-q}+\rho_{N}e^{-\frac{c}{p}N\left(\rho_{N}\wedge\rho_{N}^{2}\right)}, \tag{2.3}\]
_where \(c\) is an absolute constant._
To the best of our knowledge, Theorem 2.2 is the first result in the literature to consider covariance operator estimation under the natural sparsity Assumption 2.1 (ii). Importantly, the thresholding parameter \(\widehat{\rho}_{N}\) is a function of the available data only, and so the estimator \(\widehat{C}_{\widehat{\rho}_{N}}\) in Theorem 2.2 can be readily implemented by the practitioner. In contrast to existing results in the finite-dimensional setting (see e.g. [7, 15]) which provide in-probability or moment bounds of order up to \(p=2\), Theorem 2.2 provides moment bounds for all \(p\geq 1\). The first of the two terms in (2.3) is reminiscent of existing results for covariance matrix estimation. For example, [51, Theorem 6.27] shows that with high-probability the
deviation of the thresholded covariance matrix estimator from its target is at most a constant multiple of \(\widetilde{R}_{q}\widetilde{\rho}_{N}^{1-q}\), where \(\widetilde{R}_{q}\) is a finite-dimensional analog of \(R_{q}\) and \(\widetilde{\rho}_{N}\) is a specified thresholding parameter. In contrast to our result, \(\widetilde{\rho}_{N}\) necessarily depends on the desired confidence level and so [51, Theorem 6.27] cannot be used to derive moment bounds of arbitrary order. The second term in (2.3) depends only on the expected supremum of the field, and, as we will show, is negligible in the small lengthscale regime. Consequently, Theorem 2.2 shows that the tuning parameter of the covariance operator estimator need not be tied to the confidence level. The proof technique therefore contributes to the literature on confidence parameter independent estimators; see e.g. [5] for an analogous finding that, contrary to standard practice [8], the Lasso tuning parameter need not depend on the confidence level.
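To make the data-driven threshold (2.2) concrete, the following is a minimal sketch (Python/NumPy, our own illustration) in which the sampled fields are evaluated on a grid, so that suprema over \(D\) are approximated by maxima over grid points; the default choice \(c_{0}=1\) is purely illustrative.

```python
# Minimal sketch of the empirical thresholding parameter rho_hat_N in (2.2).
# U is an (N, L) array whose n-th row contains the n-th sampled field on a grid of L points.
import numpy as np

def rho_hat(U, c0=1.0):
    N = U.shape[0]
    m = np.mean(U.max(axis=1))                    # (1/N) sum_n sup_x u_n(x), approximated on the grid
    return c0 * max(1.0 / N, m / np.sqrt(N), m ** 2 / N)
```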
**Remark 2.3**.: As in the finite-dimensional setting [15, 23], our thresholded estimator \(\widehat{C}_{\widetilde{\rho}_{N}}\) is positive semi-definite with high probability, but it is not guaranteed to be positive semi-definite. Fortunately, a simple modification ensures positive semi-definiteness while maintaining the same order of estimation error achieved by the original estimator. Notice that \(\widehat{C}_{\widetilde{\rho}_{N}}\) is a self-adjoint and Hilbert-Schmidt operator since \(\int_{D\times D}\big{|}\widetilde{k}_{\rho_{N}}(x,x^{\prime})\big{|}^{2}dxdx ^{\prime}<\infty\), see [32, Example 9.23]. Therefore, there is an orthonormal basis \(\{\varphi_{i}\}_{i=1}^{\infty}\) of \(L^{2}(D)\) consisting of eigenfunctions of \(\widehat{C}_{\widetilde{\rho}_{N}}\) such that \(\widehat{k}_{\rho_{N}}(x,x^{\prime})=\sum_{i=1}^{\infty}\widehat{\lambda}_{i }\varphi_{i}(x)\varphi_{i}(x^{\prime})\), where \(\widehat{\lambda}_{i}\) is the \(i\)-th eigenvalue of \(\widehat{C}_{\widetilde{\rho}_{N}}\). Let \(\widehat{\lambda}_{i}^{+}=\widehat{\lambda}_{i}\lor 0\) be the positive part of \(\widehat{\lambda}_{i}\) and define
\[\widehat{k}_{\rho_{N}}^{+}(x,x^{\prime}):=\sum_{i=1}^{\infty}\widehat{\lambda }_{i}^{+}\varphi_{i}(x)\varphi_{i}(x^{\prime}),\qquad(\widehat{C}_{\rho_{N}}^ {+}\psi)(\cdot):=\int_{D}\widehat{k}_{\rho_{N}}^{+}(\cdot,x^{\prime})\psi(x^{ \prime})\,dx^{\prime}.\]
Then, \(\widehat{C}_{\rho_{N}}^{+}\) is positive semi-definite and further
\[\|\widehat{C}_{\rho_{N}}^{+}-C\| \leq\|\widehat{C}_{\rho_{N}}^{+}-\widehat{C}_{\rho_{N}}\|+\| \widehat{C}_{\rho_{N}}-C\|\leq\max_{i:\widehat{\lambda}_{i}\leq 0}| \widehat{\lambda}_{i}|+\|\widehat{C}_{\rho_{N}}-C\|\] \[\leq\max_{i:\widehat{\lambda}_{i}\leq 0}|\widehat{\lambda}_{i}- \lambda_{i}|+\|\widehat{C}_{\rho_{N}}-C\|\leq 2\|\widehat{C}_{\rho_{N}}-C\|,\]
where \(\lambda_{i}\) is the \(i\)-th eigenvalue of \(\mathcal{C}\). Thus, \(\widehat{C}_{\rho_{N}}^{+}\) is positive semi-definite and attains the same estimation error as the original thresholded estimator \(\widehat{C}_{\rho_{N}}\). In light of this fact, we will henceforth assume that \(\widehat{C}_{\rho_{N}}\) is positive semi-definite wherever needed.
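After discretization, the modification described in Remark 2.3 amounts to clipping the negative eigenvalues of the thresholded covariance matrix. A minimal NumPy sketch of this step (the function name is ours) is the following.

```python
# Positive-part modification of a (symmetric) thresholded covariance matrix, as in Remark 2.3.
import numpy as np

def positive_part(C_thr):
    C_sym = 0.5 * (C_thr + C_thr.T)               # enforce symmetry before the eigendecomposition
    eigvals, eigvecs = np.linalg.eigh(C_sym)
    eigvals_plus = np.clip(eigvals, 0.0, None)    # lambda_i^+ = max(lambda_i, 0)
    return (eigvecs * eigvals_plus) @ eigvecs.T   # sum_i lambda_i^+ phi_i phi_i^T
```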
### 2.2 Small Lengthscale Regime
Our second main result, Theorem 2.5, shows that in the small lengthscale regime thresholded estimators enjoy an exponential improvement in sample complexity relative to the sample covariance estimator. To formalize this regime, we introduce the following additional assumption:
**Assumption 2.4**.: The following holds:
* The covariance function \(k\) is isotropic and positive, so that \(k(x,x^{\prime})=k(|x-x^{\prime}|)>0\). Moreover, \(k(r)\) is differentiable, strictly decreasing on \([0,\infty)\), and satisfies \(k(r)\to 0\) as \(r\to\infty\).
* The kernel \(k=k_{\lambda}\) depends on a correlation lengthscale parameter \(\lambda>0\) such that \(k_{\lambda}(\alpha r)=k_{\lambda\alpha^{-1}}(r)\) for any \(\alpha>0\), and \(k_{\lambda}(0)=k(0)\) is independent of \(\lambda\).
Assumption 2.4 (i) requires isotropy of the covariance kernel on \(D\); this assumption, while restrictive, is often invoked in applications [43, 52]. Assumption 2.4 (ii) makes explicit the dependence of the
kernel on the correlation lengthscale parameter \(\lambda\). As discussed later, the nonparametric Assumption 2.4 is satisfied by important parametric covariance functions, such as squared exponential and Matern models. The _small lengthscale regime_ holds whenever the underlying covariance function satisfies Assumption 2.4 and \(\lambda\) is sufficiently small. Theorem 2.5 compares the errors of sample and thresholded covariance estimators in the small lengthscale regime. The proof can be found in Section 4.
**Theorem 2.5**.: _Suppose that Assumptions 2.1 and 2.4 hold. Let \(c_{0}\gtrsim 1\) be an absolute constant and set_
\[\widehat{\rho}_{N}:=\frac{c_{0}}{\sqrt{N}}\left(\frac{1}{N}\sum_{n=1}^{N}\sup _{x\in D}u_{n}(x)\right).\]
_Further, assume that \(N\gtrsim\log(\lambda^{-d})\). Then, the sample covariance estimator and the thresholded covariance estimator satisfy that, for sufficiently small \(\lambda\),_
\[\frac{\mathbb{E}\|\widehat{C}-C\|}{\|C\|}\asymp c(d)\left(\sqrt{ \frac{\lambda^{-d}}{N}}\vee\frac{\lambda^{-d}}{N}\right), \tag{2.4}\] \[\frac{\mathbb{E}\|\widehat{C}_{\widehat{\rho}_{N}}-C\|}{\|C\|} \leq c(d,q)\left(\frac{\log(\lambda^{-d})}{N}\right)^{\frac{1-q}{2}}, \tag{2.5}\]
_where \(c(d)\) is a constant that only depends on \(d\), and \(c(d,q)\) is a constant that only depends on \(d\) and \(q\)._
**Remark 2.6**.: The term \(c(d,q)\) in (2.5) admits a form
\[c(d,q)\asymp\frac{\int_{0}^{\infty}k_{1}(r)^{q}r^{d-1}dr}{\int_{0}^{\infty}k_{ 1}(r)r^{d-1}dr},\]
where \(k_{1}(r)\) is the kernel function with correlation lengthscale parameter \(\lambda=1\) in Assumption 2.4.
Theorem 2.5 shows that, for sufficiently small \(\lambda\), we need \(N\gtrsim\lambda^{-d}\) samples to control the relative error of the sample covariance estimator, while \(N\gtrsim\log(\lambda^{-d})\) samples suffice to control the relative error of the thresholded estimator. The error bound in (2.5) is reminiscent of the convergence rate of thresholded estimators for \(\ell_{q}\)-sparse covariance matrix estimation [7, 15], \(s_{0}\big{(}\frac{\log\dim(u)}{N}\big{)}^{(1-q)/2}\), where \(s_{0}\) characterizes the sparsity level. Therefore, Theorem 2.5 indicates that, in our infinite-dimensional setting, the parameter \(\lambda^{-d}\) plays an analogous role to \(\dim(u)\) and \(c(d,q)\) plays an analogous role to the sparsity level \(s_{0}\). However, we remark that the estimation error in Theorem 2.5 is _relative error_, whereas in the finite-dimensional covariance matrix estimation literature [7, 12, 15], the estimation error is often _absolute error_. While in the finite-dimensional setting the sparsity parameter \(s_{0}\) may increase with \(\dim(u)\), the constant \(c(d,q)\) in our bound (2.5) is independent of the lengthscale parameter \(\lambda\). Moreover, inspired by the minimax optimality of thresholded estimators for \(\ell_{q}\)-sparse covariance matrix estimation [15], we conjecture that the convergence rate (2.5) is also minimax optimal, and we intend to investigate this question in future work.
The bound (2.4) for the sample covariance estimator relies on the seminal work [35], which shows that, for any sample size \(N\),
\[\frac{\mathbb{E}\|\widehat{C}-C\|}{\|C\|}\asymp\sqrt{\frac{r(C)}{N}}\vee\frac {r(C)}{N},\qquad r(C):=\frac{\mathrm{Tr}(C)}{\|C\|}. \tag{2.6}\]
Consequently, (2.4) follows by a sharp characterization of the operator norm and the trace of \(\mathcal{C}\) in terms of \(\lambda\). In contrast, the bound (2.5) for the thresholded estimator relies on our new Theorem 2.2, and requires an analogous characterization of the thresholding parameter \(\rho_{N}\) and approximate sparsity level \(R_{q}\) in terms of \(\lambda\).
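To illustrate the role of the effective rank, note that for an isotropic kernel satisfying Assumptions 2.1 and 2.4 the trace \(\operatorname{Tr}(\mathcal{C})=\int_{D}k(x,x)\,dx\) does not depend on \(\lambda\), whereas \(\|\mathcal{C}\|\leq\sup_{x\in D}\int_{D}|k_{\lambda}(x,x^{\prime})|\,dx^{\prime}\), which is of order \(\lambda^{d}\) for small \(\lambda\); this is the heuristic reason why \(r(\mathcal{C})\) scales like \(\lambda^{-d}\) in (2.4).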
In the remainder of this subsection, we illustrate Theorem 2.5 with a simple numerical experiment where we consider the estimation of covariance operators for squared exponential (SE) and Matern (Ma) models on the unit interval at small lengthscales. We emphasize that our theory is developed under mild nonparametric assumptions on the covariance kernel as outlined in Assumption 2.4; however, for simplicity here we focus on two important parametric models. For \(x,x^{\prime}\in D\), define the corresponding covariance functions
\[k^{\text{SE}}_{\lambda}(x,x^{\prime}) :=\exp\left(-\frac{|x-x^{\prime}|^{2}}{2\lambda^{2}}\right), \tag{2.7}\] \[k^{\text{Ma}}_{\lambda}(x,x^{\prime}) :=\frac{2^{1-\nu}}{\Gamma(\nu)}\left(\frac{\sqrt{2\nu}}{\lambda} |x-x^{\prime}|\right)^{\nu}K_{\nu}\left(\frac{\sqrt{2\nu}}{\lambda}|x-x^{ \prime}|\right), \tag{2.8}\]
where \(\Gamma\) denotes the Gamma function and \(K_{\nu}\) denotes the modified Bessel function of the second kind. In both cases, the parameter \(\lambda\) is interpreted as the correlation lengthscale of the field and Assumption 2.4 is satisfied. Moreover, Assumption 2.1 is satisfied by the squared exponential model, and it is satisfied by the Matern model provided that the smoothness parameter \(\nu\) satisfies \(\nu>(\frac{d-1}{2}\vee\frac{1}{2})\). We refer to [42, Lemma 4.2] for the almost sure continuity of random samples and to [39, Appendix 3, Lemma 11] for the Holder continuity of the Matern covariance function \(k^{\text{Ma}}(r)\). For the Matern model, we take the smoothness parameter to be \(\nu=3/2\) in our experiments.
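For reference, the two covariance functions can be evaluated directly; the sketch below (Python/NumPy, our own illustration) uses the standard closed form of the Matern kernel at \(\nu=3/2\).

```python
# Squared exponential kernel (2.7) and Matern kernel (2.8) with nu = 3/2, as functions of r = |x - x'|.
import numpy as np

def k_se(r, lam):
    return np.exp(-r ** 2 / (2 * lam ** 2))

def k_matern32(r, lam):
    s = np.sqrt(3.0) * np.abs(r) / lam            # sqrt(2*nu)/lam * r with nu = 3/2
    return (1.0 + s) * np.exp(-s)
```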
To ensure numerical tractability, we discretize the domain \(D=[0,1]\) using a fine mesh of \(L=1250\) uniformly spaced grid points. We consider a total of \(30\) lengthscales arranged uniformly in log-space and ranging from \(10^{-3}\) to \(10^{-0.1}\). For each lengthscale \(\lambda\), with corresponding covariance operator \(\mathcal{C}\), the discretized covariance operators are given by the \(L\times L\) covariance matrices
\[\mathcal{C}^{ij}:=k(x_{i},x_{j}),\qquad 1\leq i,j\leq L,\]
and we sample \(N=5\log(\lambda^{-1})\) realizations of a Gaussian process on the mesh, denoted \(u_{1},\ldots u_{N}\sim\mathcal{N}(0,C)\). We then compute the empirical and thresholded sample covariance matrices
\[\widehat{\mathcal{C}}^{ij}:=\frac{1}{N}\sum_{n=1}^{N}u_{n}(x_{i})u_{n}(x_{j}), \qquad\widehat{\mathcal{C}}^{ij}_{\widehat{\rho}_{N}}:=\widehat{\mathcal{C}} ^{ij}\mathbf{1}_{\{|\widehat{\mathcal{C}}^{ij}|\geq\widehat{\rho}_{N}\}}, \qquad 1\leq i,j\leq L,\]
scaling the thresholding level \(\widehat{\rho}_{N}\) as described in Theorem 2.2.
To quantify the performance of each of the estimators, we compute their relative errors
\[\varepsilon:=\frac{\|\widehat{\mathcal{C}}-\mathcal{C}\|}{\|\mathcal{C}\|}, \qquad\varepsilon_{\widehat{\rho}_{N}}:=\frac{\|\widehat{\mathcal{C}}_{ \widehat{\rho}_{N}}-\mathcal{C}\|}{\|\mathcal{C}\|}.\]
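A minimal version of this experiment can be sketched as follows (Python/NumPy, our own illustration, shown for the squared exponential kernel and a single lengthscale; the choice \(c_{0}=1\) in the threshold and the small jitter added before sampling are illustrative).

```python
# One run of the experiment: sample N = 5*log(1/lambda) fields on a grid, then compare the
# relative operator-norm errors of the sample and thresholded covariance matrices.
import numpy as np

rng = np.random.default_rng(0)
L, lam = 1250, 1e-2                                           # grid size and correlation lengthscale
x = np.linspace(0.0, 1.0, L)
C = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * lam ** 2))  # squared exponential covariance matrix

N = int(5 * np.log(1.0 / lam))
U = rng.multivariate_normal(np.zeros(L), C + 1e-8 * np.eye(L), size=N)   # jitter for numerical stability

C_hat = U.T @ U / N                                           # sample covariance
rho_hat = 1.0 / np.sqrt(N) * np.mean(U.max(axis=1))           # threshold as in Theorem 2.5, c0 = 1
C_thr = np.where(np.abs(C_hat) >= rho_hat, C_hat, 0.0)        # thresholded covariance

op = lambda A: np.linalg.norm(A, 2)                           # operator (spectral) norm
print("relative error, sample     :", op(C_hat - C) / op(C))
print("relative error, thresholded:", op(C_thr - C) / op(C))
```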
The experiment is repeated a total of \(100\) times for each lengthscale. In Figure 1, we plot average relative errors as well as \(95\%\) confidence intervals over the \(100\) trials for both squared exponential and Matern models, along with the sample size for each lengthscale setting. Our theoretical results are clearly illustrated: taking only \(N\asymp\log(\lambda^{-1})\) samples, the relative error in the thresholded covariance operator remains constant as the lengthscale decreases, whereas the relative error in the sample covariance operator diverges. Notice that Figure 1 also shows that thresholding can increase the relative error for fields with large correlation lengthscale.
### 2.3 Application in Ensemble Kalman Filters
Nonlinear filtering is concerned with online estimation of the state of a dynamical system from partial and noisy observations. Filtering algorithms blend the dynamics and observations by sequentially solving inverse problems of the form
\[y=\mathcal{A}u+\eta, \tag{2.9}\]
where \(y\in\mathbb{R}^{d_{y}}\) denotes the observation, \(u\in L^{2}(D)\) denotes the state, \(\mathcal{A}:L^{2}(D)\rightarrow\mathbb{R}^{d_{y}}\) is a linear observation operator, and \(\eta\sim\mathcal{N}(0,\Gamma)\) is the observation error with positive definite covariance matrix \(\Gamma\). In Bayesian filtering [40], the model dynamics define a prior or _forecast_ distribution on the state, which is combined with the data likelihood implied by the observation model (2.9) to obtain a posterior or _analysis_ distribution.
Ensemble Kalman filters (EnKFs) rely on an ensemble of particles \(\{u_{n}\}_{n=1}^{N}\) to represent the forecast distribution [24]. Taking as input a forecast ensemble \(\{u_{n}\}_{n=1}^{N}\overset{\mathrm{i.i.d.}}{\sim}\mathcal{N}(0,C)\) and observed data \(y\) generated according to (2.9), EnKFs produce an analysis ensemble \(\{v_{n}\}_{n=1}^{N}\). Each analysis particle \(v_{n}\) is obtained by nudging a forecast particle \(u_{n}\) towards the observed data \(y\). The amount of nudging is controlled by a _Kalman gain_ operator to be estimated using the first two moments of the forecast ensemble. In this subsection we show that thresholded covariance operator estimators within the EnKF analysis step can dramatically reduce the ensemble size required to approximate an idealized _mean-field_ EnKF that uses the population moments of the forecast distribution.
Define the mean-field EnKF analysis update by
\[\upsilon_{n}^{\star}:=u_{n}+\mathcal{K}(C)\big{(}y-\mathcal{A}u_{n}-\eta_{n} \big{)},\qquad 1\leq n\leq N, \tag{2.10}\]
where \(\{\eta_{n}\}_{n=1}^{N}\overset{\mathrm{i.i.d.}}{\sim}\mathcal{N}(0,\Gamma)\) and
\[\mathcal{K}(C):=C\mathcal{A}^{*}(\mathcal{A}C\mathcal{A}^{*}+\Gamma)^{-1} \tag{2.11}\]
Figure 1: Plots of the average relative errors and 95% confidence intervals achieved by the sample (\(\varepsilon\), dashed blue) and thresholded (\(\varepsilon_{\widehat{\rho}_{N}}\), solid red) covariance estimators based on sample size (\(N\), dotted green) for the squared exponential kernel (left) and Matérn kernel (right) over 100 trials.
denotes the Kalman gain. Practical algorithms do not have access to the forecast distribution, and rely instead on the forecast ensemble to estimate both \(\mathcal{C}\) and \(\mathcal{K}\). We will investigate two popular analysis steps, given by
\[\upsilon_{n} :=u_{n}+\mathcal{K}(\widehat{\mathcal{C}})\big{(}y-\mathcal{A}u_{n }-\eta_{n}\big{)}, 1\leq n\leq N, \tag{2.12}\] \[\upsilon_{n}^{\rho} :=u_{n}+\mathcal{K}(\widehat{\mathcal{C}}_{\rho_{N}})\big{(}y- \mathcal{A}u_{n}-\eta_{n}\big{)}, 1\leq n\leq N. \tag{2.13}\]
The analysis step in (2.12) is known as the _perturbed observation_ or _stochastic_ EnKF [11]. For simplicity of exposition, we will assume here that when updating \(u_{n}\), this particle is not included in the sample covariance \(\widehat{\mathcal{C}}\) used to define the Kalman gain. This slight modification of the sample covariance will facilitate a cleaner statement and proof of our main result, Theorem 2.7, without altering the qualitative behavior of the algorithm. The analysis step in (2.13) is based on a thresholded covariance operator estimator. Again, we assume that the thresholded estimator \(\widehat{\mathcal{C}}_{\rho_{N}}\) is defined without using the particle \(u_{n}\). The following result is a direct consequence of our theory on covariance operator estimation in the small lengthscale regime. The proof can be found in Section 5.
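A minimal discretized sketch of the two analysis steps is given below (Python/NumPy, our own illustration). The \(d_{y}\times L\) matrix `A_obs`, the noise covariance `Gamma`, the data `y`, and the forecast ensemble `U` (of shape \(N\times L\)) are assumed given, quadrature weights from the \(L^{2}(D)\) inner product are absorbed into `A_obs`, and, unlike in the statement above, the sample covariance is formed from all particles.

```python
# Stochastic EnKF analysis step (2.12) and its thresholded (localized) variant (2.13), discretized.
import numpy as np

def kalman_gain(C, A_obs, Gamma):
    """K(C) = C A* (A C A* + Gamma)^{-1} with all operators represented as matrices."""
    CAt = C @ A_obs.T
    return CAt @ np.linalg.inv(A_obs @ CAt + Gamma)

def enkf_analysis(U, y, A_obs, Gamma, rho=None, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    N = U.shape[0]
    C_hat = U.T @ U / N                                   # sample covariance (forecast has mean zero)
    if rho is not None:                                   # thresholding acts as localization, as in (2.13)
        C_hat = np.where(np.abs(C_hat) >= rho, C_hat, 0.0)
    K = kalman_gain(C_hat, A_obs, Gamma)                  # shape (L, d_y)
    eta = rng.multivariate_normal(np.zeros(len(y)), Gamma, size=N)
    innovations = y[None, :] - U @ A_obs.T - eta          # rows: y - A u_n - eta_n
    return U + innovations @ K.T                          # v_n = u_n + K(C_hat)(y - A u_n - eta_n)
```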
**Theorem 2.7** (Approximation of Mean-Field EnKF).: _Suppose that Assumptions 2.1 and 2.4 hold. Let \(y\) be generated according to (2.9) with bounded observation operator \(\mathcal{A}:L^{2}(D)\to\mathbb{R}^{d_{y}}\). Let \(\upsilon_{n}^{\star}\) be the mean-field EnKF update in (2.10), and let \(\upsilon_{n}\) and \(\upsilon_{n}^{\rho}\) be the EnKF and localized EnKF updates in (2.12) and (2.13). Let \(c_{0}\gtrsim 1\) be an absolute constant and set_
\[\rho_{N}\asymp\frac{c_{0}}{\sqrt{N}}\Big{(}\frac{1}{N}\sum_{n=1}^{N}\sup_{x \in D}u_{n}(x)\Big{)}.\]
_Then,_
\[\mathbb{E}\big{[}|\upsilon_{n}-\upsilon_{n}^{\star}|\,|\,u_{n}, \eta_{n}\big{]}\lesssim c\left[c(d)\left(\sqrt{\frac{\lambda^{-d}}{N}}\lor \frac{\lambda^{-d}}{N}\right)\right],\] \[\mathbb{E}\big{[}|\upsilon_{n}^{\rho}-\upsilon_{n}^{\star}|\,|\,u_ {n},\eta_{n}\big{]}\lesssim c\left[c(d,q)\left(\frac{\log(\lambda^{-d})}{N} \right)^{\frac{1-q}{2}}\right],\]
_where \(c=\|\mathcal{A}\|\,\|\Gamma^{-1}\|\,\|C\|\,|y-\mathcal{A}u_{n}-\eta_{n}|\)._
## 3 Thresholded Estimation of Covariance Operators
This section studies thresholded estimation of covariance operators in the general setting of Assumption 2.1. In Subsection 3.1 we establish concentration bounds for the empirical thresholding parameter \(\widehat{\rho}_{N}\) defined in (2.2). In Subsection 3.2 we show uniform error bounds on the sample covariance function estimator \(\widehat{k}(x,x^{\prime})\). These results are used in Subsection 3.3 to prove our first main result, Theorem 2.2.
### 3.1 Concentration of the Empirical Thresholding Parameter
The following auxiliary result can be found in [48, Lemma 6.15].
**Lemma 3.1**.: _Under Assumption 2.1 (i), it holds with probability at least \(1-2e^{-t}\) that_
\[\left|\frac{1}{N}\sum_{n=1}^{N}\sup_{x\in D}u_{n}(x)-\mathbb{E}\left[\sup_{x\in D }u(x)\right]\right|\leq\sqrt{\frac{2t}{N}}.\]
Proof.: By Gaussian concentration, \(\sup_{x\in D}u(x)\) is \(\sup_{x\in D}\mathrm{Var}[u(x)]\)-sub-Gaussian. Since under Assumption 2.1 (i), \(\sup_{x\in D}\mathrm{Var}[u(x)]=1\), a Chernoff bound argument gives the result.
The next result establishes moment and concentration bounds for the estimator \(\widehat{\rho}_{N}\) of the thresholding parameter \(\rho_{N}\). The proof relies heavily on Lemma 3.1.
**Lemma 3.2**.: _Under the setting of Theorem 2.2, it holds that_
* _For any_ \(p\geq 1\)_,_ \(\mathbb{E}\big{[}\,\widehat{\rho}_{N}^{p}\big{]}\lesssim_{p}\rho_{N}^{p}\)_._
* _For any_ \(t\in(0,1)\)_,_ \[\mathbb{P}\big{[}\,\widehat{\rho}_{N}<t\rho_{N}\big{]} \leq 2\,e^{-\frac{1}{2}(1-\sqrt{t})^{2}N(\mathbb{E}[\sup_{x\in D }u(x)])^{2}}\mathbf{1}\big{\{}\mathbb{E}[\sup_{x\in D}u(x)]\geq 1/\sqrt{N}\big{\}}\] (3.1) \[\leq 2\,e^{-\frac{1}{2}(1-\sqrt{t})^{2}N(\rho_{N}\wedge\rho_{N}^{ 2})}\,.\] (3.2)
Proof.: We first prove (A). Without loss of generality, we assume \(c_{0}=1\) in the definition of \(\widehat{\rho}_{N}\) and \(\rho_{N}\) in Theorem 2.2. Let \(t>0\) and define \(\mathcal{E}_{t}\) to be the event on which \(\big{|}\frac{1}{N}\sum_{n=1}^{N}\sup_{x\in D}u_{n}(x)-\mathbb{E}\big{[}\sup_{ x\in D}u(x)\big{]}\big{|}\leq t\). It holds on \(\mathcal{E}_{t}\) that
\[\widehat{\rho}_{N} \leq\frac{1}{N}\vee\frac{\mathbb{E}[\sup_{x\in D}u(x)]+t}{\sqrt{ N}}\vee\frac{(\mathbb{E}[\sup_{x\in D}u(x)]+t)^{2}}{N}\] \[\leq\frac{1}{N}\vee\frac{2\mathbb{E}[\sup_{x\in D}u(x)]}{\sqrt{ N}}\vee\frac{2t}{\sqrt{N}}\vee\frac{4(\mathbb{E}[\sup_{x\in D}u(x)])^{2}}{N} \vee\frac{4t^{2}}{N}\] \[\leq 4\rho_{N}\vee\frac{2t}{\sqrt{N}}\vee\frac{4t^{2}}{N},\]
and \(\mathbb{P}(\widehat{\rho}_{N}\leq 4\rho_{N}\vee\frac{2t}{\sqrt{N}}\vee\frac{4t^ {2}}{N})\geq\mathbb{P}(\mathcal{E}_{t})\geq 1-2e^{-Nt^{2}/2}\) by Lemma 3.1. It follows then that
\[\mathbb{E}\big{[}\,\widehat{\rho}_{N}^{p}\big{]} =p\int_{0}^{\infty}t^{p-1}\,\mathbb{P}\big{[}\,\widehat{\rho}_{N}\geq t\big{]}dt=p\int_{0}^{4\rho_{N}}t^{p-1}\,\mathbb{P}\big{[}\,\widehat{\rho}_{N}\geq t\big{]}dt+p\int_{4\rho_{N}}^{\infty}t^{p-1}\,\mathbb{P}\big{[}\,\widehat{\rho}_{N}\geq t\big{]}dt\] \[\leq(4\rho_{N})^{p}+2p\int_{4\rho_{N}}^{\infty}t^{p-1}\,e^{-\frac{N}{2}\min\{\frac{Nt^{2}}{4},\frac{Nt}{4}\}}dt\lesssim_{p}\rho_{N}^{p}+\frac{1}{N^{p}}\lesssim_{p}\rho_{N}^{p}.\]
We next show (B). To prove (3.1), we can assume \(c_{0}=1\) without loss of generality. Notice that
\[\mathbb{P}[\widehat{\rho}_{N}<t\rho_{N}]=\mathbb{P}\Bigg{[}\Big{(}\frac{1}{N}<t\rho_{N}\Big{)}\bigcap\,\Big{(}\frac{1}{\sqrt{N}}\Big{(}\frac{1}{N}\sum_{n=1}^{N}\sup_{x\in D}u_{n}(x)\Big{)}<t\rho_{N}\Big{)}\bigcap\,\Big{(}\frac{1}{N}\Big{(}\frac{1}{N}\sum_{n=1}^{N}\sup_{x\in D}u_{n}(x)\Big{)}^{2}<t\rho_{N}\Big{)}\Bigg{]}\]
\[=1-\mathbb{P}\Bigg{[}\left(\frac{1}{N}\geq t\rho_{N}\right)\bigcup\Big{(} \frac{1}{\sqrt{N}}\Big{(}\frac{1}{N}\sum_{n=1}^{N}\sup_{x\in D}u_{n}(x)\Big{)} \geq t\rho_{N}\Big{)}\bigcup\,\Big{(}\frac{1}{N}\Big{(}\frac{1}{N}\sum_{n=1}^{ N}\sup_{x\in D}u_{n}(x)\Big{)}^{2}\geq t\rho_{N}\Big{)}\Bigg{]}.\]
We consider three cases.
_Case 1:_ If \(\mathbb{E}[\sup_{x\in D}u(x)]<\frac{1}{\sqrt{N}}\), then \(\rho_{N}=\frac{1}{N}\) and \(\mathbb{P}[\widehat{\rho}_{N}<t\rho_{N}]\leq 1-\mathbb{P}\left[\frac{1}{N}\geq t \rho_{N}\right]=0\).
_Case 2:_ If \(\frac{1}{\sqrt{N}}\leq\mathbb{E}[\sup_{x\in D}u(x)]\leq\sqrt{N}\), then \(\rho_{N}=\frac{1}{\sqrt{N}}\mathbb{E}[\sup_{x\in D}u(x)]\) and
\[\mathbb{P}[\widehat{\rho}_{N}<t\rho_{N}] \leq 1-\mathbb{P}\left[\left(\frac{1}{\sqrt{N}}\Big{(}\frac{1}{N} \sum_{n=1}^{N}\sup_{x\in D}u_{n}(x)\Big{)}\geq t\rho_{N}\right)\right]=1- \mathbb{P}\left[\frac{1}{N}\sum_{n=1}^{N}\sup_{x\in D}u_{n}(x)\geq t\,\mathbb{ E}[\sup_{x\in D}u(x)]\right]\] \[\leq 2\exp\Big{(}-\frac{1}{2}(1-t)^{2}N(\mathbb{E}[\sup_{x\in D }u(x)])^{2}\Big{)},\]
where the last step follows by Lemma 3.1.
_Case 3:_ If \(\mathbb{E}[\sup_{x\in D}u(x)]>\sqrt{N}\), then \(\rho_{N}=\frac{1}{N}(\mathbb{E}[\sup_{x\in D}u(x)])^{2}\) and
\[\mathbb{P}[\widehat{\rho}_{N}<t\rho_{N}] \leq 1-\mathbb{P}\left[\frac{1}{N}\Big{(}\frac{1}{N}\sum_{n=1}^{N }\sup_{x\in D}u_{n}(x)\Big{)}^{2}\geq t\rho_{N}\right]=1-\mathbb{P}\left[ \left|\frac{1}{N}\sum_{n=1}^{N}\sup_{x\in D}u_{n}(x)\right|\geq\sqrt{t}\, \mathbb{E}[\sup_{x\in D}u(x)]\right]\] \[\leq 2\exp\Big{(}-\frac{1}{2}(1-\sqrt{t})^{2}N(\mathbb{E}[\sup_{ x\in D}u(x)])^{2}\Big{)}.\]
Combining the three cases above and noticing that \((1-\sqrt{t})^{2}\leq(1-t)^{2}\) for \(t\in(0,1)\) yields the first inequality in (3.1). To prove (3.2), recall that \(1\leq c_{0}\leq\sqrt{N}\) in the definition of \(\rho_{N}\). If \(\mathbb{E}[\sup_{x\in D}u(x)]<1/\sqrt{N}\), then (3.2) is trivial. If \(\frac{1}{\sqrt{N}}\leq\mathbb{E}[\sup_{x\in D}u(x)]\leq\sqrt{N}\), then \(\rho_{N}=\frac{c_{0}}{\sqrt{N}}\mathbb{E}[\sup_{x\in D}u(x)]\) and \(N(\mathbb{E}[\sup_{x\in D}u(x)])^{2}=\frac{N^{2}\rho_{N}^{2}}{c_{0}^{2}}\geq N \rho_{N}^{2}\), so that
\[2\,e^{-\frac{1}{2}(1-\sqrt{t})^{2}N(\mathbb{E}[\sup_{x\in D}u(x)])^{2}}\mathbf{ 1}\{\mathbb{E}[\sup_{x\in D}u(x)]\geq 1/\sqrt{N}\}\leq 2\,e^{-\frac{1}{2}(1- \sqrt{t})^{2}N\rho_{N}^{2}}.\]
If \(\mathbb{E}[\sup_{x\in D}u(x)]>\sqrt{N}\), then \(\rho_{N}=\frac{c_{0}}{N}(\mathbb{E}[\sup_{x\in D}u(x)])^{2}\) and \(N(\mathbb{E}[\sup_{x\in D}u(x)])^{2}=\frac{N^{2}\rho_{N}}{c_{0}}\geq N^{3/2} \rho_{N}\geq N\rho_{N}\), so that
\[2\,e^{-\frac{1}{2}(1-\sqrt{t})^{2}N(\mathbb{E}[\sup_{x\in D}u(x)])^{2}} \mathbf{1}\{\mathbb{E}[\sup_{x\in D}u(x)]\geq 1/\sqrt{N}\}\leq 2\,e^{-\frac{1}{2}(1- \sqrt{t})^{2}N\rho_{N}}.\qed\]
### 3.2 Covariance Function Estimation
In this subsection we establish uniform error bounds on the sample covariance function estimator. These bounds will play a central role in our analysis of thresholded estimation of covariance operators developed in the next subsection. We first establish a high-probability bound, which is uniform over both arguments of the covariance function.
**Proposition 3.3**.: _Under Assumption 2.1, there exist positive absolute constants \(c_{1},c_{2}\) such that, for all \(t\geq 1\), it holds with probability at least \(1-c_{1}e^{-c_{2}t}\) that_
\[\sup_{x,x^{\prime}\in D}\left|\widehat{k}(x,x^{\prime})-k(x,x^{\prime})\right| \lesssim\left[\left(\frac{t}{N}\vee\sqrt{\frac{t}{N}}\right)\mathbb{E}\left[ \sup_{x\in D}u(x)\right]\right]\vee\frac{\left(\mathbb{E}\left[\sup_{x\in D}u (x)\right]\right)^{2}}{N}.\]
Proof.: We will apply the product empirical process bound in [37, Theorem 1.13]. To that end, define the evaluation functional at \(x\in D\) by
\[\ell_{x}:u\longmapsto\ell_{x}(u)=u(x)\]
and write
\[\left|\widehat{k}(x,x^{\prime})-k(x,x^{\prime})\right|=\left|\frac{1}{N}\sum _{n=1}^{N}u_{n}(x)u_{n}(x^{\prime})-\mathbb{E}[u(x)u(x^{\prime})]\right|= \left|\frac{1}{N}\sum_{n=1}^{N}\ell_{x}(u_{n})\ell_{x^{\prime}}(u_{n})- \mathbb{E}[\ell_{x}(u)\ell_{x^{\prime}}(u)]\right|,\]
so that
\[\sup_{x,x^{\prime}\in D}\left|\widehat{k}(x,x^{\prime})-k(x,x^{\prime})\right| =\sup_{f,g\in\mathcal{F}}\left|\frac{1}{N}\sum_{n=1}^{N}f(u_{n})g(u_{n})- \mathbb{E}[f(u)g(u)]\right|,\]
where \(\mathcal{F}:=\{\ell_{x}\}_{x\in D}\) denotes the family of evaluation functionals. Note that \(\{\ell_{x}\}_{x\in D}\) are continuous linear functionals on \(C(D)\), the space of continuous functions on \(D\) endowed with its usual topology. We can then apply [37, Theorem 1.13] (see also [3, Theorem B.11]) which implies that, with probability \(1-c_{1}e^{-c_{2}t}\),
\[\sup_{x,x^{\prime}\in D}\left|\widehat{k}(x,x^{\prime})-k(x,x^{\prime})\right| \lesssim\left[\left(\frac{t}{N}\vee\sqrt{\frac{t}{N}}\right)\left(\sup_{f\in \mathcal{F}}\|f\|_{\psi_{2}}\gamma_{2}(\mathcal{F},\psi_{2})\right)\right] \vee\frac{\gamma_{2}^{2}(\mathcal{F},\psi_{2})}{N}, \tag{3.3}\]
where here and henceforth \(\gamma_{2}\) denotes Talagrand's generic complexity [45, Definition 2.7.3] and \(\psi_{2}\) denotes the Orlicz norm with Orlicz function \(\psi(x)=e^{x^{2}}-1\), see e.g. [50, Definition 2.5.6]. Since \(u\) is Gaussian, the \(\psi_{2}\)-norm of linear functionals is equivalent to the \(L^{2}\)-norm. Hence,
\[\sup_{f\in\mathcal{F}}\|f\|_{\psi_{2}}\lesssim\sup_{f\in\mathcal{F}}\|f\|_{L^ {2}}=\sup_{f\in\mathcal{F}}\sqrt{\mathbb{E}\left[f^{2}(u)\right]}=\sup_{x\in D }\sqrt{\mathbb{E}\left[u^{2}(x)\right]}=\sup_{x\in D}\sqrt{k(x,x)}=1, \tag{3.4}\]
where we used Assumption 2.1 (i) in the last step. Next, to control the complexity \(\gamma_{2}(\mathcal{F},\psi_{2})\), let
\[\mathsf{d}(x,x^{\prime}):=\sqrt{\mathbb{E}\left[(u(x)-u(x^{\prime}))^{2} \right]}=\left\|\ell_{x}(\cdot)-\ell_{x^{\prime}}(\cdot)\right\|_{L^{2}(P)}, \quad x,x^{\prime}\in D,\]
where \(P\) is the distribution of the random function \(u\). Then,
\[\gamma_{2}(\mathcal{F},\psi_{2})\stackrel{{\rm(i)}}{{\lesssim}} \gamma_{2}(\mathcal{F},L^{2})=\gamma_{2}(D,\mathsf{d})\stackrel{{ \rm(ii)}}{{\asymp}}\mathbb{E}\left[\sup_{x\in D}u(x)\right], \tag{3.5}\]
where (i) follows by the equivalence of \(\psi_{2}\) and \(L^{2}\) norms for linear functionals and (ii) follows by Talagrand's majorizing-measure theorem [45, Theorem 2.10.1]. Combining the inequalities (3.3), (3.4), and (3.5) gives the desired result.
**Corollary 3.4**.: _Under Assumption 2.1, it holds that, for any \(p\geq 1\),_
\[\left(\mathbb{E}\left[\sup_{x,x^{\prime}\in D}\left|\widehat{k}(x,x^{\prime})-k(x,x^{\prime})\right|^{p}\right]\right)^{\frac{1}{p}}\lesssim_{p}\frac{\mathbb{E}\left[\sup_{x\in D}u(x)\right]}{\sqrt{N}}\vee\frac{\left(\mathbb{E}\left[\sup_{x\in D}u(x)\right]\right)^{2}}{N}.\]
Proof.: The result follows by integrating the tail bound in Proposition 3.3.
In contrast to Proposition 3.3, the following result provides uniform control over the error when holding fixed one of the two covariance function inputs. For this easier estimation task, we obtain an improved exponential tail bound that we will use in the proof of Theorem 2.2.
**Proposition 3.5**.: _Suppose that Assumption 2.1 holds. Let \(1\leq c_{0}\leq N\) and set_
\[\rho_{N}:=c_{0}\left[\frac{1}{N}\vee\frac{1}{\sqrt{N}}\mathbb{E}\Big{[}\sup_{ x\in D}u(x)\Big{]}\vee\frac{1}{N}\left(\mathbb{E}\Big{[}\sup_{x\in D}u(x) \Big{]}\right)^{2}\right].\]
_Then, for every \(x^{\prime}\in D\), it holds with probability at least \(1-4e^{-c_{1}N\left(\rho_{N}\wedge\rho_{N}^{2}\right)}\) that_
\[\sup_{x\in D}\left|\widehat{k}(x,x^{\prime})-k(x,x^{\prime})\right|\lesssim \rho_{N}.\]
Proof.: We will apply the multiplier empirical process bound in [37, Theorem 4.4]. To that end, we write
\[\left|\widehat{k}(x,x^{\prime})-k(x,x^{\prime})\right|=\left|\frac{1}{N}\sum _{n=1}^{N}u_{n}(x)u_{n}(x^{\prime})-\mathbb{E}[u(x)u(x^{\prime})]\right|=\left| \frac{1}{N}\sum_{n=1}^{N}\ell_{x}(u_{n})\ell_{x^{\prime}}(u_{n})-\mathbb{E}[ \ell_{x}(u)\ell_{x^{\prime}}(u)]\right|,\]
so that for the class \(\mathcal{F}:=\{\ell_{x}\}_{x\in D}\) of evaluation functionals and for a fixed \(g\in\mathcal{F}\), we have
\[\sup_{x\in D}\left|\widehat{k}(x,x^{\prime})-k(x,x^{\prime})\right| =\sup_{f\in\mathcal{F}}\left|\frac{1}{N}\sum_{n=1}^{N}f(u_{n})g( u_{n})-\mathbb{E}[f(u)g(u)]\right|\] \[=\frac{1}{N}\sup_{f\in\mathcal{F}}\left|\sum_{n=1}^{N}\left(f(u_{ n})\xi_{n}-\mathbb{E}[f(u)\xi]\right)\right|,\]
where \(\xi_{n}:=g(u_{n})\). Note that \(\xi_{1},\ldots,\xi_{N}\) are i.i.d. copies of \(\xi\sim\mathcal{N}\big{(}0,k(x^{\prime},x^{\prime})\big{)}\), where \(x^{\prime}\in D\) is the point indexed by \(g\). By [37, Theorem 4.4] we have that for any \(s,t\geq 1\), it holds with probability at least \(1-2e^{-c_{1}s^{2}\left(\mathbb{E}[\sup_{x\in D}u(x)]\right)^{2}}-2e^{-c_{1}Nt^{2}}\) that
\[\sup_{x\in D}\left|\widehat{k}(x,x^{\prime})-k(x,x^{\prime})\right|\lesssim \frac{st\|\xi\|_{\psi_{2}}\,\mathbb{E}[\sup_{x\in D}u(x)]}{\sqrt{N}}\leq\frac {st\,\mathbb{E}[\sup_{x\in D}u(x)]}{\sqrt{N}}, \tag{3.6}\]
where the last inequality follows by the fact that \(\|\xi\|_{\psi_{2}}\leq\sqrt{k(x^{\prime},x^{\prime})}\leq\sup_{x\in D}\sqrt{k (x,x)}=1\). We consider three cases:
_Case 1:_ If \(\mathbb{E}[\sup_{x\in D}u(x)]<\frac{1}{\sqrt{N}}\), then \(\rho_{N}=\frac{c_{0}}{N}<1\). We take
\[s=\frac{c_{0}}{\sqrt{N}\,\mathbb{E}[\sup_{x\in D}u(x)]}>1,\qquad t=1,\]
and then (3.6) implies that it holds with probability at least \(1-2e^{-c_{1}c_{0}^{2}/N}-2e^{-c_{1}N}\stackrel{{\rm(i)}}{{\geq}}1-4e^ {-c_{1}c_{0}^{2}/N}=1-4e^{-c_{1}N\rho_{N}^{2}}\) that
\[\sup_{x\in D}|\widehat{k}(x,x^{\prime})-k(x,x^{\prime})|\lesssim\frac{st\, \mathbb{E}[\sup_{x\in D}u(x)]}{\sqrt{N}}=\frac{c_{0}}{N}=\rho_{N},\]
where (i) follows since \(c_{0}\leq N\) by assumption.
_Case 2:_ If \(\frac{1}{\sqrt{N}}\leq\mathbb{E}[\sup_{x\in D}u(x)]\leq\sqrt{N}\), then \(\rho_{N}=\frac{c_{0}}{\sqrt{N}}\mathbb{E}[\sup_{x\in D}u(x)]\). In this case, if \(\rho_{N}=\frac{c_{0}}{\sqrt{N}}\mathbb{E}[\sup_{x\in D}u(x)]>1\), we take
\[s=\sqrt{\frac{c_{0}\sqrt{N}}{\mathbb{E}[\sup_{x\in D}u(x)]}}\geq 1,\qquad t= \sqrt{\frac{c_{0}}{\sqrt{N}}\mathbb{E}[\sup_{x\in D}u(x)]}>1,\]
and then (3.6) implies that it holds with probability at least \(1-4e^{-c_{1}c_{0}\sqrt{N}\mathbb{E}[\sup_{x\in D}u(x)]}=1-4e^{-c_{1}N\rho_{N}}\) that
\[\sup_{x\in D}|\widehat{k}(x,x^{\prime})-k(x,x^{\prime})|\lesssim\frac{st\, \mathbb{E}[\sup_{x\in D}u(x)]}{\sqrt{N}}=\frac{c_{0}}{\sqrt{N}}\mathbb{E}[ \sup_{x\in D}u(x)]=\rho_{N}.\]
If \(\rho_{N}=\frac{c_{0}}{\sqrt{N}}\mathbb{E}[\sup_{x\in D}u(x)]\leq 1\), then we take \(s=c_{0}\geq 1\) and \(t=1\), and (3.6) implies that, with probability at least
\[1-2e^{-c_{1}c_{0}^{2}(\mathbb{E}[\sup_{x\in D}u(x)])^{2}}-2e^{-c_{1}N}\geq 1-4e ^{-c_{1}c_{0}^{2}(\mathbb{E}[\sup_{x\in D}u(x)])^{2}}=1-4e^{-c_{1}N\rho_{N}^{2 }},\]
it holds that
\[\sup_{x\in D}|\widehat{k}(x,x^{\prime})-k(x,x^{\prime})|\lesssim\frac{st\, \mathbb{E}[\sup_{x\in D}u(x)]}{\sqrt{N}}=\frac{c_{0}}{\sqrt{N}}\mathbb{E}[ \sup_{x\in D}u(x)]=\rho_{N}.\]
_Case 3:_ If \(\mathbb{E}[\sup_{x\in D}u(x)]>\sqrt{N}\), then \(\rho_{N}=\frac{c_{0}}{N}(\mathbb{E}[\sup_{x\in D}u(x)])^{2}>1\). We take
\[s=\sqrt{c_{0}}\geq 1,\qquad t=\sqrt{c_{0}}\frac{\mathbb{E}[\sup_{x\in D}u(x) ]}{\sqrt{N}}>1,\]
and (3.6) implies that it holds with probability at least \(1-4e^{-c_{1}c_{0}(\mathbb{E}[\sup_{x\in D}u(x)])^{2}}=1-4e^{-c_{1}N\rho_{N}}\) that
\[\sup_{x\in D}|\widehat{k}(x,x^{\prime})-k(x,x^{\prime})|\lesssim\frac{st\, \mathbb{E}[\sup_{x\in D}u(x)]}{\sqrt{N}}=\frac{c_{0}}{N}(\mathbb{E}[\sup_{x \in D}u(x)])^{2}=\rho_{N}.\]
Combining the three cases above gives the desired result.
### Proof of Theorem 2.2
Proof of Theorem 2.2.: The operator norm can be upper bounded as
\[\|\widehat{C}_{\widehat{\rho}_{N}}-C\|\leq\sup_{x\in D}\int_{D}\big{|}\widehat{k}_{\widehat{\rho}_{N}}(x,x^{\prime})-k(x,x^{\prime})\big{|}\,dx^{\prime}.\]
Let \(\Omega_{x}:=\{x^{\prime}\in D:|k(x,x^{\prime})|\geq\widehat{\rho}_{N}\}\) and let \(\Omega_{x}^{c}\) be its complement. Then, we have
\[\mathbb{E}\|\widehat{C}_{\widehat{\rho}_{N}}-C\|^{p}\leq\mathbb{E} \left[\left(\sup_{x\in D}\int_{D}\,\left|\widehat{k}_{\widehat{\rho}_{N}}(x,x^{ \prime})-k(x,x^{\prime})\right|dx^{\prime}\right)^{p}\right]\] \[\leq 2^{p-1}\mathbb{E}\left[\left(\sup_{x\in D}\int_{\Omega_{x}}\, \left|\widehat{k}_{\widehat{\rho}_{N}}(x,x^{\prime})-k(x,x^{\prime})\right|dx^ {\prime}\right)^{p}\right]+2^{p-1}\mathbb{E}\left[\left(\sup_{x\in D}\int_{ \Omega_{x}^{c}}\,\left|\widehat{k}_{\widehat{\rho}_{N}}(x,x^{\prime})-k(x,x^ {\prime})\right|dx^{\prime}\right)^{p}\right]\] \[\lesssim_{p}\mathbb{E}\left[\left(\sup_{x\in D}\int_{\Omega_{x}} \,\left|\widehat{k}_{\widehat{\rho}_{N}}(x,x^{\prime})-k(x,x^{\prime})\right| dx^{\prime}\right)^{p}\right]+\mathbb{E}\left[\left(\sup_{x\in D}\int_{\Omega_{x}^{c}}\, \left|k(x,x^{\prime})\right|\mathbf{1}\left\{\left|\widehat{k}(x,x^{\prime}) \right|<\widehat{\rho}_{N}\right\}dx^{\prime}\right)^{p}\right]\] \[\quad+\mathbb{E}\left[\left(\sup_{x\in D}\int_{\Omega_{x}^{c}}\, \left|\widehat{k}(x,x^{\prime})-k(x,x^{\prime})\right|\mathbf{1}\left\{\left| \widehat{k}(x,x^{\prime})\right|\geq\widehat{\rho}_{N}\right\}\mathbf{1}\left\{ \left|\widehat{k}(x,x^{\prime})-k(x,x^{\prime})\right|<4|k(x,x^{\prime})| \right\}dx^{\prime}\right)^{p}\right]\] \[\quad+\mathbb{E}\left[\left(\sup_{x\in D}\int_{\Omega_{x}^{c}}\, \left|\widehat{k}(x,x^{\prime})-k(x,x^{\prime})\right|\mathbf{1}\left\{\left| \widehat{k}(x,x^{\prime})\right|\geq\widehat{\rho}_{N}\right\}\mathbf{1}\left\{ \left|\widehat{k}(x,x^{\prime})-k(x,x^{\prime})\right|\geq 4|k(x,x^{\prime})| \right\}dx^{\prime}\right)^{p}\right]\] \[=:I_{1}+I_{2}+I_{3}+I_{4}. \tag{3.7}\]
We next bound the four terms \(\left\{I_{i}\right\}_{i=1}^{4}\). To ease notation, we define
\[\|\widehat{k}-k\|_{\max}:=\sup_{x,x^{\prime}\in D}\,|\widehat{k}(x,x^{\prime} )-k(x,x^{\prime})|.\]
For \(I_{1}\), using that
\[\left|\widehat{k}_{\widehat{\rho}_{N}}(x,x^{\prime})-k(x,x^{\prime})\right| \leq\left|\widehat{k}_{\widehat{\rho}_{N}}(x,x^{\prime})-\widehat{k}(x,x^{ \prime})\right|+\left|\widehat{k}(x,x^{\prime})-k(x,x^{\prime})\right|\leq \widehat{\rho}_{N}+\|\widehat{k}-k\|_{\max},\]
we have
\[I_{1}=\mathbb{E}\left[\left(\sup_{x\in D}\int_{\Omega_{x}}\,\left|\widehat{k} _{\widehat{\rho}_{N}}(x,x^{\prime})-k(x,x^{\prime})\right|dx^{\prime}\right)^ {p}\right]\leq\mathbb{E}\left[\left(\sup_{x\in D}\mathrm{Vol}(\Omega_{x}) \right)^{p}\left(\widehat{\rho}_{N}+\|\widehat{k}-k\|_{\max}\right)^{p}\right],\]
where \(\mathrm{Vol}(\Omega_{x})\) denotes the Lebesgue measure of \(\Omega_{x}\). Notice that
\[R_{q}^{q}\geq\sup_{x\in D}\int_{D}|k(x,x^{\prime})|^{q}dx^{\prime}\geq\sup_{x \in D}\int_{\Omega_{x}}|k(x,x^{\prime})|^{q}dx^{\prime}\geq\sup_{x\in D}\int_{ \Omega_{x}}\widehat{\rho}_{N}^{\,q}dx^{\prime}=\widehat{\rho}_{N}^{\,q}\sup_{ x\in D}\mathrm{Vol}(\Omega_{x}).\]
Combining this bound with the trivial bound \(\sup_{x}\mathrm{Vol}(\Omega_{x})\leq\mathrm{Vol}(D)=1\) gives
\[\sup_{x\in D}\mathrm{Vol}(\Omega_{x})\leq R_{q}^{q}\widehat{\rho}_{N}^{-q}\wedge 1.\]
Therefore, by Cauchy-Schwarz, we have that
\[I_{1} \leq\mathbb{E}\left[\left(R_{q}^{q}\widehat{\rho}_{N}^{-q}\wedge 1 \right)^{p}(\widehat{\rho}_{N}+\|\widehat{k}-k\|_{\max})^{p}\right]\] \[\leq\sqrt{\mathbb{E}\left[\left(R_{q}^{q}\widehat{\rho}_{N}^{-q} \wedge 1\right)^{2p}\right]\,\mathbb{E}\left[\left(\widehat{\rho}_{N}+\|\widehat{k}- k\|_{\max}\right)^{2p}\right]}. \tag{3.8}\]
Using Lemma 3.2 and Corollary 3.4 yields that
\[\mathbb{E}\left[\left(\widehat{\rho}_{N}+\|\widehat{k}-k\|_{\max}\right)^{2p} \right]\lesssim_{p}\mathbb{E}\left[\left(\widehat{\rho}_{N}\right)^{2p}\right] +\mathbb{E}\left[\|\widehat{k}-k\|_{\max}^{2p}\right]\lesssim_{P}\rho_{N}^{2p}. \tag{3.9}\]
On the other hand,
\[\mathbb{E}\Big{[}\big{(}R_{q}^{q}\widehat{\rho}_{N}^{-q}\wedge 1 \big{)}^{2p}\Big{]} =R_{q}^{2pq}\,\mathbb{E}\Big{[}\widehat{\rho}_{N}^{-2pq}\wedge R_{q} ^{-2pq}\Big{]}=R_{q}^{2pq}\int_{0}^{\infty}\mathbb{P}\Big{[}\big{(}\widehat{ \rho}_{N}^{-2pq}\wedge R_{q}^{-2pq}\big{)}>t\Big{]}dt\] \[=R_{q}^{2pq}\int_{0}^{R_{q}^{-2pq}}\mathbb{P}\Big{[}\widehat{\rho }_{N}^{-2pq}>t\Big{]}dt=2pqR_{q}^{2pq}\int_{R_{q}}^{\infty}\mathbb{P}[\widehat{ \rho}_{N}<t]\ t^{-2pq-1}dt.\]
If \(R_{q}>\rho_{N}\), then
\[\mathbb{E}\Big{[}\big{(}R_{q}^{q}\widehat{\rho}_{N}^{-q}\wedge 1 \big{)}^{2p}\Big{]}\leq 2pqR_{q}^{2pq}\int_{\rho_{N}}^{\infty}t^{-2pq-1}dt=R_{q}^{ 2pq}\rho_{N}^{-2pq}. \tag{3.10}\]
If \(R_{q}<\rho_{N}\), then
\[\mathbb{E}\Big{[}\big{(}R_{q}^{q}\widehat{\rho}_{N}^{-q}\wedge 1 \big{)}^{2p}\Big{]}=2pqR_{q}^{2pq}\Bigg{(}\int_{\rho_{N}}^{\infty}+\int_{R_{q} }^{\rho_{N}}\Bigg{)}\mathbb{P}[\widehat{\rho}_{N}<t]\ t^{-2pq-1}dt\] \[\leq 2pqR_{q}^{2pq}\int_{\rho_{N}}^{\infty}t^{-2pq-1}dt+2pqR_{q}^ {2pq}\int_{R_{q}}^{\rho_{N}}\mathbb{P}[\widehat{\rho}_{N}<t]\ t^{-2pq-1}dt\] \[=R_{q}^{2pq}\rho_{N}^{-2pq}+2pqR_{q}^{2pq}\rho_{N}^{-2pq}\int_{R_ {q}\rho_{N}^{-1}}^{1}\mathbb{P}[\widehat{\rho}_{N}<t\rho_{N}]\ t^{-2pq-1}dt\] \[\overset{\rm(i)}{\leq}R_{q}^{2pq}\rho_{N}^{-2pq}+2pqR_{q}^{2pq} \rho_{N}^{-2pq}\int_{R_{q}\rho_{N}^{-1}}^{1}2\exp\Big{(}-\frac{1}{2}(1-\sqrt{t })^{2}N(\rho_{N}\wedge\rho_{N}^{2})\Big{)}t^{-2pq-1}dt\] \[\overset{\rm(ii)}{=}R_{q}^{2pq}\rho_{N}^{-2pq}\Bigg{[}1+8pq\int_ {0}^{\sqrt{N\,(\rho_{N}\wedge\rho_{N}^{2})\,(1-\sqrt{R_{q}\rho_{N}^{-1}})}}\, \frac{\big{(}N(\rho_{N}\wedge\rho_{N}^{2})\big{)}^{2pq}\exp(-\frac{1}{2}t^{2}) }{\big{(}\sqrt{N(\rho_{N}\wedge\rho_{N}^{2})\,-t}\big{)}^{4pq+1}}\ dt\Bigg{]}\] \[\overset{\rm(iii)}{\lesssim}R_{q}^{2pq}\rho_{N}^{-2pq}+R_{q}^{2pq }\rho_{N}^{-2pq}\cdot 8pq\Bigg{(}\frac{2R_{q}^{-2pq}\rho_{N}^{2pq}}{4pq}e^{-\frac{1}{8}N (\rho_{N}\wedge\rho_{N}^{2})\big{(}1-\sqrt{R_{q}\rho_{N}^{-1}}\big{)}^{2}}+ \frac{2^{4pq}}{4pq}\Bigg{)}\] \[\lesssim_{P}R_{q}^{2pq}\rho_{N}^{-2pq}+e^{-\frac{1}{8}N(\rho_{N} \wedge\rho_{N}^{2})\,\big{(}1-\sqrt{R_{q}\rho_{N}^{-1}}\big{)}^{2}}\] \[\overset{\rm(iv)}{\lesssim}_{P}R_{q}^{2pq}\rho_{N}^{-2pq}+e^{-cN( \rho_{N}\wedge\rho_{N}^{2})}, \tag{3.11}\]
where (i) follows from Lemma 3.2, (ii) follows by a change of variable, and (iii) follows by applying Lemma 3.6 below with \(\alpha=\sqrt{N(\rho_{N}\wedge\rho_{N}^{2})}\) and \(\beta=\sqrt{N(\rho_{N}\wedge\rho_{N}^{2})}\sqrt{R_{q}\rho_{N}^{-1}}\). To prove (iv), notice that if \(R_{q}\leq\frac{1}{4}\rho_{N}\), then \(\big{|}1-\sqrt{R_{q}\rho_{N}^{-1}}\big{|}>\frac{1}{2}\) and (iv) holds; if \(\frac{1}{4}\rho_{N}<R_{q}<\rho_{N}\), then
\[e^{-\frac{1}{8}N(\rho_{N}\wedge\rho_{N}^{2})\,\big{(}1-\sqrt{R_{q}\rho_{N}^{-1} }\big{)}^{2}}\leq 1<16^{p}R_{q}^{2p}\rho_{N}^{-2p}\leq 16^{p}R_{q}^{2pq}\rho_{N}^{-2 pq}.\]
Combining the inequalities (3.8), (3.9), (3.10), and (3.11) gives that
\[I_{1}\leq\sqrt{\mathbb{E}\Big{[}\big{(}R_{q}^{q}\widehat{\rho}_{N}^{-q}\wedge 1 \big{)}^{2p}\Big{]}\ \mathbb{E}\Big{[}\big{(}\widehat{\rho}_{N}+\|\widehat{k}-k\|_{\max}\big{)}^{2p }\Big{]}}\lesssim_{P}R_{q}^{pq}\rho_{N}^{P(1-q)}+\rho_{N}^{p}e^{-cN(\rho_{N} \wedge\rho_{N}^{2})}.\]
For \(I_{2}\) and \(I_{3}\),
\[I_{2}+I_{3} =\mathbb{E}\Bigg{[}\left(\sup_{x\in D}\int_{\Omega_{x}^{c}}|k(x,x^{\prime})|\,\mathbf{1}\big{\{}|\widehat{k}(x,x^{\prime})|<\widehat{\rho}_{N}\big{\}}\,dx^{\prime}\right)^{p}\Bigg{]}\] \[\stackrel{{\rm(i)}}{{\leq}}\mathbb{E}\Bigg{[}\left(\widehat{\rho}_{N}\sup_{x\in D}\int_{\Omega_{x}^{c}}\left(\frac{|k(x,x^{\prime})|}{\widehat{\rho}_{N}}\right)^{q}dx^{\prime}\right)^{p}\Bigg{]}\leq\mathbb{E}\left[R_{q}^{pq}\widehat{\rho}_{N}^{\,p(1-q)}\right]\stackrel{{\rm(ii)}}{{\lesssim}}_{p}\ R_{q}^{pq}\rho_{N}^{p(1-q)},\]
where (i) follows since \(q\in(0,1)\) and \(|k(x,x^{\prime})|<\widehat{\rho}_{N}\) for \(x^{\prime}\in\Omega_{x}^{c}\). To prove (ii), we notice that if \(p(1-q)\leq 1\), then using Jensen's inequality and Lemma 3.2 yields that \(\mathbb{E}\big{[}\,\widehat{\rho}_{N}^{\,p(1-q)}\big{]}\leq(\mathbb{E}[\, \widehat{\rho}_{N}])^{p(1-q)}\lesssim\rho_{N}^{p(1-q)}\). If \(p(1-q)>1\), Lemma 3.2 implies that \(\mathbb{E}[\,\widehat{\rho}_{N}^{\,p(1-q)}\,]\lesssim\rho_{N}^{\,p(1-q)}\).
For \(I_{4}\),
\[I_{4} =\mathbb{E}\Bigg{[}\left(\sup_{x\in D}\int_{\Omega_{x}^{c}}\,|\widehat{k}(x,x^{\prime})-k(x,x^{\prime})|\mathbf{1}\big{\{}|\widehat{k}(x,x^{\prime})|\geq\widehat{\rho}_{N}\big{\}}\mathbf{1}\big{\{}|\widehat{k}(x,x^{\prime})-k(x,x^{\prime})|\geq 4|k(x,x^{\prime})|\big{\}}\,dx^{\prime}\right)^{p}\Bigg{]}\] \[\stackrel{{\rm(i)}}{{\leq}}\mathbb{E}\Bigg{[}\left(\sup_{x\in D}\int_{\Omega_{x}^{c}}\,|\widehat{k}(x,x^{\prime})-k(x,x^{\prime})|\mathbf{1}\big{\{}|\widehat{k}(x,x^{\prime})|\geq\widehat{\rho}_{N}\big{\}}\mathbf{1}\big{\{}|\widehat{k}(x,x^{\prime})-k(x,x^{\prime})|\geq\frac{2}{3}\widehat{\rho}_{N}\big{\}}\,dx^{\prime}\right)^{p}\Bigg{]}\] \[\leq\mathbb{E}\Bigg{[}\left(\|\widehat{k}-k\|_{\max}\int_{D}\mathbf{1}\big{\{}\sup_{x\in D}|\widehat{k}(x,x^{\prime})-k(x,x^{\prime})|\geq\frac{2}{3}\widehat{\rho}_{N}\big{\}}\,dx^{\prime}\right)^{p}\Bigg{]}\] \[\leq\bigg{(}\mathbb{E}\big{[}\|\widehat{k}-k\|_{\max}^{2p}\big{]}\bigg{)}^{1/2}\bigg{(}\mathbb{E}\bigg{[}\left(\int_{D}\mathbf{1}\big{\{}\sup_{x\in D}|\widehat{k}(x,x^{\prime})-k(x,x^{\prime})|\geq\frac{2}{3}\widehat{\rho}_{N}\big{\}}\,dx^{\prime}\right)^{2p}\bigg{]}\bigg{)}^{1/2}\] \[\stackrel{{\rm(ii)}}{{\leq}}\bigg{(}\mathbb{E}\big{[}\|\widehat{k}-k\|_{\max}^{2p}\big{]}\bigg{)}^{1/2}\bigg{(}\mathbb{E}\Big{[}\int_{D}\mathbf{1}\big{\{}\sup_{x\in D}|\widehat{k}(x,x^{\prime})-k(x,x^{\prime})|\geq\frac{2}{3}\widehat{\rho}_{N}\big{\}}\,dx^{\prime}\Big{]}\bigg{)}^{1/2}\] \[=\bigg{(}\mathbb{E}\big{[}\|\widehat{k}-k\|_{\max}^{2p}\big{]}\bigg{)}^{1/2}\bigg{(}\int_{D}\mathbb{P}\bigg{[}\sup_{x\in D}|\widehat{k}(x,x^{\prime})-k(x,x^{\prime})|\geq\frac{2}{3}\widehat{\rho}_{N}\bigg{]}\,dx^{\prime}\bigg{)}^{1/2},\]
where (i) follows since \(|\widehat{k}(x,x^{\prime})-k(x,x^{\prime})|\geq 4|k(x,x^{\prime})|\) implies that \(|\widehat{k}(x,x^{\prime})|\geq 3|k(x,x^{\prime})|\), and therefore if \(|\widehat{k}(x,x^{\prime})-k(x,x^{\prime})|\geq 4|k(x,x^{\prime})|\) and \(|\widehat{k}(x,x^{\prime})|\geq\widehat{\rho}_{N}\), then it holds that
\[|\widehat{k}(x,x^{\prime})-k(x,x^{\prime})|\geq|\widehat{k}(x,x^{\prime})|-|k(x, x^{\prime})|\geq\frac{2}{3}|\widehat{k}(x,x^{\prime})|\geq\frac{2}{3}\widehat{\rho}_{N}.\]
To prove (ii), note that \(p\geq 1\) and \(\int_{D}\mathbf{1}\left\{\sup_{x\in D}|\widehat{k}(x,x^{\prime})-k(x,x^{\prime})| \geq\frac{2}{3}\widehat{\rho}_{N}\right\}dx^{\prime}\leq 1\). Next, notice that
\[\mathbb{P}\left[\sup_{x\in D}|\widehat{k}(x,x^{\prime})-k(x,x^{ \prime})|\geq\frac{2}{3}\widehat{\rho}_{N}\right] =\mathbb{P}\left[\frac{2}{3}(\rho_{N}-\widehat{\rho}_{N})+\sup_{x \in D}|\widehat{k}(x,x^{\prime})-k(x,x^{\prime})|\geq\frac{2}{3}\rho_{N}\right]\] \[\leq\mathbb{P}\left[\sup_{x\in D}|\widehat{k}(x,x^{\prime})-k(x, x^{\prime})|\geq\frac{1}{3}\rho_{N}\right]+\mathbb{P}\left[\rho_{N}-\widehat{ \rho}_{N}\geq\frac{1}{2}\rho_{N}\right].\]
Lemma 3.2 then implies that
\[\mathbb{P}\left[\rho_{N}-\widehat{\rho}_{N}\geq\frac{1}{2}\rho_{N}\right]= \mathbb{P}\left[\widehat{\rho}_{N}\leq\frac{1}{2}\rho_{N}\right]\lesssim e^{- c_{1}N(\rho_{N}\wedge\rho_{N}^{2})},\]
and Proposition 3.5 gives that
\[\mathbb{P}\left[\sup_{x\in D}|\widehat{k}(x,x^{\prime})-k(x,x^{\prime})|\geq \frac{1}{3}\rho_{N}\right]\lesssim e^{-c_{2}N(\rho_{N}\wedge\rho_{N}^{2})}.\]
Moreover, Corollary 3.4 yields that \(\left(\mathbb{E}\left[\left\|\widehat{k}-k\right\|_{\max}^{2p}\right]\right)^ {1/2}\lesssim_{P}\rho_{N}^{p}\). Therefore,
\[I_{4}\leq\left(\mathbb{E}\left[\left\|\widehat{k}-k\right\|_{\max}^{2p}\right] \right)^{1/2}\left(\int_{D}\mathbb{P}\left[\sup_{x\in D}|\widehat{k}(x,x^{ \prime})-k(x,x^{\prime})|\geq\frac{2}{3}\widehat{\rho}_{N}\right]dx^{\prime} \right)^{1/2}\lesssim_{P}\rho_{N}^{p}e^{-cN(\rho_{N}\wedge\rho_{N}^{2})}.\]
Combining (3.7) with the estimates of \(I_{1},I_{2},I_{3}\), and \(I_{4}\) gives that
\[\mathbb{E}\|\widehat{C}_{\widehat{\rho}_{N}}-C\|^{p}\lesssim_{P}I_{1}+I_{2}+I _{3}+I_{4}\lesssim_{P}R_{q}^{pq}\rho_{N}^{p(1-q)}+\rho_{N}^{p}e^{-cN(\rho_{N} \wedge\rho_{N}^{2})},\]
and hence
\[\left[\mathbb{E}\|\widehat{C}_{\widehat{\rho}_{N}}-C\|^{p}\right]^{\frac{1}{p} }\lesssim_{P}R_{q}^{q}\rho_{N}^{1-q}+\rho_{N}e^{-\frac{c}{p}N(\rho_{N}\wedge \rho_{N}^{2})}.\qed\]
The following lemma was used in the proof of Theorem 2.2.
**Lemma 3.6**.: _For any \(\alpha>\beta>0\) and \(q>0\), it holds that_
\[\int_{0}^{\alpha-\beta}e^{-\frac{1}{2}t^{2}}(\alpha-t)^{-q-1}\,dt\leq\frac{2 \beta^{-q}}{q}e^{-\frac{(\alpha-\beta)^{2}}{8}}+\frac{1}{q}\Big{(}\frac{\alpha }{2}\Big{)}^{-q}.\]
Proof.: Integrating by parts gives that
\[\int_{0}^{\alpha-\beta}e^{-\frac{1}{2}t^{2}}(\alpha-t)^{-q-1}dt =\frac{\beta^{-q}}{q}e^{-\frac{(\alpha-\beta)^{2}}{2}}-\frac{ \alpha^{-q}}{q}+\int_{0}^{\alpha-\beta}e^{-\frac{1}{2}t^{2}}t\frac{(\alpha-t) ^{-q}}{q}dt\] \[=\frac{\beta^{-q}}{q}e^{-\frac{(\alpha-\beta)^{2}}{2}}-\frac{ \alpha^{-q}}{q}+\left(\int_{0}^{\frac{\alpha-\beta}{2}}+\int_{\frac{\alpha- \beta}{2}}^{\alpha-\beta}\right)e^{-\frac{1}{2}t^{2}}t\frac{(\alpha-t)^{-q}}{ q}dt.\]
First,
\[\int_{0}^{\frac{\alpha-\beta}{2}}e^{-\frac{1}{2}t^{2}}t\frac{(\alpha-t)^{-q} }{q}dt\leq\frac{1}{q}\Big{(}\alpha-\frac{\alpha-\beta}{2}\Big{)}^{-q}\int_{0} ^{\frac{\alpha-\beta}{2}}e^{-\frac{1}{2}t^{2}}t\,dt\leq\frac{1}{q}\Big{(} \frac{\alpha+\beta}{2}\Big{)}^{-q}\leq\frac{1}{q}\Big{(}\frac{\alpha}{2} \Big{)}^{-q}.\]
Second,
\[\int_{\frac{\alpha-\beta}{2}}^{\alpha-\beta}e^{-\frac{1}{2}t^{2}}t\frac{(\alpha- t)^{-q}}{q}dt\leq\frac{1}{q}(\alpha-(\alpha-\beta))^{-q}\int_{\frac{\alpha-\beta}{2}}^{ \alpha-\beta}e^{-\frac{1}{2}t^{2}}t\,dt\leq\frac{\beta^{-q}}{q}e^{-\frac{( \alpha-\beta)^{2}}{8}}.\]
Thus,
\[\int_{0}^{\alpha-\beta}e^{-\frac{1}{2}t^{2}}(\alpha-t)^{-q-1}dt \leq\frac{\beta^{-q}}{q}e^{-\frac{(\alpha-\beta)^{2}}{2}}-\frac{ \alpha^{-q}}{q}+\frac{\beta^{-q}}{q}e^{-\frac{(\alpha-\beta)^{2}}{8}}+\frac{1 }{q}\left(\frac{\alpha}{2}\right)^{-q}\] \[\leq\frac{2\beta^{-q}}{q}e^{-\frac{(\alpha-\beta)^{2}}{8}}+\frac {1}{q}\left(\frac{\alpha}{2}\right)^{-q}.\qed\]
## 4 Small Lengthscale Regime
This section studies thresholded estimation of covariance operators under the small lengthscale regime formalized in Assumption 2.4. We first present three lemmas which establish the sharp scaling of the \(L^{q}\)-sparsity level, the operator norm of the covariance operator, and the suprema of Gaussian fields in the small lengthscale regime. Combining these lemmas and Theorem 2.2, we then prove Theorem 2.5.
The following result establishes the scaling of the \(L^{q}\)-sparsity level in the small lengthscale regime.
**Lemma 4.1**.: _Under Assumption 2.4, it holds that_
\[\sup_{x\in D}\int_{D}|k(x,x^{\prime})|^{q}dx^{\prime}\asymp\lambda^{d}A(d)\int_{0}^{\infty}k_{1}(r)^{q}r^{d-1}dr,\quad\lambda\to 0,\]
_where \(A(d)\) denotes the surface area of the unit sphere in \(\mathbb{R}^{d}\)._
Proof.: Using \(k_{\lambda}(r)=k_{1}(\lambda^{-1}r)\), we have that
\[\sup_{x\in D}\int_{D} |k(x,x^{\prime})|^{q}dx^{\prime}\geq\int_{D\times D}k(x,x^{\prime })^{q}dxdx^{\prime}=\int_{[0,1]^{d}\times[0,1]^{d}}k_{\lambda}(|x-x^{\prime}|) ^{q}\,dxdx^{\prime}\] \[=\int_{[0,1]^{d}\times[0,1]^{d}}k_{1}(\lambda^{-1}|x-x^{\prime}| )^{q}\,dxdx^{\prime}=\lambda^{2d}\int_{[0,\lambda^{-1}]^{d}\times[0,\lambda^{- 1}]^{d}}k_{1}(|x-x^{\prime}|)^{q}\,dxdx^{\prime}\] \[\overset{\rm(i)}{=}\lambda^{2d}\int_{[-\lambda^{-1},\lambda^{-1 }]^{d}}k_{1}(|w|)^{q}\prod_{j=1}^{d}(\lambda^{-1}-|w_{j}|)\,dw\] \[=\lambda^{d}\int_{[-\lambda^{-1},\lambda^{-1}]^{d}}k_{1}(|w|)^{q} \prod_{j=1}^{d}(1-\lambda|w_{j}|)\,dw\] \[\overset{\rm(ii)}{\asymp}\lambda^{d}\int_{\mathbb{R}^{d}}k_{1}(| w|)^{q}\,dw\overset{\rm(iii)}{=}\lambda^{d}A(d)\int_{0}^{\infty}k_{1}(r)^{q}r^{d-1 }\,dr,\quad\lambda\to 0, \tag{4.1}\]
where (i) follows by a change of variables \(w=x-x^{\prime},z=x+x^{\prime}\) and integrating \(z\), (ii) follows by dominated convergence as \(\lambda\to 0\), and (iii) follows from the polar coordinate transform in \(\mathbb{R}^{d}\). On the
other hand,
\[\sup_{x\in D}\int_{D}|k(x,x^{\prime})|^{q}dx^{\prime} \leq\sup_{x\in D}\int_{\mathbb{R}^{d}}k\left(|x-x^{\prime}|\right)^{ q}dx^{\prime}\] \[=\int_{\mathbb{R}^{d}}k\left(|x^{\prime}|\right)^{q}dx^{\prime}= \lambda^{d}\int_{\mathbb{R}^{d}}k_{1}\left(|x^{\prime}|\right)^{q}dx^{\prime}= \lambda^{d}A(d)\int_{0}^{\infty}k_{1}(r)^{q}r^{d-1}\,dr,\]
which concludes the proof.
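As a quick numerical illustration of Lemma 4.1 in \(d=1\) (under an assumed squared-exponential profile \(k_{1}(r)=e^{-r^{2}/2}\), which is not a choice made in the paper), the sketch below compares \(\sup_{x\in D}\int_{D}|k_{\lambda}(|x-x^{\prime}|)|^{q}\,dx^{\prime}\) with the predicted scaling \(\lambda\,A(1)\int_{0}^{\infty}k_{1}(r)^{q}\,dr=\lambda\sqrt{2\pi/q}\).

```python
import numpy as np

def Rq_q(lam, q, M=20001):
    """sup_x int_0^1 k_lambda(|x - x'|)^q dx' for k_1(r) = exp(-r^2 / 2) in d = 1.

    For this isotropic, decreasing profile on [0, 1] the supremum over x is
    attained at the midpoint x = 1/2, so it suffices to integrate there.
    """
    x_prime = np.linspace(0.0, 1.0, M)
    dx = x_prime[1] - x_prime[0]
    integrand = np.exp(-0.5 * ((0.5 - x_prime) / lam) ** 2) ** q
    return integrand.sum() * dx          # simple Riemann sum over [0, 1]

q = 0.5
for lam in [0.2, 0.1, 0.05, 0.02, 0.01]:
    predicted = lam * np.sqrt(2.0 * np.pi / q)   # lambda * A(1) * int_0^infty k_1(r)^q dr
    print(f"lambda={lam:5.2f}  computed={Rq_q(lam, q):.5f}  predicted={predicted:.5f}")
```

The two columns agree up to boundary effects that vanish as \(\lambda\to 0\), matching the asymptotic statement of the lemma.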
Next, we establish the scaling of the operator norm of the covariance operator.
**Lemma 4.2**.: _Under Assumption 2.4, it holds that_
\[\|\mathcal{C}\|\asymp\lambda^{d}A(d)\int_{0}^{\infty}k_{1}(r)r^{d-1}dr,\quad\lambda\to 0,\]
_where \(A(d)\) denotes the surface area of the unit sphere in \(\mathbb{R}^{d}\)._
Proof.: First, the operator norm can be upper bounded by
\[\|\mathcal{C}\|\leq\sup_{x\in D}\int_{D}|k\left(x,x^{\prime}\right)|dx^{\prime }\asymp\lambda^{d}A(d)\int_{0}^{\infty}k_{1}(r)r^{d-1}dr,\quad\lambda\to 0,\]
where the last step follows by Lemma 4.1.
For the lower bound, taking the test function \(\psi(x)\equiv 1\) yields that
\[\|\mathcal{C}\| =\sup_{\|\psi\|_{L^{2}}=1}\Big{(}\int_{D}\Big{(}\int_{D}k\left(x,x^{\prime}\right)\psi(x^{\prime})dx^{\prime}\Big{)}^{2}dx\Big{)}^{1/2}\geq \Big{(}\int_{D}\Big{(}\int_{D}k\left(x,x^{\prime}\right)dx^{\prime}\Big{)}^{2 }dx\Big{)}^{1/2}\] \[\overset{\rm(i)}{\geq}\frac{1}{\sqrt{\mathrm{Vol}(D)}}\int_{D \times D}k\left(x,x^{\prime}\right)dxdx^{\prime}\overset{\rm(ii)}{=}\int_{D \times D}k\left(x,x^{\prime}\right)dxdx^{\prime}\] \[\overset{\rm(iii)}{\asymp}\lambda^{d}A(d)\int_{0}^{\infty}k_{1}( r)r^{d-1}\,dr,\quad\lambda\to 0,\]
where (i) follows by Cauchy-Schwarz inequality, (ii) follows since \(\mathrm{Vol}(D)=1\) for \(D=[0,1]^{d}\), and (iii) follows from (4.1) with \(q=1\). This completes the proof.
Finally, we establish the scaling of the suprema of Gaussian fields in the small lengthscale regime.
**Lemma 4.3**.: _Under Assumption 2.4 (i), it holds that_
\[\mathbb{E}\bigg{[}\sup_{x\in D}u(x)\bigg{]}\asymp\sqrt{d}\int_{0}^{\infty} \sqrt{k(0)-k\left(c\sqrt{d}e^{-t^{2}}\right)}\,dt,\]
_where \(c\) is an absolute constant. Furthermore, if Assumption 2.4 (ii) also holds, then_
\[\mathbb{E}\bigg{[}\sup_{x\in D}u(x)\bigg{]}\asymp\sqrt{k(0)d\log\Big{(}\frac{ \sqrt{d}}{s\lambda}\Big{)}}\,,\quad\lambda\to 0,\]
_where \(s>0\) is the unique solution of \(k_{1}(s)=\frac{1}{2}k(0)\), which is independent of \(\lambda\)._
**Proof.** By Fernique's theorem [26] and the discussion in [48, Theorem 6.19], for the stationary Gaussian random field \(u\), it holds that
\[\mathbb{E}\bigg{[}\sup_{x\in D}u(x)\bigg{]}\asymp\int_{0}^{\infty}\sqrt{\log \mathcal{M}(D,\mathsf{d},\varepsilon)}\ d\varepsilon, \tag{4.2}\]
where \(\mathcal{M}(D,\mathsf{d},\varepsilon)\) denotes the smallest cardinality of an \(\varepsilon\)-net of \(D\) in the canonical metric \(\mathsf{d}\) given by
\[\mathsf{d}(x,x^{\prime}):=\big{(}\mathbb{E}[(u(x)-u(x^{\prime}))^{2}]\big{)}^{ 1/2}=\sqrt{2k\left(0\right)-2k\left(\left|x-x^{\prime}\right|\right)}<\sqrt{2k \left(0\right)},\quad x,x^{\prime}\in D.\]
This bound implies that \(\mathcal{M}(D,\mathsf{d},\varepsilon)=1\) for \(\varepsilon\geq\sqrt{2k\left(0\right)}\), and hence we can assume without loss of generality that \(\varepsilon<\sqrt{2k\left(0\right)}\) in the rest of the proof. Next, notice that
\[\mathsf{d}(x,x^{\prime})=\sqrt{2k\left(0\right)-2k\left(\left|x-x^{\prime} \right|\right)}\leq\varepsilon\quad\Longleftrightarrow\quad\left|x-x^{ \prime}\right|\leq k^{-1}(k(0)-\varepsilon^{2}/2),\]
where \(k^{-1}\) is the inverse function of \(k\). By the standard volume argument [50, Proposition 4.2.12],
\[\mathcal{M}(D,\mathsf{d},\varepsilon) =\mathcal{M}\big{(}D,\left|\cdot\right|,k^{-1}(k(0)-\varepsilon^{ 2}/2)\big{)}\] \[\geq\left(\frac{1}{k^{-1}(k(0)-\varepsilon^{2}/2)}\right)^{d} \frac{\operatorname{Vol}(D)}{\operatorname{Vol}(B_{2}^{d})}\geq\frac{1}{c_{1} }\left(\frac{1}{k^{-1}(k(0)-\varepsilon^{2}/2)}\right)^{d}\left(\frac{d}{2 \pi e}\right)^{d/2},\]
where we used that \(\operatorname{Vol}(D)=1\) and that, for the Euclidean unit ball \(B_{2}^{d}\), it holds that \(\operatorname{Vol}(B_{2}^{d})\leq c_{1}(2\pi e/d)^{d/2}\) for some absolute constant \(c_{1}>1\). On the other hand, using the fact that \(D=[0,1]^{d}\subset\sqrt{d}B_{2}^{d}\), as well as \(\mathcal{M}(B_{2}^{d},\left|\cdot\right|,\varepsilon)\leq(3/\varepsilon)^{d}\) for \(\varepsilon\leq 1\)[50, Corollary 4.2.13],
\[\mathcal{M}(D,\mathsf{d},\varepsilon)=\mathcal{M}\big{(}D,\left| \cdot\right|,k^{-1}(k(0)-\varepsilon^{2}/2)\big{)}\] \[\leq\mathcal{M}\big{(}B_{2}^{d},\left|\cdot\right|,d^{-1/2}k^{-1} (k(0)-\varepsilon^{2}/2)\big{)}\leq\left[\left(\frac{3}{k^{-1}(k(0)- \varepsilon^{2}/2)}\right)^{d}d^{\,d/2}\right]\lor 1,\quad\varepsilon<\sqrt{2k \left(0\right)}.\]
Therefore, (4.2) and the bounds we just established on the covering number \(\mathcal{M}(D,\mathsf{d},\varepsilon)\) imply that
\[\mathbb{E}\bigg{[}\sup_{x\in D}u(x)\bigg{]} \asymp\int_{0}^{\sqrt{2k\left(0\right)}}\sqrt{\log\left(\frac{1}{ c_{1}}\left(\frac{1}{k^{-1}(k(0)-\varepsilon^{2}/2)}\right)^{d}\left(\frac{d}{2 \pi e}\right)^{d/2}\right)\lor 0}\ d\varepsilon\] \[\asymp\sqrt{d}\int_{0}^{\sqrt{2k\left(0\right)}}\sqrt{\log\left( \frac{c\sqrt{d}}{k^{-1}(k(0)-\varepsilon^{2}/2)}\right)\lor 0}\ d\varepsilon.\]
Under the change of variable \(t:=\sqrt{\log\left(\frac{c\sqrt{d}}{k^{-1}(k(0)-\varepsilon^{2}/2)}\right)}\), we have \(\varepsilon=\sqrt{2\left(k(0)-k(c\sqrt{d}e^{-t^{2}})\right)}\) and
\[\mathbb{E}\bigg{[}\sup_{x\in D}u(x)\bigg{]} \asymp\sqrt{d}\int_{0}^{\infty}-t\ \frac{d}{dt}\left(\sqrt{k(0)-k(c\sqrt{d}e^{-t^{2}})}\right)\ dt\] \[=\sqrt{d}\left(\Big{[}-t\sqrt{k(0)-k(c\sqrt{d}e^{-t^{2}})}\Big{]}_{0}^{\infty}+\int_{0}^{\infty}\sqrt{k(0)-k(c\sqrt{d}e^{-t^{2}})}\ dt\right)\] \[=\sqrt{d}\int_{0}^{\infty}\sqrt{k(0)-k(c\sqrt{d}e^{-t^{2}})}\ dt,\]
where in the last equality we used that \(k(0)-k(c\sqrt{d}e^{-t^{2}})\asymp c\sqrt{d}e^{-t^{2}}\) as \(t\to\infty\) since \(k(r)\) is assumed to be differentiable at \(r=0\).
Furthermore, if Assumption 2.4 (ii) also holds, let \(s>0\) be the unique solution of \(k_{1}(s)=\frac{1}{2}k(0)\), which is independent of \(\lambda\). Then, for \(\lambda<c\sqrt{d}/s\),
\[\mathbb{E}\left[\sup_{x\in D}u(x)\right] \asymp\sqrt{d}\int_{0}^{\infty}\sqrt{k(0)-k_{\lambda}(c\sqrt{d}e^ {-t^{2}})}\ dt=\sqrt{d}\int_{0}^{\infty}\sqrt{k(0)-k_{1}(c\lambda^{-1}\sqrt{d }e^{-t^{2}})}\ dt\] \[=\sqrt{d}\left[\int_{t<\sqrt{\log\left(\frac{c\sqrt{d}}{s\lambda }\right)}}+\int_{t>\sqrt{\log\left(\frac{c\sqrt{d}}{s\lambda}\right)}}\right] \sqrt{k(0)-k_{1}(c\lambda^{-1}\sqrt{d}e^{-t^{2}})}\ dt=:I_{1}+I_{2}.\]
For the first term \(I_{1}\), we have
\[\sqrt{\frac{k(0)d}{2}}\sqrt{\log\left(\frac{c\sqrt{d}}{s\lambda}\right)}\leq I _{1}\leq\sqrt{k(0)d}\sqrt{\log\left(\frac{c\sqrt{d}}{s\lambda}\right)}.\]
Therefore, for any \(\lambda<c\sqrt{d}/s\), \(I_{1}\asymp\sqrt{k(0)d\log\left(\frac{\sqrt{d}}{s\lambda}\right)}\).
To bound the second term \(I_{2}\), we notice that there is some constant \(M>0\) such that \(k(0)-k_{1}(r)\leq M\,r\) for \(r\in[0,s]\), where \(M\) is independent of \(\lambda\). Therefore,
\[I_{2} =\sqrt{d}\int_{t>\sqrt{\log\left(\frac{c\sqrt{d}}{s\lambda}\right) }}\sqrt{k(0)-k_{1}(c\lambda^{-1}\sqrt{d}e^{-t^{2}})}\ dt\leq\sqrt{d}\int_{t> \sqrt{\log\left(\frac{c\sqrt{d}}{s\lambda}\right)}}\sqrt{M}\left(c\lambda^{-1 }\sqrt{d}e^{-t^{2}}\right)^{\frac{1}{2}}\ dt\] \[\lesssim d^{3/4}\lambda^{-\frac{1}{2}}\int_{t>\sqrt{\log\left( \frac{c\sqrt{d}}{s\lambda}\right)}}e^{-\frac{1}{2}t^{2}}\ dt\lesssim\sqrt{d} \left(\log\left(\frac{c\sqrt{d}}{s\lambda}\right)\right)^{-\frac{1}{2}}\to 0, \quad\lambda\to 0,\]
where we used the Gaussian tail bound \(\int_{x}^{\infty}e^{-\frac{1}{2}t^{2}}dt\leq\frac{1}{x}e^{-\frac{1}{2}x^{2}}\) for \(x>0\). To obtain a lower bound on \(I_{2}\), let \(s^{*}\) be the unique solution of \(k_{1}(s^{*})=\frac{3}{4}k(0)\), which is independent of \(\lambda\). Note that \(s^{*}<s\) since \(k_{1}\) is strictly decreasing. Then,
\[I_{2} =\sqrt{d}\int_{t>\sqrt{\log\left(\frac{c\sqrt{d}}{s\lambda}\right)}}\sqrt{k(0)-k_{1}(c\lambda^{-1}\sqrt{d}e^{-t^{2}})}\ dt\geq\sqrt{d}\int_{\sqrt{\log\left(\frac{c\sqrt{d}}{s\lambda}\right)}}^{\sqrt{\log\left(\frac{c\sqrt{d}}{s^{*}\lambda}\right)}}\sqrt{k(0)-k_{1}(c\lambda^{-1}\sqrt{d}e^{-t^{2}})}\ dt\] \[\geq\sqrt{d}\sqrt{k(0)-\frac{3}{4}k(0)}\left(\sqrt{\log\left(\frac{c\sqrt{d}}{s^{*}\lambda}\right)}-\sqrt{\log\left(\frac{c\sqrt{d}}{s\lambda}\right)}\ \right)\] \[=\frac{1}{2}\sqrt{k(0)d}\ \log\left(\frac{s}{s^{*}}\right)\left(\sqrt{\log\left(\frac{c\sqrt{d}}{s^{*}\lambda}\right)}+\sqrt{\log\left(\frac{c\sqrt{d}}{s\lambda}\right)}\ \right)^{-1}\gtrsim\sqrt{d}\left(\log\left(\frac{c\sqrt{d}}{s^{*}\lambda}\right)\right)^{-\frac{1}{2}}\to 0,\quad\lambda\to 0.\]
Therefore, \(I_{2}\asymp\sqrt{d}\left(\log\left(\frac{c\sqrt{d}}{\lambda}\right)\right)^{- \frac{1}{2}}\to 0\) as \(\lambda\to 0\). Consequently,
\[\mathbb{E}\left[\sup_{x\in D}u(x)\right]\asymp I_{1}+I_{2}\asymp\sqrt{k(0)d \log\left(\frac{\sqrt{d}}{s\lambda}\right)}\,\quad\lambda\to 0.\qed\]
**Remark 4.4**.: Lemma 4.3 admits a clear heuristic interpretation. Consider a uniform mesh \(\mathcal{P}\) of the unit cube \(D=[0,1]^{d}\) comprising \((1/\lambda)^{d}\) points that are distance \(\lambda\) apart. For a random field \(u(x)\) with lengthscale \(\lambda\), the values \(u(x_{i})\) and \(u(x_{j})\) at mesh points \(x_{i}\neq x_{j}\in\mathcal{P}\) are roughly uncorrelated. Thus, \(\{u(x_{i})\}_{i=1}^{\lambda^{-d}}\) are roughly i.i.d. univariate Gaussian random variables, and, for small \(\lambda\), we may approximate
\[\mathbb{E}\bigg{[}\sup_{x\in D}u(x)\bigg{]}\approx\mathbb{E}\bigg{[}\sup_{x_{i }\in\mathcal{P}}u(x_{i})\bigg{]}\approx\sqrt{\log(\lambda^{-d})}.\]
This heuristic derivation matches the scaling of the expected supremum with \(\lambda\) in Lemma 4.3.
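This heuristic is also easy to probe numerically. The sketch below (an illustration under an assumed squared-exponential covariance in \(d=1\), not a computation from the paper) draws fields of decreasing lengthscale on a grid and compares the empirical mean of \(\sup_{x}u(x)\) with \(\sqrt{\log(1/\lambda)}\); the two should agree up to a constant, as in Lemma 4.3.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_sup(lam, M=2000, n_rep=200):
    """Monte Carlo estimate of E[sup_x u(x)] for a stationary Gaussian field on [0, 1]
    with covariance k(x, x') = exp(-|x - x'|^2 / (2 lam^2)), sampled on an M-point grid."""
    x = np.linspace(0.0, 1.0, M)
    K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / lam**2)
    L = np.linalg.cholesky(K + 1e-8 * np.eye(M))      # jitter for numerical stability
    U = rng.standard_normal((n_rep, M)) @ L.T          # n_rep independent sample paths
    return U.max(axis=1).mean()

for lam in [0.2, 0.1, 0.05, 0.02]:
    est = mean_sup(lam)
    ref = np.sqrt(np.log(1.0 / lam))
    print(f"lambda={lam:5.2f}  E[sup u] ~ {est:.3f}  sqrt(log(1/lambda)) = {ref:.3f}")
```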
We are now ready to prove Theorem 2.5.
Proof of Theorem 2.5.: In this proof we treat \(d\) as a constant. Notice that under Assumption 2.1 (i) and Assumption 2.4 (i), it holds that \(\operatorname{Tr}(C)=\int_{D}k(x,x)\,dx=k(0)\operatorname{Vol}(D)=1\). Moreover, Lemma 4.2 shows that \(\|C\|\asymp\lambda^{d}\) as \(\lambda\to 0\). Plugging the former two results into (2.6) yields the bound (2.4). For the thresholded estimator, we apply Theorem 2.2 with an appropriate choice of the constant \(c_{0}\in[1,\sqrt{N}]\). By Lemma 4.3, \(\mathbb{E}[\sup_{x\in D}u(x)]\asymp\sqrt{\log(\lambda^{-d})}\) as \(\lambda\to 0\). We assume that \(N\geq c_{0}^{2}\left(\mathbb{E}[\sup_{x\in D}u(x)]\right)^{2}\asymp\log( \lambda^{-d})\), so that the thresholding parameter satisfies
\[\rho_{N}=\frac{c_{0}}{\sqrt{N}}\mathbb{E}\bigg{[}\sup_{x\in D}u(x)\bigg{]}\leq 1.\]
It follows that
\[\begin{split}\rho_{N}e^{-cN(\rho_{N}\wedge\rho_{N}^{2})}=\rho_{N} e^{-cN\rho_{N}^{2}}&=\rho_{N}e^{-cc_{0}^{2}(\mathbb{E}[\sup_{x\in D}u(x )])^{2}}\\ &=\rho_{N}e^{-cc^{\prime}c_{0}^{2}d\log(1/\lambda)}=\rho_{N} \lambda^{cc^{\prime}c_{0}^{2}d}\leq\rho_{N}^{1-q}\lambda^{cc^{\prime}c_{0}^{2 }d},\end{split} \tag{4.3}\]
where \(c^{\prime}\) is an absolute constant. On the other hand, using Lemma 4.1 we have that
\[R_{q}^{q}\rho_{N}^{1-q}\asymp\rho_{N}^{1-q}\lambda^{d}A(d)\int_{0}^{\infty}k_ {1}(r)^{q}r^{d-1}dr. \tag{4.4}\]
Comparing (4.3) with (4.4), we see that if \(c_{0}\) is chosen so that \(cc^{\prime}c_{0}^{2}>1\), then the upper bound \(R_{q}^{q}\rho_{N}^{1-q}+\rho_{N}e^{-cN(\rho_{N}\wedge\rho_{N}^{2})}\) in Theorem 2.2 is dominated by \(R_{q}^{q}\rho_{N}^{1-q}\) as \(\lambda\to 0\). Therefore, for sufficiently small \(\lambda\),
\[\mathbb{E}\|\widehat{C}_{\widehat{\rho}_{N}}-C\|\lesssim R_{q}^{q}\rho_{N}^{1 -q}\leq\|C\|\,c(d,q)\left(\frac{\log(\lambda^{-d})}{N}\right)^{\frac{1-q}{2}},\]
where \(c(d,q)\) is a constant that only depends on \(d\) and \(q\).
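The mechanism behind Theorem 2.5 can be illustrated end to end on a grid. The sketch below is a simplified, assumption-laden illustration (squared-exponential covariance, \(d=1\), \(c_{0}=1\), operator norms approximated by the spectral norm of the discretized integral operator): it forms a data-driven threshold \(\widehat{\rho}_{N}\) from the sampled suprema, hard-thresholds the sample covariance function, and compares the resulting error with that of the plain sample covariance as the lengthscale shrinks.

```python
import numpy as np

rng = np.random.default_rng(2)

def op_norm(K_mat, h):
    """Operator norm of the integral operator with kernel values K_mat on a uniform
    grid of spacing h, approximated by the spectral norm of h * K_mat."""
    return h * np.linalg.norm(K_mat, 2)

def experiment(lam, N=200, M=400, c0=1.0):
    x = np.linspace(0.0, 1.0, M)
    h = x[1] - x[0]
    K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / lam**2)   # true kernel
    L = np.linalg.cholesky(K + 1e-8 * np.eye(M))
    U = rng.standard_normal((N, M)) @ L.T                        # N sample paths
    K_hat = U.T @ U / N                                          # sample covariance function
    S = U.max(axis=1).mean()                                     # proxy for E[sup_x u(x)]
    rho_hat = c0 * max(1.0 / N, S / np.sqrt(N), S**2 / N)        # data-driven threshold
    K_thr = np.where(np.abs(K_hat) >= rho_hat, K_hat, 0.0)       # hard thresholding
    return op_norm(K_hat - K, h), op_norm(K_thr - K, h), op_norm(K, h)

for lam in [0.2, 0.1, 0.05, 0.02]:
    e_plain, e_thr, normC = experiment(lam)
    print(f"lambda={lam:5.2f}  ||C||~{normC:.4f}  plain err={e_plain:.4f}  thresholded err={e_thr:.4f}")
```

Under these assumptions the thresholded error is typically a much smaller fraction of \(\|C\|\) than the plain sample-covariance error once \(\lambda\) is small, which is the qualitative content of the improvement in sample complexity discussed in the paper.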
## 5 Application in Ensemble Kalman Filters
Proof of Theorem 2.7.: First, we write
\[|\upsilon_{n}-\upsilon_{n}^{\star}|=|(\mathcal{K}(\widehat{C})-\mathcal{K}(C))(y-\mathcal{A}u_{n}-\eta_{n})|\leq\|\mathcal{K}(\widehat{C})-\mathcal{K}(C)\|\,|y-\mathcal{A}u_{n}-\eta_{n}|. \tag{5.1}\]
For the first term in (5.1), it follows by the continuity of the Kalman gain operator [36, Lemma 4.1] that
\[\|\mathcal{K}(\widehat{C})-\mathcal{K}(C)\|\leq\|\widehat{C}-C\|\| \mathcal{A}\|\|\Gamma^{-1}\|\Big{(}1+\|C\|\|\mathcal{A}\|^{2}\|\Gamma^{-1}\| \Big{)}. \tag{5.2}\]
Combining the inequalities (5.1), (5.2), and Theorem 2.5 gives that
\[\mathbb{E}\big{[}|\upsilon_{n}-\upsilon_{n}^{\star}|\mid u_{n}, \eta_{n}\big{]}\lesssim\|\mathcal{A}\|\|\Gamma^{-1}\|\|y-\mathcal{A}u_{n}-\eta _{n}\|\,\mathbb{E}\|\widehat{C}-C\|\lesssim c\left[c(d)\left(\sqrt{\frac{ \lambda^{-d}}{N}}\vee\frac{\lambda^{-d}}{N}\right)\right],\]
where \(c=\|\mathcal{A}\|\|\Gamma^{-1}\|\|C\||y-\mathcal{A}u_{n}-\eta_{n}|\) and \(c(d)\) is a constant that only depends on \(d\). Applying the same argument to the perturbed observation EnKF update with localization, \(\upsilon_{n}^{\rho}\), Theorem 2.5 gives that
\[\mathbb{E}\big{[}|\upsilon_{n}^{\rho}-\upsilon_{n}^{\star}|\mid u _{n},\eta_{n}\big{]}\lesssim c\left[c(d,q)\left(\frac{\log(\lambda^{-d})}{N} \right)^{\frac{1-q}{2}}\right],\]
where \(c=\|\mathcal{A}\|\|\Gamma^{-1}\|\|C\||y-\mathcal{A}u_{n}-\eta_{n}|\) and \(c(d,q)\) is a constant that only depends on \(d\) and \(q\).
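For concreteness, the sketch below implements a standard perturbed-observation EnKF analysis step on a grid, once with the plain sample covariance and once with its thresholded version. The finite-dimensional Kalman gain \(K(C)=CA^{\top}(ACA^{\top}+\Gamma)^{-1}\), the pointwise observation operator, and all numerical values are assumptions made for this illustration; the paper's gain operator \(\mathcal{K}\) is used only abstractly above.

```python
import numpy as np

rng = np.random.default_rng(3)

def kalman_gain(C, A, Gamma):
    """Standard finite-dimensional Kalman gain K(C) = C A^T (A C A^T + Gamma)^{-1}."""
    S = A @ C @ A.T + Gamma
    return C @ A.T @ np.linalg.inv(S)

def enkf_analysis(U, y, A, Gamma, threshold=None):
    """Perturbed-observation EnKF analysis step.

    U: (N, M) ensemble of discretized states; y: (m,) observation;
    A: (m, M) observation operator; Gamma: (m, m) noise covariance.
    If `threshold` is given, the sample covariance is hard-thresholded first.
    """
    N = U.shape[0]
    Uc = U - U.mean(axis=0)
    C_hat = Uc.T @ Uc / N
    if threshold is not None:
        C_hat = np.where(np.abs(C_hat) >= threshold, C_hat, 0.0)
    K = kalman_gain(C_hat, A, Gamma)
    eta = rng.multivariate_normal(np.zeros(len(y)), Gamma, size=N)  # observation perturbations
    return U + (y + eta - U @ A.T) @ K.T

# Toy setup: a short-lengthscale prior field observed at a few grid points.
M, N, lam, m = 200, 50, 0.05, 10
x = np.linspace(0.0, 1.0, M)
C = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / lam**2)
L = np.linalg.cholesky(C + 1e-8 * np.eye(M))
U = rng.standard_normal((N, M)) @ L.T
A = np.zeros((m, M)); A[np.arange(m), np.linspace(10, M - 10, m, dtype=int)] = 1.0
Gamma = 0.1 * np.eye(m)
truth = rng.standard_normal(M) @ L.T
y = A @ truth + rng.multivariate_normal(np.zeros(m), Gamma)

S = U.max(axis=1).mean()
rho = max(1.0 / N, S / np.sqrt(N), S**2 / N)          # threshold of Theorem 2.5 with c0 = 1
for label, thr in [("plain", None), ("thresholded", rho)]:
    Ua = enkf_analysis(U, y, A, Gamma, threshold=thr)
    print(label, "posterior-mean RMSE:", np.sqrt(np.mean((Ua.mean(axis=0) - truth) ** 2)))
```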
## 6 Conclusions, Discussion, and Future Directions
This paper has studied thresholded estimation of sparse covariance operators, lifting the theory of sparse covariance matrix estimation from finite to infinite dimension. We have established non-asymptotic bounds on the estimation error in terms of the sparsity level of the covariance and the expected supremum of the field. In the challenging regime where the correlation lengthscale is small, we have shown that estimation via thresholding achieves an exponential improvement in sample complexity over the standard sample covariance estimator. As an application of the theory, we have demonstrated the advantage of using thresholded covariance estimators within ensemble Kalman filters. While our focus has been on studying the statistical benefit of estimation via thresholding, sparsifying the covariance estimator can also lead to significant computational speed-up in downstream tasks [17, 20, 27].
As mentioned in the discussion of Theorem 2.5, a natural question is whether the convergence rate of our thresholded estimator is minimax optimal. For \(\ell_{q}\)-sparse covariance matrix estimation, [15] established the minimax optimality of thresholded estimators. Inspired by the correspondence between our error bound (2.5) and their optimal rate, we conjecture that our thresholded estimator is also minimax optimal in the infinite-dimensional setting. Another interesting future direction is to relax the assumption of stationarity and generalize our theory to estimating _nonstationary_ random fields. In finite dimension, [12] proposed adaptive thresholding estimators for sparse covariance matrix estimation that account for variability across individual entries. Other interesting extensions include covariance operator estimation for heavy-tailed distributions [1] and robust covariance operator estimation [22, 28]. Finally, connections with the thriving topics of infinite-dimensional regression [38] and operator learning [21, 33] will be explored in future work.
**Acknowledgments** The authors are grateful for the support of NSF DMS-2027056, DOE DE-SC0022232, and the BBVA Foundation.
|
2305.03636 | Dosimetric characterization of single- and dual-port temporary tissue
expanders for postmastectomy radiotherapy using Monte Carlo methods | Purpose: The aim of this work was, a) to assess two treatment planning
strategies for accounting CT-artifacts introduced by temporary
tissue-expanders(TTEs); b) to evaluate the dosimetric impact of two
commercially available and one novel TTE. MethodsThe CT artifacts were managed
using two strategies. 1) Identifying the metal in the RayStation treatment
planning software (TPS) using image window level adjustments, delineate a
contour enclosing the artifact, and setting the density of the surrounding
voxels to unity (RS1). 2) Registering a geometry template with dimensions and
materials from the TTEs (RS2). Both strategies were compared for DermaSpan,
AlloX2, and AlloX2-Pro TTEs using Collapsed-Cone-Convolution (CCC) in
RayStation TPS, Monte Carlo simulations (MC) using TOPAS, and films. Wax slab
phantoms with TTEs and breast phantoms with TTEs balloons were made and
irradiated with a 6 MV AP beam and partial arc, respectively. Results: For the
wax slab phantoms, the dose differences between RS1 and RS2 were 0.5% for
DermaSpan and AlloX2 but 3% for AlloX2-Pro. From TOPAS simulations of RS2, the
impact in dose distributions caused by the magnet attenuation was (6.4+-0.4)%,
(4.9+-0.7)%, and (2.0+-0.9)% for DermaSpan, AlloX2, and AlloX2-Pro. With breast
phantoms, maximum differences in DVH parameters between RS1 and RS2 were as
follows. For AlloX2 at the posterior region: (2.1+-1.0)%, (1.9+-1.0)% and
(1.4+-1.0)% for D1, D10, and average dose, respectively. For AlloX2-Pro at the
anterior region (-1.0+-1.0)%, (-0.6+-1.0)% and (-0.6+-1.0)% for D1, D10 and
average dose, respectively. The impact in D10 caused by the magnet was at most
(5.5+-1.0)% and (-0.8+-1.0)% for AlloX2 and AlloX2-Pro, respectively.
Conclusion: This study showed that the highest differences with respect to
measurements occurred with RS1 and can be mitigated if a template with the
actual port geometry and materials is used. | Jose Ramos-Méndez, Catherine Park, Manju Sharma | 2023-05-05T15:48:34Z | http://arxiv.org/abs/2305.03636v1 | Dosimetric characterization of single- and dual-port temporary tissue expanders for postmastectomy radiotherapy using Monte Carlo methods
###### Abstract
The aim of this work was two-fold: a) to assess two treatment planning strategies for accounting for CT artifacts introduced by temporary tissue expanders (TTEs); b) to evaluate the dosimetric impact of two commercially available and one novel TTE.
The CT artifacts were managed using two strategies. 1) identifying the metal in the RayStation treatment planning software (TPS) using image window-level adjustments, delineate a contour enclosing the artifact, and setting the density of the surrounding voxels to unity (RS1). 2) Registering a geometry template with dimensions and materials from the TTEs (RS2). Both strategies were compared for DermaSpan, AlloX2, and AlloX2-Pro TTEs using Collapsed Cone Convolution (CCC) in RayStation TPS, Monte Carlo simulations (MC) using TOPAS, and film measurements. Wax slab phantoms with metallic ports and breast phantoms with TTEs balloons were made and irradiated with a 6 MV AP beam and partial arc, respectively. Dose values along the AP direction calculated with CCC (RS2) and TOPAS (RS1 and RS2) were compared with film measurements. The impact in dose distributions was evaluated with RS2 by comparing TOPAS simulations with and without the metal port.
## Results
For the wax slab phantoms, the dose differences between RS1 and RS2 were 0.5% for DermaSpan and AlloX2 but 3% for AlloX2-Pro. From TOPAS simulations of RS2, the impact in dose distributions caused by the magnet attenuation was (6.4 \(\pm\) 0.4)%, (4.9 \(\pm\) 0.7)%, and (2.0 \(\pm\) 0.9)% for DermaSpan, AlloX2, and AlloX2-Pro, respectively. With breast phantoms, maximum differences in DVH parameters between RS1 and RS2 were as follows. For AlloX2 at the posterior region: (2.1 \(\pm\) 1.0)%, (1.9 \(\pm\) 1.0)% and (1.4 \(\pm\) 1.0)% for D1, D10, and average dose, respectively. For AlloX2-Pro at the anterior region: (-1.0 \(\pm\) 1.0)%, (-0.6 \(\pm\) 1.0)% and (-0.6 \(\pm\) 1.0)% for D1, D10 and average dose, respectively. The impact in D10 caused by the magnet was at most (5.5 \(\pm\) 1.0)% and (-0.8 \(\pm\) 1.0)% for AlloX2 and AlloX2-Pro, respectively.
**Conclusion:** Two strategies for accounting for CT artifacts from three breast TTEs were assessed using CCC, MC, and film measurements. This study showed that the highest differences with respect to measurements occurred with RS1 and can be mitigated if a template with the actual port geometry and materials is used.
###### Keywords
PMRT, temporary-tissue-expanders, Monte Carlo-TOPAS, high-density metal artifacts, collapsed cone convolution algorithm, breast cancer, radiation effects
## 1 Introduction
Post-mastectomy radiation treatment (PMRT) is selectively recommended for patients with locally advanced and/or high-risk biologically aggressive breast cancers [(1)]. For patients who undergo prosthetic breast reconstruction, radiation increases the risk of adverse effects including capsular contracture, scarring at the implant-tissue junction, development of seroma and dehiscence of the skin incision [(2)]. As such, a two-stage reconstruction using a temporary tissue expander (TTE), followed by PMRT and then delayed final prosthetic reconstruction, is often preferred [(3)]. The TTEs help preserve the breast skin and organ at risk contours, improving the radiotherapy treatment planning, which in turn alleviates the complication risks. Most TTEs consist of an injection port through which a saline solution is injected to expand the surrounding skin. The port consists of a central high-density magnet enclosed in an encasing to locate the injection site [(4)]. In addition, suction drains are routinely placed to drain the seroma [(5)]. The different TTEs such as CPX(r) (Mentor, Irvine, CA, USA), Natrelle(r) (Allergan Inc., Santa Barbara, CA, USA) and DermaSpan (Sientra, Inc., Santa Barbara, CA, USA) have a single port with a high-density magnetic disk placed in a high-density encasing. More recently, AlloX2 and AlloX2-Pro (Sientra Inc., Santa Barbara, CA, USA) breast TTEs were introduced with a dual port system. One port is used for traditional saline injection, and the second facilitates fluid drainage. This feature of dual ports enables independent management of postoperative seroma and thereby reduces the rate of infection by 7.8%, as shown retrospectively for the AlloX2 TTE [(6)].
The PMRT is delivered in conventional 2 Gy per fraction for a total dose of 50 Gy over five weeks, or with more modern hypofractionation techniques over 3 weeks. The 3D CT data is used to delineate tumors and organs at risk (OAR), and the electron density information in the Hounsfield units (HU) of the CT data is used in the calculation of dose distributions. The presence of high-density magnets poses challenges to accurate treatment planning and delivery. Some key challenges are: (1) the increased scatter dose at the skin surface may lead to skin and subcutaneous toxicity varying from mild erythema to skin fibrosis or skin dyspigmentation; (2) the tissue attenuation can lead to cold spots or underdosage of the planning target volume; (3) the presence of an implant or other high-density materials leads to streaking artifacts that impede the accurate delineation of tumors and OARs. In addition, due to a limited value range of HU to electron density tables in standard CT systems, the density values of TTEs are not reconstructed correctly in the CT data [(7)], calling into question the accuracy of the computed dose distribution models.
The dosimetric impact of single metal ports in PMRT has been examined in several studies [(8; 9; 10; 11)]. Results largely depend on the treatment modality. For example, for 3D-CRT using single 6 MV and 15 MV photon beams, the dose perturbations are reported to be between 5 and 30% [(9; 11; 12)] and 16% [(11)], respectively. For VMAT, differences below 6% have been reported [(12)]; however, some studies have reported a negligible difference [(13; 14)].
The dual ports cover a significant amount of the treatment volume and perturb the radiation treatment field with increased scatter dose and tissue attenuation beneath the device. To the best of our knowledge, there is no literature on the dose perturbations caused by dual metal ports in PMRT. Therefore, this characterization study aims at a detailed comparison of the three TTEs: the single-port DermaSpan, the dual-port AlloX2, and the novel AlloX2-Pro. We provide this comparison using flat and breast phantom geometries and six clinical cases. In addition, the dose computed by the collapsed-cone convolution (CCC) algorithm v5.5 in RayStation TPS is compared with TOPAS Monte Carlo Tool calculations and experimental Gafchromic film measurements.
## 2 Methods
### TrueBeam phase space verification
Fifty phase space files containing the positions of particles, angular momenta and kinetic energies generated by Monte Carlo simulations of a 6 MV TrueBeam Linac were obtained from MyVarian at www.myvarian.com/monotecarlo. The total number of primary histories per phase space was \(10^{9}\) and was generated without any variance reduction technique. The phase spaces were scored at a plane positioned at 73.3 cm from the Linac isocenter, upstream of any moving parts of the Linac treatment head. A comparison was performed between the percentage depth-dose and lateral dose distributions at several depths calculated in water and measured data obtained at the time of commissioning for a
TrueBeam Linac at our institution. For that, two open field setups at 3 x 3 cm\({}^{2}\) and 10 x 10 cm\({}^{2}\) defined at 100 cm SSD were used. The water phantom had dimensions of 20 x 20 x 35 cm\({}^{3}\) with a voxel resolution of 1 x 1 x 0.5 mm\({}^{3}\); the highest resolution was used along the beam direction. The following linac devices were included in the simulation: jaws, base plate, 120 Millennium MLC, and mylar tray. The geometry details were obtained from the vendor. The absorbed dose per primary history retrieved at 10 cm depth was used to scale the simulations to the dose calibration conditions at our institution: 1 cGy/MU at a depth of maximum dose for a 10 x 10 cm\({}^{2}\) field defined at 100 cm SSD. An exponential fit was applied to the calculated PDD in the range of 5 to 15 cm to retrieve the calculated absorbed dose at 10 cm depth.
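The normalization step just described amounts to a one-line fit. The sketch below is illustrative only; the depth grid, dose values, and measured calibration number are hypothetical placeholders, not the commissioning data. It fits \(D(z)=D_{0}e^{-\mu z}\) to the calculated PDD between 5 and 15 cm by linear regression on the logarithm, evaluates the fit at 10 cm depth, and uses it to rescale the per-history Monte Carlo dose to the clinical calibration.

```python
import numpy as np

def dose_at_10cm_from_fit(depth_cm, dose_per_history):
    """Fit dose(z) = D0 * exp(-mu * z) for 5 cm <= z <= 15 cm and evaluate at z = 10 cm."""
    mask = (depth_cm >= 5.0) & (depth_cm <= 15.0)
    slope, intercept = np.polyfit(depth_cm[mask], np.log(dose_per_history[mask]), 1)
    return np.exp(intercept + slope * 10.0)

# Hypothetical Monte Carlo PDD (dose per primary history) on a depth grid.
depth = np.arange(0.25, 30.0, 0.5)                                       # cm
dose_mc = 2.5e-17 * np.exp(-0.055 * depth) * (1.0 - np.exp(-3.0 * depth))  # made-up shape
d10_mc = dose_at_10cm_from_fit(depth, dose_mc)

# Rescale so the simulation reproduces the clinical calibration, here expressed
# through an assumed measured dose at 10 cm depth per MU (hypothetical number).
d10_measured_cGy_per_MU = 0.667
scale = d10_measured_cGy_per_MU / d10_mc   # converts simulated dose-per-history to cGy/MU
print("scaling factor:", scale)
```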
The Monte Carlo simulations were performed with TOPAS version 3.7 (15, 16) built on top of Geant4 toolkit version 10.07 patch 3 (17). The physics list was the electromagnetic module called "g4em-standard_opt4", which was described and benchmarked for its application in radiotherapy as reported elsewhere (18). For all dose calculations, azimuthal particle redistribution with a split number of 50 (19) was used through the geometrical particle split technique available in TOPAS (20). The statistical uncertainty of the dose distributions was 0.5% or better in all simulated cases.
### Breast tissue expander geometries
Breast TTEs consisted of a silicon bag filled with saline solution containing one or two draining or filling ports with a high-density magnet embedded to allow its localization. Three breast TTEs were used in this work: two commercially available (DermaSpan\({}^{\text{TM}}\) and AlloX2®) and a novel TTE (AlloX2-Pro; Sientra, Inc.). The geometry details and materials of the ports obtained from Sientra Inc. are presented in Figure 1. The DermaSpan model consisted of a single titanium (\(\rho\)=4.54 g/cm\({}^{3}\)) port with a neodymium (\(\rho\)=7.6 g/cm\({}^{3}\)) magnet enclosed. The AlloX2 model consisted of two titanium ports with one neodymium magnet enclosed in each port. The AlloX2-Pro model consisted of two ports made of PEEK material (\(\rho\)=1.3 g/cm\({}^{3}\)), with a single neodymium magnet located between the ports. The geometry and densities from all three ports were saved as contour templates in RayStation.
### Strategies for handling metal artifacts
The CT artifacts caused by the metal ports are managed using two density override strategies at our institution. The first strategy (hereafter called RS1) consists of identifying the metal by adjusting the image window-level to display only the brightest region, assumed to be occupied by the metal port. Subsequently, a contour is delineated enclosing the artifact and the density of surrounding voxels is set to unity. The second strategy (hereafter called RS2) consists of rigidly registering a geometry template with the dimensions, materials, and densities of the corresponding metal ports obtained from the vendor; the density of voxels outside the port geometry is set to unity. Both strategies were compared using Collapsed Cone Convolution (CCC) version 5.5 in RayStation version 11A, and TOPAS Monte Carlo simulations. The resolution of the dose grid for RayStation and TOPAS calculations was 2 x 2 x 2 mm\({}^{3}\). Calculated results were compared with Gafchromic film (Ashland Inc.) measurements using two irradiation setups as described below.
### Wax slab phantom setup
A setup consisting of a wax slab phantom irradiated by an AP field was configured to assist in the validation of TOPAS simulations for each TTE port. For each TTE, the ports were stripped from the silicon bag and embedded in a slab phantom made of wax (\(\rho\)=0.92 g/cm\({}^{3}\)). The phantom had
dimensions of 30 x 30 x 1.7 cm\({}^{3}\). An irradiation setup was configured consisting of the slab phantom stacked between 1.5 cm of plastic water on top and 10 cm of plastic water underneath, see Figure 2. An iterative metal artifact reduction (iMAR) (Siemens Medical System) algorithm was used to reduce the high-density metal artifacts. The setup was simulated and exported to RayStation TPS for planning. The plan consisted of a 6 MV field of 15 x 10 cm\({}^{2}\) defined at 100 cm SSD, with 500 MU delivered in the AP direction. The remaining metal artifacts were handled with the two strategies described in section 2.3. The setup was reproduced with TOPAS simulations, which included the actual port geometries shown in Figure 1. The ports were aligned to the metal artifact using the RayStation contours from RS2 as a frame of reference. The overlapping of geometries was handled with the Layered Mass Geometry feature (21). Film dosimetry was performed by placing Gafchromic films at different positions as shown in Figure 2.
### Breast tissue expander phantom setup
The effect of using multiple gantry angles was evaluated for the AlloX2 and AlloX2-Pro TTEs. The partial arc irradiations were performed on the ports using an open field as detailed below. This setup was representative of a worst-case scenario where multiple x-ray beams interact with the metal port for most of the irradiation time.
The AlloX2 and AlloX2-Pro TTEs were irradiated in their standard configuration used during PMRT, i.e., embedded in the silicon bag filled with water. The silicon bag wall (~1.1 g/cm\({}^{3}\)) was about 1 mm thick and had a negligible effect on the dose distributions. In this work, water was used instead of saline solution, which was shown to be dosimetrically equivalent for MV radiation. However, it has a dosimetric impact of about 5% for kV photons, as shown in (22). A customized breast phantom holder and bolus (5 mm thickness) were made with wax to immobilize the phantom for reproducibility. The bolus was placed on top of a thermoplastic mesh covering the breast tissue expander, with the air gaps filled with superflab bolus as best as possible. CT images were obtained with the iMAR algorithm (section 2.3) and exported to RayStation TPS for planning. The plan consisted of a 6 MV conformal arc (3 x 3 cm\({}^{2}\)), gantry angles from 90 to 270 degrees in the counterclockwise direction, delivering 355 MU in a single fraction, see Figure 2. The partial arc configuration considered the contribution of parallel opposed fields at 90 and 270 deg. Contours were drawn for the analysis, which included the silicon bag and a 3 mm expansion of the silicon bag wall split into four contours. These contours covered the anterior (C_Anterior), posterior (C_Posterior), left (C_Left) and right (C_Right) directions of the beam. Pieces of film were positioned at several depths as shown in Figure 2. The films for analysis were 1 x 1 cm\({}^{2}\) and were read at least 24 hours after the irradiation.
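The DVH parameters reported below for these contours (D1, D10, and average dose; Tables 1, 2) reduce to percentile operations on the dose values inside a contour mask. The following generic sketch with synthetic arrays (not code tied to RayStation or TOPAS output) illustrates the computation.

```python
import numpy as np

def dvh_metrics(dose, mask):
    """D1 and D10 (minimum dose to the hottest 1% / 10% of the contour volume) and mean dose.

    dose: 3D dose grid; mask: boolean array of the same shape selecting the contour.
    Assumes uniform voxel volume, as in a regular 2 x 2 x 2 mm^3 dose grid.
    """
    d = dose[mask]
    return {
        "D1":   np.percentile(d, 99.0),
        "D10":  np.percentile(d, 90.0),
        "Davg": d.mean(),
    }

# Synthetic example: a 3D dose grid and a spherical-shell "wall" contour.
rng = np.random.default_rng(4)
dose = 2.0 + 0.1 * rng.standard_normal((60, 60, 60))
z, y, x = np.ogrid[:60, :60, :60]
r = np.sqrt((x - 30) ** 2 + (y - 30) ** 2 + (z - 30) ** 2)
wall = (r > 18) & (r < 20)
print(dvh_metrics(dose, wall))
```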
## 3 Results
### TrueBeam phase space verification
In Figure 3, the measured percentage depth-dose (PDD) and crossline dose profiles are compared with the ones calculated with the Varian phase spaces for two open fields. For the crossline profiles, several curves are displayed at depths of 1.5 cm, 10 cm, and 20 cm. The bottom of each panel displays the \(\gamma\)-index value resulting from the comparison between TOPAS and the measurements. As
depicted, for all panels the \(\gamma\)-index is below unity for the 1%/1 mm (PDD) and 2%/1 mm (crossline) criteria.
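For readers less familiar with the \(\gamma\)-index, a brute-force 1D global evaluation of a calculated curve against measured film data can be sketched as follows; the array inputs and criteria defaults are assumptions for illustration, not the evaluation tool used in this work.

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_crit=0.01, dist_crit=1.0):
    """Brute-force 1D global gamma index.
    x_* in mm, d_* in consistent dose units; dose_crit is a fraction of the
    maximum reference dose, dist_crit is in mm."""
    dose_norm = dose_crit * d_ref.max()
    gamma = np.empty(len(x_ref))
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        dose_term = (d_eval - dr) / dose_norm
        dist_term = (x_eval - xr) / dist_crit
        gamma[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return gamma  # values below 1 indicate agreement within the chosen criteria
```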
### Wax slab phantom setup
Panels of Figure 4 show depth-dose profiles for the configuration consisting of the breast tissue expander ports embedded in a wax slab phantom. For DermaSpan and AlloX2-Pro the central profiles are shown, whereas for AlloX2 the profiles crossing the injection port are shown. Film measurements are shown with symbols. At the bottom of the panels, the less restrictive of the percentage difference and the distance-to-agreement with respect to the measured data is shown. The vertical lines delimit the region occupied by the wax slab phantom. For the DermaSpan port (panel A), both RS1 and RS2 were within 2%/1 mm in the buildup region
and distal falloff. A much larger difference was seen in the region immediately downstream of the port. For the AlloX2 (panel B), both RS1 and RS2 were within 2%/1 mm at the buildup and distal falloff. Lastly, for the AlloX2-Pro (panel C), at the buildup region both RS1 and RS2 were within 2%/1 mm of the measurements. At the distal falloff, RS1 differed from the film measurements by 2.7%, whereas for RS2 the differences were within 1%. For all three ports, TOPAS simulations were within (2% \(\pm\) 0.5%)/1 mm in the buildup and distal falloff regions.
The effect of the port on the depth-dose distributions outside the wax phantom was calculated with TOPAS by comparing simulations against the same setup with the port substituted by wax. Results are shown in Figure 5. As depicted, the underdosages caused by the attenuation from the magnets in the ports were (6.4 \(\pm\) 0.4)%, (4.9 \(\pm\) 0.7)% and (2.0 \(\pm\) 0.9)% for DermaSpan, AlloX2 and AlloX2-Pro, respectively. In the region proximal to the beam entrance, an overdose caused by backscatter radiation was observed for the AlloX2-Pro. The overdose decays rapidly from about 3% \(\pm\) 1% to zero within the first 5 mm.
### Tissue expander phantom setup
In the panels of Figure 6, the dose profiles along the anterior-posterior direction traversing the drain and central magnets are shown for the AlloX2 (panel A) and AlloX2-Pro (panel B), respectively (section 2.5 and Figure 2). Film measurements are shown with symbols at three positions. For both TTEs, RS2 calculated with CCC (CCC (RS2)) agreed reasonably well with TOPAS calculations but did not reproduce the dose perturbation near the magnet, at about the 4 cm position. RS1 (MC (RS1)) and RS2 (MC (RS2)) results calculated with TOPAS had better agreement with the film measurements, with RS2 being the closest to the measured data, as shown at the bottom of each panel of Figure 6. The axial isodose distributions calculated with TOPAS for AlloX2 and AlloX2-Pro using RS1 and RS2 are displayed in Figure 7. As depicted, the most significant dose differences, as large as 25% \(\pm\) 1.5% and 28% \(\pm\) 1.5%, occur locally around the magnet region. These dose differences are almost entirely contained within the silicone bag. The dosimetric impact outside of the bag is minimal, as shown for the contour volumes in Tables 1 and 2.
The impact of the TTE port on the dose distribution was quantified by comparing dose volume histogram (DVH) parameters for simulations with and without the metal port, for the contours displayed in Figure 2. Results are shown in Tables 1 and 2 for the AlloX2 and AlloX2-Pro, respectively. Combined statistical uncertainties were 1.0% (one standard deviation) or better. For AlloX2, the impact of the metal port calculated with RS1 and RS2 exceeded the statistical uncertainties only for the contour C_Posterior located at the posterior region of the phantom, an effect caused by the attenuation introduced by the metal port. In this region, RS1 produced a higher dose than RS2, e.g., by 2.1% for D10. On the other hand, for AlloX2-Pro the impact of the metal port on the
computation of DVH parameters shown in Table 2 resulted in sub-percentage differences, smaller than the combined statistical uncertainty. Furthermore, RS1 and RS2 were statistically equivalent, as the percentage differences between DVH parameters fell within the combined statistical uncertainty.
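The DVH parameters referenced here (e.g., D10) can be extracted from a dose grid and a structure mask as in the short sketch below; the exact parameter set and binning used for Tables 1 and 2 are not assumed.

```python
import numpy as np

def dvh_parameters(dose, structure_mask):
    """Return D10 (minimum dose to the hottest 10% of the structure volume)
    and the mean dose for a binary structure mask defined on the dose grid."""
    d = np.sort(dose[structure_mask])[::-1]          # voxel doses, descending
    n10 = max(1, int(round(0.10 * d.size)))
    return {"D10": d[:n10].min(), "Dmean": d.mean()}
```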
## 4 Discussion
In this work, the dosimetric characterization of three TTEs was performed with the Monte Carlo method and CCC. Doses at selected positions in two irradiation setups, using a wax slab phantom (3D-CRT) and a customized breast phantom (conformal arc radiotherapy), were compared with film measurements, obtaining an overall agreement within 3%. For both irradiation setups, two strategies for handling the CT artifacts produced by the TTE metal ports in the calculation of dose distributions were evaluated.
For the 3D-CRT irradiation setup, the absorbed dose for DermaSpan and AlloX2 was attenuated downstream of the magnet. The thickness of each magnet was 2.41 mm and 2.5 mm for DermaSpan and AlloX2, respectively (Figure 1). Under ideal conditions neglecting scattering, the attenuation caused by the magnet (7.4 g/cm\({}^{3}\)) irradiated with MV x-rays was expected to be approximately 5%; the Monte Carlo calculated results, which also included the titanium port, were 6.4% and 4.9%, respectively (Figure 5). Conversely, for the AlloX2-Pro (7.14 mm thickness) the attenuation was substantially lower. This effect was caused by the magnet geometry; the physical dimensions perpendicular to the beam were about one third smaller than for the other two ports. Thus, there was more in-scatter radiation from
Table 1: DVH parameters for the AlloX2 contours with and without the metal port, computed with RS1 and RS2 (table body garbled in extraction).
the unobstructed portion of the beam for AlloX2-Pro. The in-scatter radiation compensated the dose attenuation, leading to an underdose of only about 2%. On the other hand, in the buildup region a backscatter dose was observed for AlloX2-Pro only. That backscatter originated from the closer position of the AlloX2-Pro magnet to the TTE surface compared to the other two models. In the literature, backscatter dose factors for 6 MV beams incident on lead (11.3 g/cm\({}^{3}\)) have been reported to decrease from a factor of 1.03 to 1 within the first centimeter (23). The dose profile calculated with Monte Carlo in this work showed behavior equivalent to that reported in the literature. The clinical impact of the backscatter dose is expected to be negligible, as the maximum extent of the magnet dictates the diameter of the region irradiated by backscatter radiation. This diameter (10.4 mm) is smaller than the diameter stated by ICRU 50 (15 mm) for the definition of a hot spot (24).
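The \(\sim\)5% narrow-beam estimate quoted above follows from simple exponential attenuation. The sketch below reproduces that order-of-magnitude check; the effective mass attenuation coefficient of the magnet alloy at 6 MV is an assumed placeholder value, so the printed numbers should only be read as rough estimates to compare against the Monte Carlo results of Figure 5.

```python
import numpy as np

MU_OVER_RHO = 0.035    # cm^2/g, assumed effective value for the magnet alloy at 6 MV
MAGNET_DENSITY = 7.4   # g/cm^3, from the text

def primary_attenuation(thickness_cm):
    """Fraction of primary fluence removed under ideal, scatter-free conditions."""
    return 1.0 - np.exp(-MU_OVER_RHO * MAGNET_DENSITY * thickness_cm)

for name, t in [("DermaSpan", 0.241), ("AlloX2", 0.25), ("AlloX2-Pro", 0.714)]:
    print(f"{name}: ~{100 * primary_attenuation(t):.1f}% primary attenuation")

# The DermaSpan/AlloX2 values land in the few-percent range quoted above, while the
# narrow-beam value for AlloX2-Pro greatly exceeds the ~2% observed, consistent with
# the in-scatter compensation discussed in the text.
```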
The calculated absorbed dose using CCC for both strategies (RS1 and RS2) agreed with Monte Carlo and film measurements within 2%/1 mm for DermaSpan and AlloX2 and within 3%/1 mm for AlloX2-Pro in the buildup region and distal falloff of the depth-dose distribution (Figure 4).
The higher discrepancies occurred in the region downstream of the magnet within the first centimeter. While this discrepancy was a result of the limitations of the dose calculation algorithms and the dose grid resolution, its location was expected to be within the filled TTE silicone bag region, which might encompass at least 4.5 cm of thickness, having minimal impact on the patient. The closer agreement between RS1 and RS2 for DermaSpan and AlloX2 at the distal falloff region was not surprising. The maximum density (2.5 g/cm\({}^{3}\)) from the CT density tables assigned to the metal artifact in RS1 was about three times smaller than the actual magnet density (7.4 g/cm\({}^{3}\)) used in RS2; however, the thickness of the identified metal artifact was also about three times greater than the thickness of the actual magnet geometry. Thus, the amount of attenuation in both cases was similar. On the other hand, for AlloX2-Pro the thickness of the metal artifact and the magnet were about the same, so there was less attenuation using strategy RS1, which led to an overdose of about 3% compared with RS2.
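The areal-density argument can be verified with the numbers given in the text; in the sketch below, the factor of three for the artifact thickness is the approximate ratio stated above, and the magnet thickness is taken as representative of DermaSpan/AlloX2.

```python
t_magnet = 0.25                 # cm, approximate magnet thickness for DermaSpan/AlloX2
rs1 = 2.5 * (3 * t_magnet)      # CT-table-capped density over an ~3x thicker artifact
rs2 = 7.4 * t_magnet            # vendor density over the true magnet thickness
print(rs1, rs2)                 # ~1.9 vs ~1.9 g/cm^2 -> comparable attenuation

# For AlloX2-Pro the artifact and magnet thicknesses are comparable, so RS1 presents
# only 2.5/7.4 of the areal density of RS2, i.e., less attenuation (~3% overdose).
```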
The impact of the beam direction was quantified using partial arc irradiation and a customized phantom for both AlloX2 and AlloX2-Pro. Comparisons between simulations with and without the port were presented in Tables 1 and 2. For AlloX2, the metal port attenuated the dose distribution posteriorly, leading to a reduction of the D10 parameter by 5.5% when calculated with RS2. Using RS1, this value can be overestimated by \(\sim\)2%, as shown in Table 1. For the regions located at the lateral positions, the effect of the port was mitigated by the opposed radiation fields (23). This compensation resulted in a negligible difference in the DVH parameters, as shown in Table 1. On the other hand, for AlloX2-Pro the impact of the metal port under partial arc irradiation resulted in sub-percentage differences in the DVH parameters, as shown in Table 2. This effect resulted from the small size of the magnet, which allowed a larger contribution from the in-scatter radiation, as shown for the wax slab phantom setup. Finally, sub-percentage differences in DVH parameters between RS1 and RS2 resulted from the comparable dimensions of the metal artifact and the magnet.
In this work 6 MV beams were considered. Retrospective studies reported that 6 MV beams are mostly used for 3D planning of breast with tangents (25), while higher energy beams are often used for large breast separations to improve homogeneity. Above 10 MV, the dose distributions are highly affected by the pair production process within the first 2 cm from the surface of metal objects (23). In addition, the production of photoneutrons becomes relevant. Contrary to CCC, these two interaction processes can be explicitly modeled with the Monte Carlo method, so dose differences between the two methods are expected near the metal ports. The dosimetric study of high energy beams in metal ports is outside the scope of the current work, as a prior validation of TOPAS for the simulation of photoneutron yield is needed. This task is ongoing in our research group and will be presented in future work.
In a typical IMRT treatment delivered in VMAT mode, for example, the MLC modulation might partially or totally occlude the radiation directed to the metal port. Thus, the partial arc configuration represented the extreme scenario in which the port was irradiated at all times. The largest differences found in the DVH parameters calculated with RS1 and RS2 might therefore be mitigated by the MLC modulation.
Finally, caution must be exercised when higher saturation HU values are used, which lead to higher density values. The density assigned to the identified metal artifact in RS1 depends strongly on the CT density tables and on the delineation of the artifacts. Thus, we recommend using the actual port geometry and materials. Templates compatible with the RayStation TPS are provided in the supplementary material of this work to reduce the delineation time for the TTEs studied here.
## 5 Conclusions
The dosimetric impact of the TTEs in PMRT depended on the geometry, artifact delineation method, and irradiation conditions. The greatest differences with respect to measurements were observed in the RS1 strategy. Using a template with the actual port geometry and materials (RS2) can alleviate the differences and reduce the artifact delineation time. Negligible dose perturbation was observed for the novel TTE under continuous partial arc irradiation conditions compared to a single beam at normal incidence.
## Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
## Author contributions
MS was involved in conceptualizing the research design, planning, and continued supervision of the work. MS performed and processed the experimental data and analysis and wrote the manuscript. JM conceptualized and executed the Monte Carlo simulations, analyzed the data, and wrote the manuscript. CP aided
in interpreting the results and worked on the manuscript. All authors contributed to the article and approved the submitted version.
## Funding
This work was supported in part by a grant sponsored by Sientra Inc (CA0160869) and by NIH/NCI U24 CA215123.
## Acknowledgments
We acknowledge Peter Li from the University of California Berkeley for his help in analyzing the film data; Annette Villa and Naoki Dominguez-Kondo from the University of California San Francisco for making the wax phantoms and for providing the TOPAS geometry extension for the 120 Millennium MLC, respectively; and Darren Sawkey from Varian for providing the 120 Millennium MLC geometry details. We would like to thank Joanna C Yang, Adam Melancon and Junhan Pan for the initial discussions on the AlloX2 TTEs.
## Conflict of interest
This work was partially supported by a research grant from Sientra Inc.
## Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
|
2305.04159 | Lookahead When It Matters: Adaptive Non-causal Transformers for
Streaming Neural Transducers | Streaming speech recognition architectures are employed for low-latency,
real-time applications. Such architectures are often characterized by their
causality. Causal architectures emit tokens at each frame, relying only on
current and past signal, while non-causal models are exposed to a window of
future frames at each step to increase predictive accuracy. This dichotomy
amounts to a trade-off for real-time Automatic Speech Recognition (ASR) system
design: profit from the low-latency benefit of strictly-causal architectures
while accepting predictive performance limitations, or realize the modeling
benefits of future-context models accompanied by their higher latency penalty.
In this work, we relax the constraints of this choice and present the Adaptive
Non-Causal Attention Transducer (ANCAT). Our architecture is non-causal in the
traditional sense, but executes in a low-latency, streaming manner by
dynamically choosing when to rely on future context and to what degree within
the audio stream. The resulting mechanism, when coupled with our novel
regularization algorithms, delivers comparable accuracy to non-causal
configurations while improving significantly upon latency, closing the gap with
their causal counterparts. We showcase our design experimentally by reporting
comparative ASR task results with measures of accuracy and latency on both
publicly accessible and production-scale, voice-assistant datasets. | Grant P. Strimel, Yi Xie, Brian King, Martin Radfar, Ariya Rastrow, Athanasios Mouchtaris | 2023-05-07T01:43:32Z | http://arxiv.org/abs/2305.04159v2 | # Lookahead When It Matters: Adaptive Non-causal Transformers
###### Abstract
Streaming speech recognition architectures are employed for low-latency, real-time applications. Such architectures are often characterized by their causality. Causal architectures emit tokens at each frame, relying only on current and past signal, while non-causal models are exposed to a window of future frames at each step to increase predictive accuracy. This dichotomy amounts to a trade-off for real-time Automatic Speech Recognition (ASR) system design: profit from the low-latency benefit of strictly-causal architectures while accepting predictive performance limitations, or realize the modeling benefits of future-context models accompanied by their higher latency penalty. In this work, we relax the constraints of this choice and present the Adaptive Non-Causal Attention Transducer (ANCAT). Our architecture is non-causal in the traditional sense, but executes in a low-latency, streaming manner by dynamically choosing when to rely on future context and to what degree within the audio stream. The resulting mechanism, when coupled with our novel regularization algorithms, delivers comparable accuracy to non-causal configurations while improving significantly upon latency, closing the gap with their causal counterparts. We showcase our design experimentally by reporting comparative ASR task results with measures of accuracy and latency on both publicly accessible and production-scale, voice-assistant datasets.
Machine Learning, ICML
## 1 Introduction
Modern speech recognition applications have leveraged the benefits afforded by fully-neural architectures to drive enhanced user experiences. For example, these architectures allow popular virtual assistants such as Amazon Alexa, Google Assistant, and Apple Siri to rely on robust, far-field speech recognition as their primary medium of real-time interaction. Similarly, offline services such as recorded call transcription and video caption generation apply these architectures as well. The dominant techniques like Neural-Transducers, Listen-Attend-Spell, and Transformers are all principally sequence-to-sequence models consisting of an audio encoder mapping acoustic input to high-level representations for autoregressive decoding (Graves, 2012; Chan et al., 2016; Mohamed et al., 2019; Chorowski et al., 2015; Han et al., 2020). And while each neural ASR architecture has its individual design elements, generally speaking, each is categorized or implemented as a causal or non-causal model.
Causal ASR models are streamable designs which emit predictions for each frame as they arrive from the audio signal, without access to future frames (He and al., 2019; Yu et al., 2021; Radfar et al., 2022). These models are referred to as causal in the sense that each frame prediction is only dependent on its left (past) context. Causal configurations are particularly attractive for real-time processing applications where fast response times are essential for the user experience, such as virtual assistants or live stream caption generation. Non-causal models, on the other hand, have access to information from future frames. Non-causal models can be either streaming with a bounded number of lookahead frames or full-context with access to the entirety of the audio signal, before beginning to decode (Zhang et al., 2020; Moritz et al., 2020; Yeh et al., 2019; Tripathi et al., 2020). Naturally, these models can deliver significantly improved accuracy over their causal counterparts by leveraging future information to disambiguate predictions holistically. The accuracy gains of non-causal processing typically are accompanied by a steep price of higher latency in real-time applications however, which can hinder these models for production deployment.
In this paper, we introduce the Adaptive Non-Causal Attention Transducer (ANCAT), a neural ASR architecture which has the accuracy improvements witnessed by non-causal models, while yielding a latency that is more comparable
to streaming causal architectures. ANCAT achieves this improvement by providing flexibility to the model to rely on variable future context at each frame. However, it is trained to be selective about how and when it does so, taking into account both the accuracy and latency implications to poll for future context only where necessary to make accurate predictions. The resulting behavior is that the model's latency costs for adaptively ingesting lookahead context, when aggregated over the entire frame sequence, are comparatively minimal. Our model is trained fully end-to-end, jointly learning how and where to apply non-causal context in tandem with its traditional token prediction objectives.
After summarizing motivating related work in Section 2, the remainder of the paper is organized according to our contributions: Section 3 introduces the ANCAT design and key architectural concepts, Section 4 derives novel loss functions developed for training our architecture and regularizing future context in connection with different notions of latency commonly applied in the literature, and Section 5 presents empirical results which justify our design elements and showcase the modeling capabilities of ANCAT for streaming ASR tasks on both open-source and industrial data.
## 2 Related Work
**Causal, Non-causal, and Streamable ASR.** Bridging the accuracy benefits of non-causality with the low-latency deployability of causal models has been the focus of many prior studies. Several works (Audhkhai et al., 2021; Moritz et al., 2021; Yu et al., 2021) find that distillation techniques leveraging non-causal right context benefit the training of fully causal models. For instance, "dual-mode" ASR, where a single end-to-end model is jointly trained with full-context mode using shared weights, improves causal ASR accuracy (Yu et al., 2021).
Other approaches enable more accurate streaming by permitting ingestion of a finite amount of frame-wise lookahead context at the cost of a latency penalty. For example, using a fixed right-context window of lookahead frames at each layer is common (Zhang et al., 2020). Chunking approaches are also effective for non-causality (Tsunoo et al., 2019; Dong et al., 2019; Shi et al., 2021; Chen et al., 2021). With chunking, the stream is broken down into fixed-sized groupings of non-overlapping, adjacent frames. Within a chunk, all frames have access to one another and possibly frames from prior chunks. Similar to dual-mode training, (Swietojanski et al., 2022) also extends in-place distillation to chunking of variable sizes, training a single model but allowing different chunk sizes to be configured at inference to match hardware specifications. Scout-networks (Wang et al., 2020), meanwhile, use word boundaries to define the position and sizes of chunks and use a separate network trained on forced alignment data to predict boundaries at inference time.
Another set of approaches unify causal and non-causal context into one system by training streaming and full-context encoders and stacking them. (Narayanan et al., 2021) and (Li et al., 2021) propose "cascaded" audio encoders where a causal first-pass encoder is followed by a stacked non-causal encoder operating on the first encoder's outputs. These modules are jointly learned to produce accuracy improvements, but accept the latency impact from the full-context second-pass encoder.
**Adaptive Neural Compute.** Adaptive compute, also referred to commonly as variable or dynamic compute, is a technique that adjusts the amount of neural computation a model executes during inference as a function of each individual input. The motivating intuition of the approach is that since each instance has different characteristics, the corresponding amount of computation expended should reflect this variety, conditioning for more resources and operations only where necessary. Researchers have explored these ideas extensively across machine learning areas, including NLP (Graves, 2016; Jernite et al., 2017; Dehghani et al., 2019; Elbayad et al., 2020), vision (Bolukbasi et al., 2017; Figurnov et al., 2017), speech (Macoskey et al., 2021; Ng et al., 2022; Peng et al., 2023) and recommendation systems (Song et al., 2019). We refer readers to (Han et al., 2021) for a comprehensive survey. While (Sukhbaatar et al., 2020; Chang et al., 2020) also propose learning static attention span adjustment, our mechanism will vary the span across inputs and frames adaptively for streaming. Additionally, all adaptive techniques effectively tie their compute to a particular cost metric, such as floating point operations. For our application, we will show how to link our dynamic compute mechanism to key latency measures for real-time speech recognition.
**Latency Measures.** There are intricacies to assessing the latency of a streaming ASR architecture, and as a result, numerous metrics have been proposed to encapsulate its various facets. User-perceived latency (UPL), endpointer latency, first token emission delay, algorithmic latency, mean alignment delay, and partial recognition latency are all among those measures which are considered in the literature (Shangguan et al., 2021; Sainath et al., 2020; Inaguma et al., 2020; Yu et al., 2021). While each definition attempts to capture a different latency driver, for models using non-causal context, it is natural to use algorithmic latency, and therefore, we also adopt this measure to inform the design of our adaptive non-causal streaming architecture. _Algorithmic latency_ reports the time required processing the input to produce the output of an audio frame - frame length combined with the amount of lookahead frames used (Shi et al., 2021). Since our algorithm is nondeterministic, we adjust to reporting a statistical version of the metric. We
also consider a _compute-induced UPL_ metric in our design, detailed in Section 4.3, which jointly accounts for processing speed and additional work scheduled by our non-causal mechanism to derive their combined impact on UPL.
## 3 Adaptive Non-causal Design
In this work we focus on the Neural Transducer architecture consisting classically of three modules: an encoder network, a prediction network, and a joint network. The encoder network takes a sequence of \(T\) feature frame vectors \((x_{1},\dots,x_{T})\) extracted from the audio signal and maps it to a corresponding sequence of high-level acoustic representations. The prediction network operates autoregressively over a sequence of \(U\) labels \((y_{1},\dots,y_{U})\). The joint network combines the outputs from the encoder and prediction networks to model the likelihood for the next label emitted.
The transducer architecture is a popular choice for real-time applications because the encoder network can be trained for streaming, allowing the joint network to emit its prediction on the \(i\)-th frame relying on a bounded number of future frames; however, full-context encoders can be applied to yield additional accuracy for non-streaming scenarios. Our design utilizes Transformer-based encoders which are constructed from \(L\) stacked blocks (a.k.a. layers, terms which we use interchangeably), each of which accepts the output from the previous block and produces hidden vector \(h_{i}^{\ell}\) for frame \(i\) and layer \(\ell\). Transformer-based blocks consist of various compositions of sublayers such as feed-forwards, layer-norms, and even convolutions in the case of Conformer (Gulati et al., 2020), but all contain the multi-headed self-attention (MHSA) sublayer.
### Compute DAGs and Attention Masking
Transformer-based attention architectures define a natural compute graph of input dependencies for each output of each layer. Under this directed acyclic graph (DAG) representation, each vertex \(v_{i}^{\ell}\) (or node) represents the compute on frame \(i\) of layer \(\ell\), and a directed edge \(\left(v_{j}^{\ell-1},v_{i}^{\ell}\right)\) between vertices of adjacent layers indicates the reliance on the output of \(v_{j}^{\ell-1}\) for computing \(v_{i}^{\ell}\), namely that \(h_{j}^{\ell-1}\) (or a mapping thereof) is attended over when computing \(h_{i}^{\ell}\).
Our ANCAT architecture is designed to dynamically and strategically fill in the edges of this compute DAG, balancing both accuracy and latency considerations. To accomplish this, we train the traditional encoder weights while introducing jointly trainable schedulers \(\mathcal{S}^{\ell}\) into the architecture, one for each layer \(\ell\). Each scheduler \(\mathcal{S}^{\ell}\) accepts the prior layer's result \(h_{i}^{\ell-1}\) and determines the future, non-causal inputs to attend over to compute \(h_{i}^{\ell}\). Hence, \(\mathcal{S}^{\ell}(h_{i}^{\ell-1})\) predicts the non-causal edges from vertices \(v_{j}^{\ell-1}\) to vertex \(v_{i}^{\ell}\) for positions \(j>i\).
During training, we leverage a series of attention masks \(M^{\ell}\in[0,1]^{T\times T}\) to represent these edges, where \(M_{i,j}^{\ell}=1\) indicates the full presence of the \((v_{j}^{\ell-1},v_{i}^{\ell})\) edge and \(0\) indicates its absence:
\[M_{i,j}^{\ell}=\begin{cases}1&j\leq i\\ \mathcal{S}^{\ell}(h_{i}^{\ell-1})_{j}&\text{otherwise}\end{cases}\quad 1\leq i,j\leq T\]
The mask is applied for computing attention scores across each of the attention heads in the MHSA sublayer. Namely, for a scaled dot product value matrix \(P\) computed from query matrix \(X_{\text{query}}\) and key matrix \(X_{\text{key}}\) for a particular head
\[P=\frac{X_{\text{query}}X_{\text{key}}^{\top}}{\sqrt{d}},\]
the masked attention scores can be computed as
\[A_{i,j}=\frac{\exp{(P_{i,j})}M_{i,j}^{\ell}}{\sum_{t}^{T}\exp{(P_{i,t})}M_{i,t} ^{\ell}}\quad 1\leq i,j\leq T\]
for each attention head of layer \(\ell\), or using the method of (Xie et al., 2022).
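A compact PyTorch sketch of this masked attention computation for a single head is shown below; the tensor shapes and the max-subtraction for numerical stability are implementation assumptions rather than part of the original formulation.

```python
import torch

def masked_self_attention(x_q, x_k, x_v, mask):
    """Single-head self-attention with a soft mask M^l in [0, 1].
    x_q, x_k, x_v: (T, d); mask: (T, T) with ones on/below the diagonal and
    scheduler outputs above it."""
    d = x_q.shape[-1]
    p = x_q @ x_k.transpose(-1, -2) / d ** 0.5           # scaled dot products P
    p = p - p.amax(dim=-1, keepdim=True)                 # stability; cancels in the ratio
    w = torch.exp(p) * mask                              # exp(P_ij) * M_ij
    a = w / w.sum(dim=-1, keepdim=True).clamp_min(1e-9)  # attention scores A_ij
    return a @ x_v
```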
### Learned Schedulers
The role of the scheduler is to fill in the forward (right of the main diagonal) values of its corresponding attention mask. To simplify the learning process, instead of predicting each
Figure 1: Adaptive Non-Causal Attention Transducer (ANCAT). The architecture is a neural transducer with an acoustic encoder of \(L\) stacked Transformer-based blocks where each layer is augmented with an attention scheduler. Each scheduler learns to fill in the right context attention connections and passes the resulting mask to the multi-headed self-attention.
connection individually, we view each scheduler \(\mathcal{S}\) as estimating the length of the non-causal attention span to apply at its layer (i.e., the number of future frames to consider) on a particular input. We employ a smooth, differentiable, reverse "\(\mathsf{S}\)"-shaped function decreasing from \(1\) to \(0\) with an adjustable parameter \(\tau\) to control the sharpness of the curve. The schedulers learn where to shift the center \(o\) of this curve over a span of \(K\) maximum lookahead frames. Specifically, \(\mathcal{S}^{\ell}\) is computed as
\[o_{i}^{\ell} =\sigma\left(\text{FFN}^{\ell}(h_{i}^{\ell-1})\right)(K+\epsilon)\] \[\mathcal{S}^{\ell}(h_{i}^{\ell-1})_{j} =\begin{cases}1-\sigma\left(\left(j-i-o_{i}^{\ell}\right)/\tau\right)&j-i\leq K\\ 0&\text{otherwise,}\end{cases}\]
where \(\sigma\) is the standard sigmoid function and \(\text{FFN}^{\ell}\) is a learnable feed-forward network consisting of two linear transforms with a non-linear activation in between, with the second transform projecting to a single scalar. The small constant \(\epsilon\) is used for enabling the \(K\)th lookahead frame to potentially have a near-full connection while maintaining numeric stability during training.
Note that the design permits "soft edges" during the learning process with mask values between \(0\) and \(1\). However, \(\tau\) can be annealed towards \(0\) to gradually morph all soft edges into definitive binary ones. Our loss function design will leverage this property as well.
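A minimal sketch of the scheduler and the resulting per-layer mask is given below, reading the lookahead bound as \(j-i\leq K\); the module layout and shapes are assumptions consistent with the description above, not the released implementation.

```python
import torch
import torch.nn as nn

class Scheduler(nn.Module):
    """Predicts a soft non-causal attention span for each frame and builds M^l."""

    def __init__(self, d_model, d_hidden, K, eps=1e-3):
        super().__init__()
        self.ffn = nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, 1))
        self.K, self.eps = K, eps

    def mask(self, h_prev, tau):
        """h_prev: (T, d_model) outputs of layer l-1. Returns the (T, T) mask M^l."""
        T = h_prev.shape[0]
        o = torch.sigmoid(self.ffn(h_prev)) * (self.K + self.eps)   # (T, 1) span centres
        idx = torch.arange(T, dtype=h_prev.dtype, device=h_prev.device)
        offset = idx.view(1, T) - idx.view(T, 1)                    # j - i
        soft = 1.0 - torch.sigmoid((offset - o) / tau)              # reverse-S curve
        soft = torch.where(offset <= self.K, soft, torch.zeros_like(soft))
        return torch.where(offset <= 0, torch.ones_like(soft), soft)  # causal part = 1
```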
## 4 Regularizing Future Context
Complementing the design of the ANCAT architecture, we craft several loss functions which regularize the decisions of the schedulers to explicitly account for different notions of latency for streaming systems.
### Naive Regularization
To balance both the primary training objective of the neural transducer with the incurred impact of the schedulers' ingestion of non-causal frames, one can modify the training loss function to the form
\[\mathcal{L}=\mathcal{L}_{\text{transducer}}+\lambda\mathcal{L}_{\text{sched}}\]
A first attempt would be to simply regularize over all lookahead connections, with the intuition that attending over the fewest possible future frames across layers will yield lower latency
\[\mathcal{L}_{\text{L1}}=\frac{1}{T}\sum_{\ell}\sum_{i}\sum_{j>i}M_{i,j}^{\ell}\]
This L1-type regularization will serve as a baseline for our experiments. However, indiscriminately regularizing in this manner does not account for a connection's non-local effects on latency. For example, a forward edge present in a lower level can dramatically impact the prediction delay over many frames because of the propagating effects layer over layer. Furthermore, the simple regularization above is not directly tied to an explicit latency measure.
### Algorithmic Latency
Here, we formulate how to regularize with respect to the notion of algorithmic latency. Recall that, traditionally, algorithmic latency refers to the minimal theoretical amount of time the model needs to wait before it can emit a symbol at a particular frame, resulting from the number of non-causal future frames relied upon, \(\delta(i)\), for prediction at each timestep \(i\), along with the frame length (which is a constant).1 This delay is a deterministic value for standard architectures. In contrast, with our ANCAT architecture, the algorithmic latency can vary from frame to frame, and we therefore adopt a time-averaged mean definition:
Footnote 1: It is common to include the “current” \(i\)-th frame in this calculation; however, for clarity of presentation under our setup, we omit it. But since this singular frame amounts to an additive constant, one can easily translate any result to account for it.
\[\text{algorithmic latency}=\frac{1}{T}\sum_{i}(\text{frame length})\delta(i)\]
We derive an algorithm below to directly integrate the algorithmic latency stemming from the schedulers' decisions and aggregate the result into a regularization loss we apply during training.
To do so, we first define the notion of a dependency function. We define \(D_{i,j}^{\ell}\in[0,1]\) to represent the (fractional) dependency for computing \(h_{i}^{\ell}\) on the input node \(v_{j}^{0}\). In other words, \(D_{i,j}^{\ell}\) represents whether \(v_{j}^{0}\) is connected to \(v_{i}^{\ell}\) by some
Figure 2: Compute graph example of ANCAT for three layers. Non-causal edges (blue) are provided by the learned schedulers. Dependency nodes (gray) of \(v_{i}^{L}\) (cyan) determine the algorithmic latency. The difference \(\delta(i)\) between the final dependency frame (red) and \(v_{i}^{L}\) determines the algorithmic latency for frame \(i\).
pathway through the compute DAG. The dependency values can be fractional because the edges will be fractional during training.
For the ANCAT architecture, we propose the following memoized dynamic programming formulation using the attention masks to recursively compute the dependency matrices at each layer
\[D_{i,j}^{\ell}=\begin{cases}M_{i,j}^{\ell}&\ell=1\\ \max_{t}\;\;M_{i,t}^{\ell}\cdot D_{t,j}^{\ell-1}&\ell>1\end{cases}\]
This algorithm can be viewed as computing the maximum fractional strength of a pathway between an input frame and a compute node. Interestingly, this algorithm becomes a special case reduction of a classic shortest path in a graph problem using edge weight from \(v_{j}^{\ell-1}\) and \(v_{i}^{\ell}\) as \(-\ln M_{i,j}^{\ell}\) and with the final \(D_{i,j}^{\ell}\) values extracted as \(\exp\left\{-\text{distance}\left(v_{j}^{0},v_{i}^{\ell}\right)\right\}\).
Importantly, for our application, the algorithm produces dependency matrices for each layer \(\ell\) with \(D_{i,j}^{\ell}\) monotonically decreasing in \(j\). The following argument by induction proves this fact: The base case is straightforward because \(D_{i,j}^{1}=M_{i,j}^{1}\) and by scheduler function monotonicity. Now let \(D_{i,j}^{\ell}=M_{i,t}^{\ell}\cdot D_{t,j}^{\ell-1}\) for some \(t\). Since \(D_{t,j-1}^{\ell-1}\geq D_{t,j}^{\ell-1}\) by the induction hypothesis, there exists at least one \(t^{\prime}\), namely \(t^{\prime}=t\), over which a maximum is taken such that \(D_{i,j-1}^{\ell}\geq M_{i,t^{\prime}}^{\ell}\cdot D_{t^{\prime},j-1}^{\ell-1}\geq M_{i,t}^{\ell}\cdot D_{t,j}^{\ell-1}=D_{i,j}^{\ell}\).
As a result of this monotonicity, we treat \(D_{i,j}^{\ell}\) as an approximate, inverse cumulative distribution function over \(j\) where \(\hat{P}(\text{last dependency frame of }v_{i}^{\ell}\leq t)=1-D_{i,t}^{\ell}\). Therefore, the corresponding density function \(F_{i,j}^{\ell}=D_{i,j-1}^{\ell}-D_{i,j}^{\ell}\) provides a natural weighting for the position of the final lookahead frame required to compute node \(v_{i}^{\ell}\).
Observe that indeed \(\sum_{j}F_{i,j}^{\ell}=1\) for all \(i,\ell\) and that the distribution turns into a strictly one-hot vector coding which indicates the exact position of the last frame dependency of \(v_{i}^{\ell}\) as \(\tau\) is annealed to produce binary connections from the schedulers.
We now can use this algorithm to directly regularize mean (fractional) algorithmic latency over all frames of the utterance
\[\mathcal{L}_{\text{Alg.Lat.}}=\frac{1}{T}\sum_{i}\hat{\delta}(i)=\frac{1}{T}\sum_{i}\sum_{j>i}\left(j-i\right)F_{i,j}^{L}\]
using \(F_{i,:}^{L}\) to indicate (weight) the last frame dependencies of the top layer \(L\) at each frame.
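The dependency recursion and the resulting loss translate directly into a few lines of code, as sketched below; the dense intermediate tensor is kept for readability and is not meant to reflect the efficiency of an actual training implementation.

```python
import torch

def algorithmic_latency_loss(masks):
    """masks: list of L tensors of shape (T, T), the per-layer attention masks M^l.
    Returns the mean fractional algorithmic latency (in frames) of Sec. 4.2."""
    D = masks[0]                                                  # D^1 = M^1
    for M in masks[1:]:
        # D^l_{i,j} = max_t M^l_{i,t} * D^{l-1}_{t,j}
        D = (M.unsqueeze(-1) * D.unsqueeze(0)).amax(dim=1)
    T = D.shape[0]
    # F^L_{i,j} = D^L_{i,j-1} - D^L_{i,j}; D is 1 on and below the diagonal
    ones = torch.ones(T, 1, dtype=D.dtype, device=D.device)
    F = torch.cat([ones, D[:, :-1]], dim=1) - D
    idx = torch.arange(T, device=D.device)
    lookahead = (idx.view(1, T) - idx.view(T, 1)).clamp_min(0).to(D.dtype)  # (j - i)+
    return (F * lookahead).sum(dim=1).mean()                      # (1/T) sum_i delta_hat(i)
```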
### Compute-Induced UPL
While algorithmic latency as a metric provides perspective on the minimum delay a streaming ASR model is to expect, it operates under the assumption that all compute is instantaneous, and therefore, only serves as a lower bound of the model's response UPL. To more realistically model and regularize with respect to UPL, we propose a loss which also accounts for processing speed.
We denote \(\mu\) as the effective processor speed in terms of compute nodes (encoder layers) which can be processed each second and \(\rho\) as the feature frame rate in frames per second. We use these constants and the algorithm presented in Section 4.2 to compute at which timestep each node becomes available to establish an accounting of the compute backlog. The final amount of work remaining (nodes left to compute) in the backlog on the final frame will then be used to regularize the UPL.
Letting \(q_{i}\) be the number of nodes which have their input dependencies met on frame \(i\), we have
\[q_{i}=\sum_{\ell}\sum_{t<i}F_{t,i}^{\ell}\]
which is the amount of new work to be added to the backlog at \(i\). Defining \(b_{i}\) to be the buffered node backlog (lag accumulated at the \(i\)-th timestep), using a method similar to that of (Macoskey et al., 2021), we express the compute-induced UPL loss as the response delay in seconds \(\mathcal{L}_{\text{UPL}}=b_{T}/\mu\) with \(b_{i}\) derived recursively as
\[b_{i}=\begin{cases}0&i=0\\ \max\;\;\{b_{i-1}+q_{i}-\mu/\rho,0\}&i>0\end{cases}\]
where \(\mu/\rho\) is the compute throughput in terms of how many nodes per timestep can be burned down in the backlog. Figure 3 illustrates how this UPL computation is carried out.
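The backlog recursion can be implemented as a short loop over frames, as in the sketch below; the per-layer matrices \(F^{\ell}\) are assumed to be precomputed as in Section 4.2, and only nodes with non-causal dependencies (\(t<i\)) are counted, following the definition of \(q_{i}\) above.

```python
import torch

def compute_induced_upl(F_layers, mu, rho):
    """F_layers: list of (T, T) tensors F^l; mu: nodes/sec; rho: frames/sec.
    Returns the estimated compute-induced UPL, b_T / mu, in seconds."""
    T = F_layers[0].shape[0]
    throughput = mu / rho                                  # nodes processed per frame
    # q_i = sum_l sum_{t<i} F^l_{t,i}: new work unlocked when frame i arrives
    q = sum(torch.triu(F, diagonal=1).sum(dim=0) for F in F_layers)
    b = 0.0
    for i in range(T):
        b = max(b + float(q[i]) - throughput, 0.0)         # backlog recursion
    return b / mu
```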
## 5 Experimental Results
We organize our results in this section to demonstrate ANCAT's combined accuracy and algorithmic latency improvements on publicly available ASR data, supplemented with a
Figure 3: Example of the UPL algorithm in action. Work which cannot be completed on the current frame under a specified throughput is carried over to the subsequent frame. The remaining work at the end of the utterance defines the UPL.
summary of findings on industry data, model dynamics, and compute-induced UPL studies while referring the reader to corresponding appendices for their full results.
### LibriSpeech Experimental Setup
We investigate our architectures using the LibriSpeech corpus (Panayotov et al., 2015) comprised of 960 hours of training data collected from read audio books. Our evaluations report on the associated "clean" test data. Audio clips are preprocessed with a 64-dimensional log-filterbank energy feature extractor, and these feature vectors are stacked with a stride size of 2 and downsampled by 3 before being processed by a small convolution front-end to produce 120ms frames as input for the transformer-based blocks.
We conduct our experiments on two popular transformer-based architectures: Conformer (Gulati et al., 2020) and Transformer-Transducer (T-T) (Zhang et al., 2020). For both Conformer and T-T, we use encoders consisting of 14 stacked blocks, one-layer 640-unit LSTM prediction networks, and a 512-unit feed-forward joint network. For our ANCAT variants, we augment each block with learnable schedulers of 64-dimension hidden units. The detailed model configurations are listed in Appendix G.
For LibriSpeech, all models are trained for 150 epochs of 1k steps, with the Transformer-based encoder backbone and schedulers trained end-to-end jointly using the Adam optimizer (Kingma and Lei Ba, 2015) with the same hyperparameter settings specified by (Gulati et al., 2020). During training, we also apply SpecAugment (Park et al., 2019) with mask parameter \((F=27)\) and 20 time masks with maximum time mask ratio \((p_{S}=0.04)\), where the maximum size of the time mask is set to \(p_{S}\) times the length of the utterance. We use a vocabulary of 4097 word pieces and the standard RNN-T beam search decoding with a width of 16. Further specifications on model configurations and training hyperparameters are shared in Appendix G.
### Baselines
We establish baseline models for Conformer and T-T models using causal and full-context attention. We also build baselines for non-causal streaming mechanisms proposed in prior studies. These include streaming attention which ingests \(K\) lookahead frames at each layer (Zhang et al., 2020),
denoted here as _Layerwise_, and chunking transformer attention (Shi et al., 2021; Chen et al., 2021), denoted as _Chunked_, which is a popular and efficient approach for comparison. Additionally, we present ANCAT models with basic L1-type regularization over the amount of future frames as described in Section 4.1, marked as ANCAT-_L1_.
### Algorithmic Latency Results
Our main empirical result showcases the significant improvements at each operating point of the latency-accuracy trade-off curve that ANCAT produces when trained with our novel regularization loss, thereby shifting the Pareto frontier relative to existing methods. Namely, for each fixed accuracy target, our ANCAT model produces significant decreases in algorithmic latency over the nearest baseline, and conversely, for a desired algorithmic latency, ANCAT also yields significant improvements in accuracy. We establish this result by training the baseline and ANCAT models under various hyperparameter settings for the maximum per-layer lookahead span \(K\) (i.e., 2, 5, and 10). We report Word Error Rate (WER) (%) and the statistical algorithmic latency (ms) of test utterances. We also report latency with median algorithmic latency and 90th percentile metrics, denoted as Alg.Latency@50 and Alg.Latency@90. Included is the \(\ell_{1}\)-norm of the accessible future frames in the attention maps to characterize the additional total amount of forward attention calculations (additional computation over causal). The aggregation means are taken across all utterances in the test set.
Results for LibriSpeech are arranged in Table 1. While the algorithmic latency values for the non-adaptive models are static for a given \(K\), the WER and latency trade-off can be tuned by setting different penalty degrees on the regularizing term \(\lambda\) for ANCAT models. To make clear comparisons of WER in Table 1, we choose \(\lambda\) values so that under each setting of \(K\) the Alg.Latency approximately matches that of the most efficient comparable baselines. As can be seen, training with our proposed ANCAT using our novel algorithmic latency loss as the regularizer, ANCAT-_Alg.Lat._, consistently yields lower WER over the approaches for equivalent latency operating points. This holds for both Conformer and T-T encoders and all settings \(K\), seeing an average of 11% (and upwards of 18% for the lower latency scenarios) relative WER improvement over non-adaptive models. Observe also, that compared with naive L1 regularization, for a given algorithmic latency budget, our ANCAT-_Alg.Lat._ models allow for a greater total amount of future frame attention based on \(\ell_{1}\)-norm measures. So while ANCAT-_Alg.Lat._ uses overall higher compute, it is more strategic in how it expends it to deliver better WER for matching latency.
To further highlight the improvements in the trade-off between WER and algorithmic latency, we record Conformer ANCAT model performance at multiple levels of training regularization in Figure 4. The plot shows that ANCAT-_Alg.Lat._ consistently outperforms all other architectures at each operating point, and therefore fully defines the Pareto frontier of efficient solutions, providing the best trade-offs among the models under consideration. Furthermore, the plot emphasizes that non-causal adaptivity alone _does not_ provide this improvement, since ANCAT-_L1_ closely matches static _Chunked_ performance; rather, ANCAT adaptivity paired with the proper choice of our novel regularization method is critical to the success of the approach under latency considerations.
For further insight into the behavior of our proposed algorithmic latency loss and its impact on scheduling, we visualize the attention masks of an utterance from the test set in Figure 5 and compare against the static approaches. The attention masks are generated at the final training epoch, where \(\tau=1e-4\). The darker areas indicate the absence of attention. We depict the maps from the 3rd block and the 13th block in the Conformer models. One can observe that ANCAT-_Alg.Lat._ learns to structure its attention in stepwise patterns that effectively operate like a chunking strategy of varying sizes and locations conditioned on the input. In contrast, ANCAT-_L1_ attention patterns are jagged and emphasize more localized decisions of the schedulers, as opposed to the better coordinated decisions of ANCAT-_Alg.Lat._, which account for the global impact on latency. We also show masks of an ANCAT model trained with higher latency penalization to demonstrate greater toggling-off of future attention connections, bordering on being a fully causal model.
Figure 4: WER vs. algorithmic latency on LibriSpeech test “clean” data for four different types of Conformer models: _Layerwise_ (\(K\) at 0, 1, and 2 frames of right context per layer), _Chunked_ (\(K\) at 2, 5, 10, 15, and 20 frames), ANCAT-_L1_ (\(K=10\)) varying \(\lambda\), and ANCAT-_Alg.Lat._ (\(K=10\)) varying \(\lambda\). ANCAT-_Alg.Lat._ provides a more optimal accuracy-latency trade-off over other models.
### Industrial Voice Assistant
We repeat our algorithmic latency experiments on a de-identified large, industrial voice assistant dataset, again for Conformer and T-T, to verify the robustness of our modeling approach both across architectures and data compositions. We find results similar to those on LibriSpeech, with ANCAT-_Alg.Lat._ consistently outperforming _Layerwise_, _Chunked_, and ANCAT-_L1_ models at each operating point and often improving WER by more than \(5\%\) relative. Appendix C details this analogous experimental setup and its results.
### Compute-Induced UPL
In addition to applying our algorithmic latency loss as the regularization method, we also conduct experiments to regularize with respect to compute-induced UPL, both for LibriSpeech and industrial data. These results are presented in Appendix D and mirror our findings with algorithmic latency, with ANCAT regularized with the backlog latency method presented in Section 4.3 outperforming the baselines. We find that for all compute throughput settings on which we experimented, for a given UPL budget, ANCAT delivers superior accuracy, with ANCAT-_UPL_ typically improving WER by over 8% relative on LibriSpeech and 7% on industrial data over the nearest baselines.
### Additional Findings and Analysis
We present supplemental visualizations and observations of the approach in further appendices. Appendix A demonstrates that ANCAT promotes superior performance over baselines when comparing with other common latency metrics. Appendix B shows that ANCAT also performs well (10% WER relative improvement compared with baselines) in more challenging conditions using LibriSpeech "other" data and additive noise. We further show how the difficulty of an utterance correlates to the degree of lookahead employed. Appendix E highlights and expands upon additional attention plot examples while Appendix F depicts how the characteristics of ANCAT evolve over the course of training by temperature annealing. The figures show that our "soft" attention connections and latency measures smoothly converge throughout training with the binary edge runtime latency calculations.
## 6 Conclusion
In this work, we introduce an adaptive non-causal architecture for streaming speech recognition. The model learns to dynamically adjust the future context attention span for each individual frame of the audio stream, balancing both accuracy and latency considerations. We accompanied our architecture construction with novel regularizing loss functions which tie the frame-wise lookahead decisions of the model with key latency measures important for speech applications. The resulting mechanism provides a better Pareto frontier of trade-offs against baselines, in many cases with over 15% relative WER improvements for matching latency. Our experiments on public and large, production datasets and different architectures reinforce the robustness and applicability of our approach. We hope that future work will build on these contributions to propose adaptive non-causal approaches for other applications, measures of latency, and models of computation.
Figure 5: Attention mask visualization for the 3rd and 13th blocks of Conformer models. All the attention masks are generated with the same test example utterance. This utterance has a total number of 41 frames after downsampling in the front-end. The max number of lookahead frames is \(K=10\). The darker regions signal toggled-off attention. We select (a), (b), and (c) models to have comparable algorithmic latency performance, and (d) shows a model that is optimized with a higher latency penalty. Notice the deliberate, structured stepwise patterns ANCAT-_Alg.Lat._ emits (c,e) compared to the jagged schedules produced by L1 regularization (d).
2305.13425 | EINCASM: Emergent Intelligence in Neural Cellular Automaton Slime Molds | This paper presents EINCASM, a prototype system employing a novel framework
for studying emergent intelligence in organisms resembling slime molds. EINCASM
evolves neural cellular automata with NEAT to maximize cell growth constrained
by nutrient and energy costs. These organisms capitalize physically simulated
fluid to transport nutrients and chemical-like signals to orchestrate growth
and adaptation to complex, changing environments. Our framework builds the
foundation for studying how the presence of puzzles, physics, communication,
competition and dynamic open-ended environments contribute to the emergence of
intelligent behavior. We propose preliminary tests for intelligence in such
organisms and suggest future work for more powerful systems employing EINCASM
to better understand intelligence in distributed dynamical systems. | Aidan Barbieux, Rodrigo Canaan | 2023-05-22T19:15:55Z | http://arxiv.org/abs/2305.13425v1 | # EINCASM: Emergent Intelligence in Neural Cellular Automaton Slime Molds
###### Abstract
This paper presents EINCASM, a prototype system employing a novel framework for studying emergent intelligence in organisms resembling slime molds. EINCASM evolves neural cellular automata with NEAT to maximize cell growth constrained by nutrient and energy costs. These organisms capitalize on physically simulated fluid to transport nutrients and chemical-like signals to orchestrate growth and adaptation to complex, changing environments. Our framework builds the foundation for studying how the presence of puzzles, physics, communication, competition and dynamic open-ended environments contribute to the emergence of intelligent behavior. We propose preliminary tests for intelligence in such organisms and suggest future work for more powerful systems employing EINCASM to better understand intelligence in distributed dynamical systems.
## Introduction
Emergent intelligence is a phenomenon where the interaction of simple components combine to produce novel behavior aimed at achieving a goal. Consider, for example, how humans distributed across a landscape form powerful societies or how the combination of many simple cells results in organisms that solve puzzles to acquire food. This property begins to manifest in complex particle systems (Schmickl et al., 2016; Gregor and Besse, 2021), although the resultant behavior can be limited and challenging to interpret. By introducing a well defined goal and computer vision techniques, neural cellular automata (NCA) can produce impressive self-organization and intelligence, such as morphogenesis, maze solving, and number recognition (Nichele et al., 2018; Endo and Yasuoka, 2022; Mordvintsev et al., 2022; Randazzo et al., 2020), however, these systems are not open ended or lifelike. We propose EINCASM as a bridge between these systems to produce life-like organisms with interpretable intelligence that display emergence. By focusing on the simple physiology of slime molds, which have well-studied intelligent foraging behavior, our framework aims to be tractable and testable. A main novelty of our work is the inclusion of physical constraints and fluid simulation which organisms must orchestrate via chemical-like communication to grow and adapt.
## Evolution and Intelligence Tests
To maintain homeostasis against the increase of entropy, organisms must find and use energy from the environment. Other organisms and events, such as weather, affect the environment and require adaptation. EINCASM explores these adaptations by evolving NCA that use limited nutrients to replicate and move in a dynamic and complex environment, leading to intelligent behavior to acquire nutrients, such as pathfinding (figure 1d). This extends the work of Nichele et al. (2018) and Mordvintsev et al. (2022) on NCA by leaving the ideal behavior of the system undefined. Instead, fitness is determined by the ability of the NCA to accomplish a goal, i.e., growth, in a wide range of circumstances - which is a common definition of intelligence.
Following Nichele et al. (2018); Stanley et al. (2019), we use a Compositional Pattern Producing Network (CPPN) as the ruleset for the NCA (figure 1b). This represents the cellular physiology of the organism. To evolve the CPPN we simulate a variable-length lifecycle where an NCA grows given physical constraints. Throughout this lifecycle, nutrient sources are removed, cells are degraded, and obstacles move. This is analogous to, for example, a branch falling on a section of slime mold. At the end of the time period, fitness is determined simply as the sum of cell mass.
In the current implementation of EINCASM, single organisms are grown in contained, simple environments. However, future systems could include multiple organisms competing for resources with no set notion of an individual, as exemplified by Gregor and Besse (2021) and suggested by Soros and Stanley (2014) to promote truly open-ended complexity.
To measure the intelligence of the organisms produced by EINCASM-like systems, we propose biologically inspired tests. Each test consists of introducing an organism to novel "puzzle" environments where certain adaptations are required to thrive. By measuring completion, speed, and growth rates, a sort of IQ can be recorded. The following preliminary tests are proposed:
* **Coordination**: Will the organism redistribute its mass to explore new areas when a nutrient source is removed?
* **Pathfinding**: Can the organism solve a complex maze to reach rich nutrients?
* **Learning and Knowledge sharing**: Given a repeated feature (e.g. deceptive chemoattractant), does the organism adopt new behavior? Can this behavior be shared with part of the organism that hasn't seen this deception? (Vogel and Dussutour, 2016)
* **Adversarial Nutrient Storage**: Can an organism protect nutrients from competing species and recover them in adverse circumstances?
## 2 Prototype System
EINCASM organisms are defined as a set of cells, each of which performs a local operation and collaborates with other cells either directly by sharing a neighborhood or indirectly by directing cytoplasmic flow via reservoir contraction. Each cell has identical physiology, but is differentiated by its local environment and the signals from other cells, similar to stem cells forming different parts of an organism during development.
Our system is represented by a square tile grid of real-valued static, dynamic, and hidden channels (figure 1a). Static channels define the environment with obstacles, poison, food sources, and chemoattractant, inspired by Tsompanas et al. (2015). Contents of the hidden channels can be used by the agent and are interpreted by Mordvintsev et al. (2022) as chemical or electrical signaling between cells.
The main novelty of our approach lies in the dynamic channels, which represent cell mass, reservoir size, and nutrient-containing cytoplasm. These are determined jointly by the agent and the physical simulation. This approach was explored by Gregor and Besse (2021) but, as far as we are aware, is novel in the context of NCA.
On each time step, cells are updated stochastically by applying an evolved 3x3 convolutional kernel to produce new hidden channels (passed on without modification) and a desired change in reservoir size and cell mass. Figure 1c describes how cell mass and reservoir size are physically constrained by nutrient consumption and how cell mass can be converted freely into nutrients to support network adaptation.
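To make the update rule concrete, the following is a minimal NumPy sketch of such a stochastic cell update, assuming a dense grid representation; the kernel shapes, channel counts, activation, and update probability are illustrative stand-ins rather than the actual EINCASM implementation, and the physical constraints of figure 1c would be applied to the returned desired changes afterwards.

```python
import numpy as np
from scipy.signal import convolve2d

def nca_step(channels, kernels, update_prob=0.5, rng=None):
    """One stochastic NCA update. `channels` is an (H, W, C) grid holding the
    static, dynamic, and hidden channels each cell perceives; `kernels` is a
    list of evolved (3, 3, C) kernels, one per output: the new hidden channels
    plus the desired changes in reservoir size and cell mass."""
    rng = np.random.default_rng() if rng is None else rng
    h, w, c = channels.shape

    outputs = []
    for k in kernels:
        # Sum the 3x3 convolution over every input channel for this output.
        response = sum(convolve2d(channels[..., i], k[..., i], mode="same")
                       for i in range(c))
        outputs.append(np.tanh(response))        # bounded local response
    outputs = np.stack(outputs, axis=-1)         # (H, W, n_outputs)

    # Stochastic update mask: only a random subset of cells fires this step.
    mask = rng.random((h, w)) < update_prob

    new_hidden = outputs[..., :-2]               # hidden channels
    d_reservoir = mask * outputs[..., -2]        # desired reservoir change
    d_mass = mask * outputs[..., -1]             # desired cell-mass change
    # The caller keeps the previous hidden values where mask is False and
    # clamps d_reservoir / d_mass with the physical constraints of figure 1c.
    return new_hidden, mask, d_reservoir, d_mass
```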
The final reservoir size is then used to update cytoplasmic flow using the Lattice Boltzmann method, following (Conington et al., 2009). Briefly, this enables the organism to produce peristaltic-like motion which pumps nutrients along adjacent sequences of reservoirs. This effect is preferential towards parallel sets of large reservoirs and is important to slime mold's motility and adaptive behavior (Lewis and Guy, 2016; Ray et al., 2019).
## 3 Preliminary Results and Future Work
While a thorough quantitative analysis is still required, our current prototype already demonstrates preliminary signs of coordination and pathfinding within handcrafted environments comprising obstacles and poison. An example can be seen in Figure 1d. Our physical simulation is currently limited to directly modifying pressure at small scales relative to reservoir size, rather than imposing dynamic fluid boundary conditions.
In the future, we plan to enhance the scale and realism of fluid simulation in addition to evaluating competition in multi-agent settings, dynamically generated environments, linguistic evolution (utilizing the hidden channel for pattern communication), and multi-scale competency or modularity.
Figure 1: (a): The channels of the environment perceived by the NCA. (b) The application of the evolved neural architecture. (c) The enforcement of physical constraints and interface with physical simulation. (d) An example organism’s reservoir and chemoattractant as it creates a rudimentary transport network around an obstacle. Equation 1 relates the cytoplasm pressure \(\rho\) to the reservoir of size \(v\) and the change in reservoir size \(\Delta R\). Equation 2 describes the cost in cell mass \(M\) for a change in reservoir size with a movement cost of \(\alpha\). Equation 3 represents the cost in nutrient \(N\) for growth in cell mass \(\Delta M\) with a growth cost of \(\beta\). Finally, equation 4 defines nutrient uptake at a rate of \(\gamma\) on a food source channel \(F\). |
2304.12412 | End-to-End Lidar-Camera Self-Calibration for Autonomous Vehicles | Autonomous vehicles are equipped with a multi-modal sensor setup to enable
the car to drive safely. The initial calibration of such perception sensors is
a highly matured topic and is routinely done in an automated factory
environment. However, an intriguing question arises on how to maintain the
calibration quality throughout the vehicle's operating duration. Another
challenge is to calibrate multiple sensors jointly to ensure no propagation of
systemic errors. In this paper, we propose CaLiCa, an end-to-end deep
self-calibration network which addresses the automatic calibration problem for
pinhole camera and Lidar. We jointly predict the camera intrinsic parameters
(focal length and distortion) as well as Lidar-Camera extrinsic parameters
(rotation and translation), by regressing feature correlation between the
camera image and the Lidar point cloud. The network is arranged in a
Siamese-twin structure to constrain the network features learning to a mutually
shared feature in both point cloud and camera (Lidar-camera constraint).
Evaluation using KITTI datasets shows that we achieve 0.154 {\deg} and 0.059 m
accuracy with a reprojection error of 0.028 pixel with a single-pass inference.
We also provide an ablative study of how our end-to-end learning architecture
offers lower terminal loss (21% decrease in rotation loss) compared to isolated
calibration | Arya Rachman, Jürgen Seiler, André Kaup | 2023-04-24T19:44:23Z | http://arxiv.org/abs/2304.12412v2 | # End-to-End Lidar-Camera Self-Calibration for Autonomous Vehicles
###### Abstract
Autonomous vehicles are equipped with a multi-modal sensor setup to enable the car to drive safely. The _initial_ calibration of such perception sensors is a highly matured topic and is routinely done in an automated factory environment. However, an intriguing question arises on how to maintain the calibration quality throughout the vehicle's operating duration. Another challenge is to calibrate multiple sensors jointly to ensure no propagation of systemic errors. In this paper, we propose Camera Lidar Calibration Network (CaLiCaNet), an end-to-end deep self-calibration network which addresses the automatic calibration problem for pinhole camera and Lidar. We jointly predict the camera intrinsic parameters (focal length and distortion) as well as Lidar-Camera extrinsic parameters (rotation and translation), by regressing feature correlation between the camera image and the Lidar point cloud. The network is arranged in a Siamese-twin structure to constrain the network's feature learning to mutually shared features in both the point cloud and the camera image (Lidar-camera constraint). Evaluation using KITTI datasets shows that we achieve 0.154\({}^{\circ}\) and 0.059 \(\mathrm{m}\) accuracy with a reprojection error of 0.028 pixel with a single-pass inference. We also provide an ablative study of how our end-to-end learning architecture offers lower terminal loss (21% decrease in rotation loss) compared to isolated calibration.
self-calibration, Lidar, camera, end-to-end learning, autonomous vehicle, multi-sensor
## I Introduction
Modern vehicles are equipped with multi-modal sensors, enabling perception algorithms to better understand the environment. Compared to classical camera-only perception, adding depth-capable sensors such as Radar and Lidar enables unambiguous recognition in a 3D world. Therefore, a camera and Lidar combination is the most common in autonomous vehicle setups, often with multiple instances of the same modality (e.g., 2\(\times\) Lidar plus 3\(\times\) camera).
The use of such multi-sensor setups comes with the need for cascaded calibration: first, each sensor needs to be calibrated intrinsically relative to its internals, and after that, it needs to be extrinsically calibrated to the other sensors. Both calibrations are prerequisites for sensor fusion, a building block of vehicle perception. In order to perceive the environment consistently, the vehicle has to use sensor measurements in a common coordinate system.
The quality of intrinsic calibration generally impacts extrinsic calibration: minor errors in the intrinsic calibration may affect the extrinsic calibration's accuracy. To tackle this problem, jointly optimized intrinsic and extrinsic calibration is proposed in [1], [2]. The approach offers better performance compared to isolated calibration. However, it requires a static target, which is not applicable when a vehicle is in use. For this case, a targetless calibration method is needed.
Classical model-based self-calibration approaches based on Structure-from-Motion (SfM) [3], optical flow [2], and odometry [4] also exist. However, in order to accommodate a broad operating domain and avoid heuristically tuned algorithms, deep-learning-based approaches have been increasingly used for self-calibration (see [5, 6, 7]). Notwithstanding, the comparison between model-based and deep-learning-based approaches remains difficult since there is little-to-no back-to-back comparison, and each approach has a different operating domain, not necessarily automotive. Finally, the deep learning approach primarily relies on supervised networks: this means there is a need for a sufficient amount of labelled, high-quality data for the network to properly generalize.
Our previous work [8] introduced a self-calibration pipeline to perform intrinsic camera calibration using deep neural networks and a back-to-back comparison against a target-based method. In this paper, we extend the pipeline with a novel end-to-end learning network called CaLiCaNet, aiming to calibrate a camera intrinsically from scratch and, simultaneously, extrinsically to a Lidar. This is a distinction from the majority of existing Lidar-Camera calibration methods that assume perfect camera intrinsic parameters to be available.
To measure the impact of the proposed architecture, we use evaluation metrics that are directly comparable to conventional pattern-based calibration and benchmark our proposal to a dataset that reflects the target domain: KITTI Datasets [9]. We also include an ablation study treating the intrinsic and extrinsic networks as sub-networks. The aim is to understand how end-to-end networks perform better than isolated networks for Lidar-Camera calibration.
To summarize, our contributions consist of the following:
1. We propose end-to-end CaLiCaNet, which to the best of our knowledge is the first application of end-to-end learning for the purpose of sensor calibration.
2. We evaluate our approach using KITTI Dataset with the specific metrics directly comparable to the conventional calibration method. This facilitates an informed decision to switch from the state-of-the-practice to the state-of-the-art calibration approach.
3. We propose a streamlined data collection and label generation pipeline for a self-calibrating automotive Lidar-Camera system.
## II Related Works
Lidar-Camera extrinsic calibration is a well-visited topic. Typically, as a first step, a camera is intrinsically calibrated using a geometric pattern like a checkerboard [10]. Then, its extrinsic calibration to Lidar is done by matching 3D features from the Lidar point cloud to the camera equivalent (i.e., in the image plane). The 3D features from the camera image can be generated from the 3D checkerboard plane or 3D reconstruction method, such as SfM, given the intrinsic parameters.
In practice, the camera intrinsic calibration and Lidar-Camera extrinsic calibration are done (1) sequentially and (2) performed using a specialized pattern before the sensor system is used for intended operation (i.e., offline). Calibrating the camera first, followed by the Lidar-Camera, poses the risk of a systemic error propagating to extrinsic calibration when intrinsic parameters are not sufficiently accurate. Offline calibration also does not consider mechanical shifts occurring during the sensor system operation. The shift is especially relevant for an automotive system, requiring modern vehicles equipped with an array of perception sensors to be periodically recalibrated.
To address the first challenge, we refer to the formulation of joint Lidar-Camera calibration as proposed in [1, 11, 12]. The general idea is to reach a more robust and globally optimal solution bounded by physical measurements of multi-modal sensors. With regard to the second challenge, a global optimization is ideally done without the need for a pattern-like target and uses a natural environment to enable the possibility of online calibration. The SfM-based approach [11] comes close to our needs: natural features are detected using SfM and used as inputs to the Bundle Adjustment algorithm to predict intrinsic parameters. Lastly, Iterative Closest Point (ICP) is used for extrinsic calibration.
Notwithstanding, feature detection and matching using an explicit model require a stable and feature-rich environment and, therefore, are generally not very robust when applied to online calibration. Furthermore, in [11], the evaluation using the KITTI dataset is limited to road scenarios (e.g., a stable environment); a similar limitation can also be found in [3], which relies on pillar-like structures for the calibration to work. Model-based feature extractors, such as SfM or odometry, typically require heuristic parameter tuning that needs to be changed once the operating domain (e.g., camera type or environment) shifts.
Acknowledging these general difficulties, deep-learning-based perception sensor calibration is proposed, as seen in [5, 6, 7]. The main idea is to leverage well-validated and pre-trained image detectors (e.g., ImageNet, Inception, ResNet), which are conventionally used for image classification and object detection, as an environmental feature extractor.
For automotive purposes, DeepPTZ [7] is able to calibrate vehicle cameras with the extension we provided in [8]. We also consider LCCNet [6] and CFNet [13], which evaluate their methods directly on the KITTI Dataset for Lidar-Camera calibration. Still, these deep-learning approaches have yet to address the need to jointly calibrate camera intrinsic and Lidar-Camera extrinsic parameters.
Unlike the classical model-based online calibration, we believe there is no learning-based solution yet that performs Lidar-Camera calibration end-to-end; the implicit assumption is that perfect camera intrinsic parameters are always available and remain rigid, notwithstanding the vehicle dynamics (e.g., vibration) and outdoor environment (e.g., temperature) affecting the sensors' mechanical setup.
Finally, to the best of our knowledge, end-to-end learning has been widely investigated, albeit within limited domains (e.g., self-driving [14] or text recognition [15]). Therefore, the behaviour of end-to-end network design has received little attention in the field of vehicle perception, especially sensor calibration.
## III Self-Calibration Problem
### _Problem Formulation_
For the camera, we consider the pinhole model following [16], by which lens distortion is modelled using the Unified Spherical Model (USM). The camera intrinsic calibration is formulated as recovering the intrinsic parameters \(K_{i}\) and distortion coefficient \(\xi_{i}\) from \(i\) input images.
Meanwhile, the Lidar extrinsic calibration is described by a rigid 3D transform consisting of rotation \(R_{i}\) and translation \(t_{i}\). For each image \(i\) and point cloud \((X,Y,Z)_{i}\), a point in the 3D world can be projected using matrix \(K\) to camera pixel coordinate \((u,v)\), assuming an undistorted image and no scaling is needed:
\[\left[\begin{array}{c}u\\ v\\ 1\end{array}\right]=K[Rt]\left[\begin{array}{c}X\\ Y\\ Z\\ 1\end{array}\right] \tag{1}\]
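As an aside, Eq. (1) amounts to a few matrix products; a minimal NumPy sketch of the projection is shown below (variable names are illustrative, and the image is assumed to be already undistorted).

```python
import numpy as np

def project_points(points_xyz, K, R, t):
    """Project Nx3 Lidar points into pixel coordinates (u, v) via Eq. (1).
    K: 3x3 intrinsic matrix, R: 3x3 rotation, t: length-3 translation."""
    Rt = np.hstack([R, t.reshape(3, 1)])                               # 3x4 [R | t]
    points_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])  # Nx4 homogeneous
    pixels_h = (K @ Rt @ points_h.T).T                                 # Nx3 homogeneous pixels
    return pixels_h[:, :2] / pixels_h[:, 2:3]                          # divide by depth
```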
When the intrinsic camera parameters and Lidar-Camera parameters estimation are treated within a unified optimization problem, we get a better chance of coming to a globally optimal solution [17]. The advantage is especially relevant since joint optimization provides the physical constraints recognized by both Lidar and the camera.
Based on the above formulation, we define our goal of end-to-end Lidar-Camera self-calibration as the acquisition of calibration values \(\theta=\{f,\xi,R,t\}\) given sets of image and 3D point cloud pairs from natural scenes. The image sensor
is assumed to be square (focal length \(f\) is symmetrical along the \(x\)- and \(y\)-axis), and the Lidar scan is assumed to be ego-motion-compensated. The constraint is that the measurement does not need to have an explicit geometric pattern nor be taken in a specific environment with certain structure features (e.g., pole or building). Considering the difficulty of modelling such an environment, we defer to a learning-based approach.
### _Deep Learning by Driving: Generating Training Data_
Deep learning approaches for sensor calibration are largely supervised and thus require labelled ground truth. We refer to the learning-by-driving label generation strategy previously proposed in [8]. Our training label generation strategy needs neither human annotation nor infrastructure beyond what already exists in a modern car and the automotive supply chain. First, we pre-calibrate a pair of camera and Lidar sensors with a checkerboard-based, controlled-environment setup to get \(\theta_{0}\). We consider \(\theta_{0}\) as the ground truth calibration values. We then collect the sensor measurements while the car is driving with the target calibration sensors.
The resulting measurements become a dataset of time-series calibrated frames (point cloud and image). Analogous to augmentation, _realistic_ (i.e., based on known calibration deviation over a long-term driving period) \(\theta_{label}\) are generated together along with sets of distorted images and miscalibrated point clouds, as seen in Fig. 1.
## IV End-to-End Design
### _Isolated Networks_
In this work, we adopt networks with public reference implementation as benchmarks and inspirations for our end-to-end design. Following (1), the output of an intrinsic calibration network can be chained directly to an extrinsic calibration network. In this case, the prediction and training of each intrinsic and extrinsic network happen in isolation.
For the intrinsic networks, we adopt some parts of DSNet [7], a Siamese network designed to predict the camera intrinsic calibration \(f\) and \(\xi\) with inputs of two correlated images from different viewpoints. It was originally designed to work with panoramic datasets and pan-tilt-zoom (PTZ) cameras, but we have extended it in [8] to work with vehicle cameras as the target domain. As for extrinsic calibration, inspired by LCCNet [6], we use a volume-cost-based network that correlates the volume cost shared by the image and the projected point cloud depth image. Both isolated networks rely on feature correlation to regress \(\theta\).
### _CaLiCaNet_
CaLiCaNet (Camera Lidar Calibration Network) aims to enable effective end-to-end learning by building upon feature extraction networks to associate features from two correlated sensor measurements and regress them into calibration parameters.
Recall that the DSNet feature map is extracted from two correlated monocular images to enforce a bi-directionality constraint. While this has been shown to help the network focus on relevant features, the features from monocular cameras are inherently two-dimensional, making the network susceptible to the famous "Wile E. Coyote tunnel" illusion (tunnel imagery painted on a wall to induce an illusion of depth). We address the problem by enforcing the constraint from two different sensor modalities: Lidar and camera. Compared to a single-modality approach, environmental attenuations specific to a camera (e.g., low light or fake depth) are mitigated by a Lidar, and vice versa (e.g., dark objects or sparsity).
On the other hand, LCCNet relies on the assumption that the input image is perfectly undistorted and that the camera intrinsic matrix \(\mathbf{K}\) is sufficiently accurate. If the lens undistortion is inaccurate, for example, the Lidar-Camera extrinsic prediction can be overfitted to the centre region of the image. More importantly, LCCNet requires multi-pass inferences to achieve sufficient accuracy for autonomous driving. With our CaLiCaNet, we mitigate this problem by explicitly incorporating intrinsic parameters in the loss function and implementing the Lidar-Camera constraint by means of weight sharing. Further details of CaLiCaNet can be found in Fig. 1 as well as in the following subsections:
Fig. 1: End-to-end Lidar-Camera Self-Calibration Pipeline with CaLiCaNet. The parts denoted by the dashed line are only used during network training.
#### IV-B1 **Pre-processing: ensuring consistent input**
We discard featureless images (e.g., frames dominated by sky or empty rural roads) by computing SIFT descriptors on each frame and keeping only frames whose key-point count exceeds a certain threshold (see [18] for details). Additionally, we ensure an Intersection over Union (IoU) of \(\geq 0.5\) between an RGB image and the corresponding Lidar 2D projection. These measures are intended so that the network sees more consistent input during training and inference.
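A sketch of this filtering step is given below, using OpenCV's SIFT detector (available in recent OpenCV releases); the key-point threshold and the way the overlap mask is computed are illustrative assumptions, not the authors' exact procedure.

```python
import cv2
import numpy as np

def keep_frame(bgr_image, lidar_mask, min_keypoints=200, min_iou=0.5):
    """Return True if the frame is feature-rich and overlaps the Lidar projection.
    lidar_mask: boolean image marking pixels covered by the projected point cloud."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    keypoints = cv2.SIFT_create().detect(gray, None)
    if len(keypoints) < min_keypoints:
        return False                      # discard featureless frames

    # Simplified overlap check: treat the full image as the camera footprint
    # and the projected point-cloud coverage as the Lidar footprint.
    camera_mask = np.ones_like(lidar_mask, dtype=bool)
    intersection = np.logical_and(camera_mask, lidar_mask).sum()
    union = np.logical_or(camera_mask, lidar_mask).sum()
    return (intersection / union) >= min_iou
```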
#### IV-B2 **Network Architecture**
We use ResNet50 [19] as a feature extractor and implement the necessary fully connected layers for generating the correlation between inputs. Since we aim to use single-pass inference from natural scenes, we choose a deeper network block; in contrast, DSNet uses Inceptionv3, and LCCNet uses ResNet18. We also aim to accept arbitrary resolutions for the training images and point clouds, owing to the diversity of vehicle sensors; therefore, we place an adaptive average pooling layer before the fully connected layers. Furthermore, we extend the volume-cost-based correlation module from [6] to work with a Siamese structure. Finally, we opt to use the Parametrized ReLU (PReLU) as our feature extractor activation function; the ReLU weight therefore becomes a trainable parameter. Our functional network architecture as part of the end-to-end pipeline is described in Fig. 1.
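The sketch below illustrates the weight-sharing idea in PyTorch: one ResNet50 backbone with adaptive average pooling and PReLU serves both the RGB image and the projected depth image, and a regression head maps the concatenated features to \(\theta\). The layer sizes and the simple concatenation (standing in for the volume-cost correlation module) are illustrative assumptions, not the exact CaLiCaNet architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50

class SharedExtractor(nn.Module):
    """ResNet50 backbone shared between both branches (Siamese weight sharing).
    The depth image is assumed to be tiled to 3 channels before being passed in."""
    def __init__(self, feat_dim=512):
        super().__init__()
        backbone = resnet50()
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # drop avgpool/fc
        self.pool = nn.AdaptiveAvgPool2d(1)          # accepts arbitrary input resolutions
        self.head = nn.Sequential(nn.Linear(2048, feat_dim), nn.PReLU())

    def forward(self, x):
        return self.head(self.pool(self.features(x)).flatten(1))

class CaLiCaSketch(nn.Module):
    def __init__(self, feat_dim=512):
        super().__init__()
        self.extractor = SharedExtractor(feat_dim)
        self.regressor = nn.Sequential(nn.Linear(2 * feat_dim, 256), nn.PReLU(),
                                       nn.Linear(256, 1 + 1 + 4 + 3))  # f, xi, q, t

    def forward(self, rgb, depth):
        # Same weights extract features from both modalities.
        joint = torch.cat([self.extractor(rgb), self.extractor(depth)], dim=1)
        out = self.regressor(joint)
        f, xi, q, t = out[:, 0], out[:, 1], out[:, 2:6], out[:, 6:9]
        return f, xi, F.normalize(q, dim=1), t       # unit quaternion for rotation
```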
#### IV-B3 **Loss Functions**
The network minimizes the L2-smooth loss \(L\) between the regressed calibration values \(\theta_{pred}\) and the label \(\theta_{label}\). The exception is \(\mathbf{R}\), which is converted to its equivalent quaternion representation \(\mathbf{q}\), so the quaternion distance is treated as its loss. For vehicle perception purposes, misalignment in translation is not as harmful as misalignment in rotation (refer to Fig. 2 for an example of misalignment due to rotation). Therefore, we penalize the network more for predicting the wrong rotation. The training loss \(L_{T}\) thus becomes:
\[L_{T}=\lambda_{f}L_{f}+\lambda_{\xi}L_{\xi}+\lambda_{q}L_{q}+ \lambda_{t}L_{t} \tag{2}\] \[\text{with }\lambda_{q}\geq\lambda_{t}\]
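A sketch of Eq. (2) in PyTorch is shown below, using the smooth L1 (Huber-style) loss as a stand-in for the smooth loss on the scalar and translation terms and a sign-invariant quaternion distance for rotation; the weight values are illustrative, chosen only to respect \(\lambda_{q}\geq\lambda_{t}\).

```python
import torch
import torch.nn.functional as F

def quaternion_distance(q_pred, q_label):
    """Distance between unit quaternions, invariant to the q / -q sign ambiguity."""
    dot = (q_pred * q_label).sum(dim=1).abs().clamp(max=1.0)
    return (1.0 - dot).mean()                    # zero when the rotations coincide

def training_loss(pred, label, lam_f=1.0, lam_xi=1.0, lam_q=2.0, lam_t=1.0):
    """Weighted sum of the per-parameter losses, with lam_q >= lam_t."""
    f_p, xi_p, q_p, t_p = pred
    f_l, xi_l, q_l, t_l = label
    return (lam_f * F.smooth_l1_loss(f_p, f_l)
            + lam_xi * F.smooth_l1_loss(xi_p, xi_l)
            + lam_q * quaternion_distance(q_p, q_l)
            + lam_t * F.smooth_l1_loss(t_p, t_l))
```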
## V Evaluation
We propose two evaluation strategies: the first is evaluating with the KITTI Dataset to check the performance in a realistic situation. This serves as validation of our approach. The second strategy is an ablation study to examine more closely how an end-to-end architecture enhances the network performance; this serves as verification of our network design. Such a two-step approach is a necessary first step in developing safety-critical software running on a vehicle [20].
### _Experimental Setup_
#### V-A1 **Dataset**
To evaluate the calibration accuracy, we use the KITTI Dataset [9], in which the RGB images captured by the right-side monocular camera are used. Ground-truth calibration values \(\theta_{0}\) from the KITTI dataset are generated as follows: we use the OpenCV Omnidirectional Calibration library [21] to perform intrinsic calibration for each daily drive using the corresponding checkerboard recordings. Following the models stated in Section III-A, we obtain \(\theta_{0}\) and the corresponding baseline reprojection root mean square (RMS) error \(\epsilon_{0}\)[16].
#### V-A2 **Label Generation and Training**
The first day of the KITTI drives (2011-09-26) is used to generate training labels, while the remaining four days are used for evaluation. We generated the labels by setting each label's value to \(\theta_{0}\) plus a variable deviation (i.e., the miscalibration). The label generation reflects physical changes in the sensors induced by mechanical and environmental factors during everyday driving and is set to \(\theta_{label}=\{f\pm 100\mathrm{px},\xi\pm 0.48,\mathbf{R}\pm 2.0^{\circ}, \mathbf{t}\pm 0.2\mathrm{m}\}\). The distorted image and misaligned point cloud pairs are generated according to \(\theta_{label}\). In total, we generated approximately \(120\,000\) labels (\(\theta_{label}\), image, and point cloud), split into an 80:20 ratio for training and validation datasets. The Adam Optimizer is used with an initial learning rate of \(3\times 10^{-3}\). The batch size used was 60 with a maximum of 300 epochs.
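The deviation sampling can be sketched as follows; the dictionary keys and the per-axis treatment of \(\mathbf{R}\) and \(\mathbf{t}\) are illustrative assumptions.

```python
import numpy as np

def sample_label(theta0, rng=None):
    """Draw one miscalibrated label theta_label around the ground truth theta0,
    within the deviation ranges stated above."""
    rng = np.random.default_rng() if rng is None else rng
    theta = dict(theta0)
    theta["f"] = theta0["f"] + rng.uniform(-100.0, 100.0)          # focal length (px)
    theta["xi"] = theta0["xi"] + rng.uniform(-0.48, 0.48)          # USM distortion coefficient
    theta["R_deg"] = theta0["R_deg"] + rng.uniform(-2.0, 2.0, 3)   # rotation (deg, per axis)
    theta["t"] = theta0["t"] + rng.uniform(-0.2, 0.2, 3)           # translation (m, per axis)
    # The distorted image and misaligned point cloud for this label are then
    # rendered with these deviated parameters to form one training sample.
    return theta
```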
#### V-A3 **Evaluation Metrics**
For calculating the intrinsic calibration accuracy, we use KITTI's checkerboard recordings, which contain \(i\) corners \(\{p_{1j},p_{2j},\cdots,p_{ij}\}\in\mathbb{R}^{3}\) for the \(j\)-th checkerboard pose \(\mathbf{R}\mathbf{t}\). The purpose is to calculate the Root Mean Squared Error (RMSE) of the reprojection \(\epsilon\) in pixels, given as:
\[\epsilon=\sqrt{\frac{1}{N}\sum_{i}\sum_{j}\left\|\mathbf{K}[\mathbf{R}\mathbf{t}]p_{ij}-x_{ij}\right\|_{2}^{2}} \tag{3}\]
Note that this reprojection error value obtained based on \(\theta_{0}\) is considered as the _baseline_ reprojection error \(\epsilon_{0}\). When our
Fig. 2: End-to-End vs Isolated Self-Calibration. **Intrinsic**: red/green halos indicate pixel-wise differences. **Extrinsic**: projected point cloud to an image representing alignment quality. Red arrows point to misalignment regions of interest.
network infers \(\theta_{pred}\), the detected corners and checkerboard poses are reused to calculate inference reprojection error \(\epsilon_{pred}\). We consider \(\epsilon_{pred}\) to be more representative of calibration quality relative to the comparison of the intrinsic part of \(\theta_{0}\) to \(\theta_{pred}\). We refer to [16] for more in-depth reasoning.
On the other hand, for extrinsic calibration, for the sake of comparability, we opt to follow the convention in prior works [6, 11, 22] by directly comparing the extrinsic part of \(\theta_{0}\) to that of \(\theta_{pred}\), that is the mean error of rotation and translation. In this case, the baseline extrinsic error is assumed to be zero.
### _KITTI Drives_
The evaluation using KITTI drives is intended to show CaLiCaNet accuracy and robustness when applied to diverse driving scenarios. Referring to Fig. 3, the mean error below 0.028 \(\mathrm{px}\) across all four days of driving shows that our approach is comparable to classical checkerboard-based intrinsic calibration in terms of reprojection error. The significance of this low error can be visually inspected in Fig. 2. When the DSNet network is trained in isolation, we noted an error increase of almost 20% relative to our proposed end-to-end approach (see drive 2011-09-28 in Fig. 3). Additionally, the undistortion result in Fig. 2 (second image from the left) corroborates the increased error. Up to 15 pixels differences can be seen on the edge region of the image, where the distortion effect is more present.
A similar effect can be found in the extrinsic calibration: the end-to-end approach results in better Lidar-Camera alignment. Due to the intended use of online calibration, we put a constraint that the inference must happen in a single pass (no iteration). Under this constraint, we achieve a noticeably better mean rotational accuracy compared to when the LCCNet is trained in isolation (0.154\({}^{\circ}\) vs. 0.203\({}^{\circ}\)).
We can infer some robustness by inspecting the statistical fluctuation of the CaLiCaNet predictions in Fig. 3. Across four days of driving with notably diverse sceneries (city, campus, residential, and road; see [9] for more details), the maximum intrinsic and extrinsic calibration errors remain below 0.028 \(\mathrm{px}\) and 0.4\({}^{\circ}\)/0.2 \(\mathrm{m}\), respectively. For the extrinsic error, our outlier values are decisively lower than those of LCCNet, which we attribute to the Lidar-Camera measurement constraint realized by the weight sharing. Notwithstanding, we note the lower accuracy and larger spread in drive 2011-10-03, since the drive is dominated by highway sceneries (i.e., fewer edge- and line-like features to extract). With this finding, we consider it beneficial if the car drives in a feature-rich environment (e.g., inner city) when the self-calibration functionality is active.
Overall, we consider the KITTI Drives evaluation result to appropriately reflect CaLiCaNet's fitness for vehicle sensor self-calibration. It fulfils its purpose of avoiding miscalibration which, as illustrated in Fig. 2, can and will compromise the road-object localization capability of a road vehicle.
### _Ablation Study_
We conducted ablation studies to evaluate the influence of end-to-end training on Lidar-Camera calibration accuracy. Essentially, this study attempts to pinpoint which part of the pipeline is responsible for the improved performance. The evaluation metrics used are the validation terminal losses.
We used the identical setup in Section V-A, but varied our training strategy and the corresponding network architecture. First, the feature extractors are no longer sharing weights. In experiment I, we froze the layers of the RGB extractor before training the network. In experiment II, we froze the layers of the depth extractor instead. In experiment III, all layers of the feature extractors are trainable, but the weight sharing is not activated. Finally, in Experiment IV, the normal end-to-end training with Siamese weight sharing was performed. The results can be seen in Table II.
Fig. 3: Intrinsic and extrinsic calibration error. For intrinsic, the baseline is the best achievable RMS reprojection error obtained with the checkerboard-based calibration. For extrinsic, we calculate the mean error of all 3D axes. DSNet [7] and LCCNet [6] were trained isolated and only single pass inference was used. See Fig. 2 for the accompanying visualization and Table I for mean values across all drives.
It is quite evident that end-to-end training results in lower losses overall. The extrinsic parameter regressor (especially rotation, with a decrease of 21%) benefits the most when trained in both end-to-end and Siamese configurations (see \(\mathbf{R}\)-loss and \(\mathbf{t}\)-loss). However, we have observed that the intrinsic calibration losses (\(f\)-loss and \(\xi\)-loss) do not appear to benefit significantly from end-to-end training (loss decrease of less than 1%). When the weight sharing is activated, we do see a modest decrease in intrinsic losses (3%). This may suggest that the feature-extracting networks cannot properly generalize the relevant features for the camera intrinsic parameters.
The behaviour of the intrinsic calibration network can potentially be explained by insufficient edge coverage in the KITTI checkerboard recordings. The distortion in the far-edge region of the image is not fully modelled and may result in a suboptimal baseline calibration \(\theta_{0}\), and by extension, \(\theta_{label}\). In addition, the USM model used does not consider tangential and radial distortion [23]. Based on this study, we learned the limitation of our label generation strategy, which heavily relies on good checkerboard recordings. For real applications and future work, the training labels should be based on checkerboard recordings with sufficient coverage, and tangential and radial distortion models should be considered.
## VI Conclusion
We have proposed a novel end-to-end learning strategy for jointly self-calibrating a camera and a Lidar sensor with the accompanying deep neural network CaLiCaNet. The pipeline includes a label generation strategy that relies only on a normally operating vehicle and existing calibration infrastructure. It also ensures the quality of the inputs to the self-calibration pipeline by detecting SIFT key-points and checking the Lidar-Camera measurement overlap. Our main contribution lies in the design of CaLiCaNet, consisting of two feature extractor networks (one for the camera image and one for the Lidar point cloud) arranged in a Siamese structure. The design is devised to deal with the limitations of using a single sensor modality (i.e., being bottlenecked by a single sensor's physical limitations) and to mitigate the propagation of error when estimating extrinsic calibration from unverified intrinsic calibration.
Finally, we performed thorough verification and validation based on an ablation study and evaluation with realistic driving scenarios. Our approach is shown to perform well when compared to classical, infrastructure-bounded checkerboard calibration. Furthermore, compared to other learning-based approaches, our approach can outperform isolated intrinsic and extrinsic self-calibration with a single-pass inference, paving its way to adoption in real-world online applications.
|
2305.05247 | Leveraging Generative AI Models for Synthetic Data Generation in
Healthcare: Balancing Research and Privacy | The widespread adoption of electronic health records and digital healthcare
data has created a demand for data-driven insights to enhance patient outcomes,
diagnostics, and treatments. However, using real patient data presents privacy
and regulatory challenges, including compliance with HIPAA and GDPR. Synthetic
data generation, using generative AI models like GANs and VAEs offers a
promising solution to balance valuable data access and patient privacy
protection. In this paper, we examine generative AI models for creating
realistic, anonymized patient data for research and training, explore synthetic
data applications in healthcare, and discuss its benefits, challenges, and
future research directions. Synthetic data has the potential to revolutionize
healthcare by providing anonymized patient data while preserving privacy and
enabling versatile applications. | Aryan Jadon, Shashank Kumar | 2023-05-09T08:12:44Z | http://arxiv.org/abs/2305.05247v1 | Leveraging Generative AI Models for Synthetic Data Generation in Healthcare: Balancing Research and Privacy
###### Abstract
The widespread adoption of electronic health records and digital healthcare data has created a demand for data-driven insights to enhance patient outcomes, diagnostics, and treatments. However, using real patient data presents privacy and regulatory challenges, including compliance with HIPAA [1] and GDPR [2]. Synthetic data generation, using generative AI models like GANs [3] and VAEs [4], offers a promising solution to balance valuable data access and patient privacy protection. In this paper, we examine generative AI models for creating realistic, anonymized patient data for research and training [5], explore synthetic data applications in healthcare, and discuss its benefits, challenges, and future research directions. Synthetic data has the potential to revolutionize healthcare by providing anonymized patient data while preserving privacy and enabling versatile applications.
Generative AI Models, Synthetic Data Generation, Healthcare Research, Patient Privacy, Data Augmentation, Federated Learning, Differential Privacy, Data Anonymization, Transfer Learning, Health Informatics.
## I Introduction
The rapid digitalization of healthcare data and the increasing adoption of electronic health records (EHRs) have opened new opportunities for leveraging data-driven insights to improve patient care, diagnostics, and treatment [6]. However, the use of real patient data often raises concerns regarding privacy and compliance with data protection regulations such as the Health Insurance Portability and Accountability Act (HIPAA) [1] and the General Data Protection Regulation (GDPR) [2]. These concerns create barriers for researchers and healthcare professionals who require access to large amounts of data to develop and validate advanced algorithms and AI models [7].
Synthetic data generation, powered by generative AI models, offers a promising solution to balance the need for data-driven insights with patient privacy. Generative AI models, such as Generative Adversarial Networks (GANs) [3] and Variational Autoencoders (VAEs) [4], learn the underlying structure and distribution of real-world data to generate new, synthetic instances with similar characteristics. By creating realistic, anonymized patient data, these models ensure that sensitive patient information is protected while providing researchers with valuable data for analysis and training purposes [5].
In this paper, we explore the role of generative AI models in generating synthetic patient data and discuss their potential applications, benefits, and challenges in the healthcare domain. We also present future research directions and the potential impact of synthetic data on healthcare research and practice, including its integration with other privacy-preserving techniques such as differential privacy [9], federated learning [10] and data anonymization [11]. This comprehensive examination of synthetic data generation using generative AI models aims to provide a foundation for understanding the value of this approach in healthcare and its potential to revolutionize research, diagnostics, and treatment while safeguarding patient privacy. Our Github Repo can be found at [https://github.com/aryan-jadon/Synthetic-Data-Medical-Generative-AI](https://github.com/aryan-jadon/Synthetic-Data-Medical-Generative-AI).
## II Generative AI Models for Synthetic Data Generation
Generative AI models, which include Generative Adversarial Networks (GANs) [3] and Variational Autoencoders (VAEs) [4], are designed to learn the underlying structure and distribution of real-world data and generate new, synthetic instances with similar characteristics. These models have shown remarkable success in generating high-quality synthetic data across various domains, including healthcare.
Fig. 1: Generative adversarial networks (GAN) based efficient sampling [8]
1. **Generative Adversarial Networks (GANs):** GANs consist of two neural networks, a generator and a discriminator, that compete with each other in a minimax game. The generator creates synthetic data samples, while the discriminator distinguishes between real and generated samples. The generator improves its output by attempting to deceive the discriminator, resulting in increasingly realistic synthetic data (a minimal training-loop sketch is given after this list).
2. **Variational Autoencoders (VAEs):** VAEs are a class of generative models that combine autoencoders with variational inference. They learn a probabilistic mapping between data and latent space, enabling the generation of new data samples by sampling from the latent space. VAEs have been used to generate realistic synthetic data while maintaining a balance between data fidelity and diversity.
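To make the adversarial setup concrete, below is a minimal PyTorch sketch of one GAN training step on fixed-length, tabular patient records; the architectures, dimensions, and hyperparameters are illustrative only and would need domain-specific design (and a privacy evaluation) in practice.

```python
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM = 32, 64   # illustrative sizes for a tabular EHR-style record

generator = nn.Sequential(nn.Linear(LATENT_DIM, 128), nn.ReLU(),
                          nn.Linear(128, DATA_DIM))
discriminator = nn.Sequential(nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
                              nn.Linear(128, 1))

bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch):
    """One minimax update: the discriminator learns to separate real from
    generated records, then the generator learns to fool it."""
    b = real_batch.size(0)
    fake_batch = generator(torch.randn(b, LATENT_DIM))

    # Discriminator step: real records -> 1, generated records -> 0.
    d_opt.zero_grad()
    d_loss = (bce(discriminator(real_batch), torch.ones(b, 1)) +
              bce(discriminator(fake_batch.detach()), torch.zeros(b, 1)))
    d_loss.backward()
    d_opt.step()

    # Generator step: push generated records to be classified as real.
    g_opt.zero_grad()
    g_loss = bce(discriminator(fake_batch), torch.ones(b, 1))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```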
### _Synthetic Data Generation Process_
The process of generating synthetic data using generative AI models involves three main steps:
1. **Training generative models on real-world data:** The model is trained using a dataset of real patient data, which allows it to learn the underlying structure, relationships, and distributions present in the data.
2. **Generating new, synthetic instances with similar characteristics:** Once trained, the generative model can create synthetic data samples that closely resemble the real data while preserving the underlying relationships and patterns. This process ensures that the generated data is both realistic and anonymized.
3. **Evaluating the quality and utility of synthetic data:** The generated synthetic data should be evaluated based on its resemblance to real data, its ability to maintain the underlying relationships and patterns, and its utility for the intended application, such as research or AI model training.
By leveraging generative AI models for synthetic data generation, healthcare researchers and professionals can access realistic, anonymized patient data while addressing concerns related to patient privacy and regulatory compliance.
## III Applications of Synthetic Data in Healthcare
Synthetic data generated using generative AI models have a wide range of applications in healthcare [14], enabling researchers and professionals to access realistic, anonymized patient data while maintaining privacy and compliance with data protection regulations. Some of the key applications are:
### _AI Model Training and Validation_
Access to large, diverse datasets is crucial for training and validating AI models in healthcare. Synthetic data provides an alternative to real patient data, enabling researchers to develop and evaluate algorithms without the risk of exposing sensitive information.
* **Data augmentation:** Synthetic data can be used to augment existing real-world datasets, particularly when the available data is scarce or imbalanced. This augmentation can improve the performance and generalizability of AI models across different patient populations and clinical scenarios [15].
* **Model validation:** By generating synthetic data that closely resembles real-world data, researchers can validate the performance of AI models and ensure that they are robust and reliable in real-world settings [16].
### _Simulations for Medical Training and Decision Support_
Synthetic data can be used to create realistic simulations for medical training and decision support systems, allowing healthcare professionals to practice and improve their skills without the risk of harming real patients.
* **Medical training:** Synthetic data can be used to develop virtual patient cases and scenarios for training medical students, residents, and other healthcare professionals. These simulations provide a safe and controlled environment for learning and practicing clinical skills, diagnosis, and treatment planning [17].
Fig. 3: Application of Variational Autoencoders in Medical Experiments [13]
Fig. 2: Representative Architecture of wide variety of GAN’s [12]
* **Decision support systems:** Synthetic data can be integrated into clinical decision support systems to provide real-time guidance and recommendations based on the analysis of anonymized patient data. This approach can help healthcare professionals make more informed and personalized treatment decisions, ultimately leading to better patient outcomes. [18]
### _Healthcare Research_
Synthetic data allows researchers to conduct large-scale studies and analyses without the need for accessing real patient data, reducing the risk of privacy breaches and ensuring compliance with data protection regulations.
* **Epidemiological studies:** Researchers can use synthetic data to study the distribution, determinants, and outcomes of health-related conditions and diseases. This approach can provide valuable insights into the risk factors and preventive measures for various health issues, ultimately informing public health policies and interventions [19].
* **Clinical trials:** Synthetic data can be employed to simulate patient populations, treatment groups, and outcomes in clinical trials. This approach can help researchers optimize trial designs, estimate the potential efficacy and safety of interventions, and identify potential biases and confounders [20].
By leveraging synthetic data in these various applications, healthcare researchers and professionals can gain valuable insights, develop advanced AI models, and improve clinical practice while safeguarding patient privacy and maintaining compliance with data protection regulations.
## IV Benefits and Challenges of Synthetic Data in Healthcare
The use of synthetic data in healthcare offers several benefits but also presents challenges that must be considered and addressed to ensure its effective and responsible application.
### _Benefits_
1. **Privacy preservation:** Synthetic data helps protect patient privacy by generating anonymized data instances that closely resemble real-world data without exposing sensitive information. This approach ensures compliance with data protection regulations such as HIPAA [21] and GDPR [22].
2. **Cost and time savings:** Accessing and sharing real-world patient data often requires significant resources and time due to the need for data anonymization, consent management, and compliance with legal and ethical requirements. Synthetic data can alleviate these burdens, enabling researchers and professionals to access and share data more efficiently.
3. **Data utility and quality:** Synthetic data can maintain the underlying relationships and patterns present in real-world data, ensuring its utility and quality for research, AI model training, and other applications. In addition, synthetic data can be used for data augmentation to enhance the performance and generalizability of AI models.
### _Challenges_
1. **Maintaining data fidelity and diversity:** Generating synthetic data that accurately represents real-world data while preserving diversity is a complex task. Overfitting or generating unrealistic data may limit the utility of synthetic data for research and AI model training [23].
2. **Potential biases:** Synthetic data generated from real-world data may inadvertently reproduce or amplify existing biases, which could negatively impact the fairness and effectiveness of AI models and research outcomes. It is crucial to identify and mitigate potential biases in both real and synthetic data to ensure equitable and accurate results [24].
3. **Model complexity and computational resources:** Generative AI models, such as GANs and VAEs, can be computationally expensive and complex, requiring substantial resources for training and optimization. Researchers and professionals must carefully consider the trade-offs between model complexity, data quality, and computational resources.
By addressing these challenges and harnessing the benefits of synthetic data, healthcare researchers and professionals can effectively use generative AI models to access realistic, anonymized patient data, facilitating advances in research, diagnostics, and treatment while maintaining patient privacy and compliance with data protection regulations.
## V Future Research Directions and Impact
There are several promising future research directions and potential impacts of synthetic data generation in healthcare:
1. **Advances in generative AI models:** As generative AI models continue to evolve, improvements in the quality, diversity, and fidelity of synthetic data can be expected. This will enable more accurate, versatile applications in healthcare research, AI model training, and clinical practice.
2. **Integration with privacy-preserving techniques:** Combining synthetic data generation with other privacy-preserving techniques, such as differential privacy and federated learning, can further enhance the privacy and utility of data for healthcare applications while minimizing the risk of privacy breaches.
3. **Expanding applications in healthcare:** Synthetic data generation can be applied to a wide range of healthcare domains, including personalized medicine, telemedicine, and public health surveillance. By providing access to realistic, anonymized patient data, synthetic data can help accelerate research and improve patient outcomes in these areas.
The successful development and application of synthetic data generation in healthcare have the potential to revolutionize the way researchers and professionals access and use patient
data, ultimately leading to significant advancements in diagnostics, treatment, and overall patient care, while safeguarding privacy and ensuring compliance with data protection regulations.
## VI Conclusion
In this paper, we explored the role of generative AI models, such as GANs and VAEs, in generating realistic, anonymized synthetic patient data for research and training purposes in healthcare. By addressing the challenges associated with privacy and regulatory compliance, synthetic data can facilitate advancements in AI model development, medical training, healthcare research, and decision support systems. As generative AI models continue to evolve, future research directions include improving the quality and diversity of synthetic data, integrating privacy-preserving techniques, and expanding applications across various healthcare domains.
The potential impact of synthetic data generation in healthcare is immense, with the capability to revolutionize research, diagnostics, and treatment while maintaining patient privacy and compliance with data protection regulations. The successful application of synthetic data can ultimately lead to improved patient outcomes, more efficient healthcare systems, and a better understanding of the complex factors that influence human health.
|
2303.07906 | A hybrid quantum-classical classifier based on branching multi-scale
entanglement renormalization ansatz | Label propagation is an essential semi-supervised learning method based on
graphs, which has a broad spectrum of applications in pattern recognition and
data mining. This paper proposes a quantum semi-supervised classifier based on
label propagation. Considering the difficulty of graph construction, we develop
a variational quantum label propagation (VQLP) method. In this method, a
locally parameterized quantum circuit is created to reduce the parameters
required in the optimization. Furthermore, we design a quantum semi-supervised
binary classifier based on hybrid Bell and $Z$ bases measurement, which has
shallower circuit depth and is more suitable for implementation on near-term
quantum devices. We demonstrate the performance of the quantum semi-supervised
classifier on the Iris data set, and the simulation results show that the
quantum semi-supervised classifier has higher classification accuracy than the
swap test classifier. This work opens a new path to quantum machine learning
based on graphs. | Yan-Yan Hou, Jian Li, Xiu-Bo Chen, Chong-Qiang Ye | 2023-03-14T13:46:45Z | http://arxiv.org/abs/2303.07906v1 | # Quantum adversarial metric learning model based on triplet loss function
###### Abstract
Metric learning plays an essential role in image analysis and classification, and it has attracted more and more attention. In this paper, we propose a quantum adversarial metric learning (QAML) model based on the triplet loss function, where samples are embedded into the high-dimensional Hilbert space and the optimal metric is obtained by minimizing the triplet loss function. The QAML model employs entanglement and interference to build superposition states for triplet samples so that only one parameterized quantum circuit is needed to calculate sample distances, which reduces the demand for quantum resources. Considering that the QAML model is fragile to adversarial attacks, an adversarial sample generation strategy is designed based on the quantum gradient ascent method, effectively improving the robustness against functional adversarial attacks. Simulation results show that the QAML model can effectively distinguish samples of the MNIST and Iris datasets and has higher \(\epsilon\)-robustness accuracy than general quantum metric learning. Metric learning is a fundamental research problem of machine learning. As a subroutine of classification and clustering tasks, the QAML model opens an avenue for exploring quantum advantages in machine learning.
Keywords:Metric learning hybrid quantum-classical algorithm quantum machine learning
## 1 Introduction
Machine learning has developed rapidly in recent years and is widely used in artificial intelligence and big data fields. Quantum computing can efficiently process data in exponentially sizeable Hilbert space and is expected to achieve dramatic speedups in solving some classical computational problems. Quantum machine learning, as the interplay between machine learning and quantum physics, brings unprecedented promise to both disciplines. On the one hand, machine learning methods have been extended to quantum world and applied to the data analysis in quantum physics [1]. On the other hand, quantum machine learning exploits quantum properties, such as entanglement and superposition, to revolutionize classical machine learning algorithms and achieves computational advantages over classical algorithms [2]. Metric Learning is the core problem of some machine learning tasks [3], such as \(k\)-nearest neighbor, support vector machines, radial basis function networks, and \(k\)-means clustering. Its core work is to construct an appropriate distance metric that maximizes the similarities of samples of the same class and minimizes the similarities of samples from different classes. Linear and nonlinear methods can be used to implement metric learning. In general, linear models have a limited number of parameters and are unsuitable for characterizing high-order features of samples. Recently, neural networks have been adopted to establish nonlinear metric learning models, and promising results have been achieved in face recognition and feature matching.
Classical metric learning models usually extract low-dimensional representations of samples, which lose some details of the samples. Quantum states are in high-dimensional Hilbert spaces, and their dimensions grow exponentially with the number of qubits. This property enables quantum models to learn high-dimensional representations of samples without explicitly invoking a kernel function, and this advantage becomes more pronounced as the dimension increases, with the potential for exponential growth in computing speed. In recent years, researchers began to study how to adopt quantum methods to implement metric learning. Lloyd [4] firstly proposed a quantum metric learning model based on hybrid quantum-classical algorithms: a parameterized quantum circuit is used to map samples into high-dimensional Hilbert space, and the optimal metric model is obtained by optimizing a loss function based on Hilbert-Schmidt distances. This model achieves better effects in classification tasks. Nhat [5] introduced quantum explicit and implicit metric learning approaches from the perspective of whether the target space is known or not. The research establishes the relationship between quantum metric learning and other quantum supervised learning models. The above two algorithms mainly focus on classification tasks. Metric learning is a fundamental problem in machine learning, which can be applied not only to classification but also to clustering, face recognition, and other issues. In our research, we are devoted to constructing a quantum metric learning model that can serve various machine learning tasks.
Angular distance is a vital metric that quantifies the included angle between normalized samples [6]. It focuses on differences in the direction of samples and is more robust to variations of local features [7], [8]. Considering the similarity between angular distances of classical data and inner products of quantum states, we design a quantum adversarial metric learning (QAML) model based on inner-product distances, which is more suitable for image-related tasks. Unlike other quantum metric learning models, the QAML model maps samples from different classes into quantum superposition states and utilizes simple interference circuits to compute metric distances for multiple sample pairs in parallel. Furthermore, quantum systems in high-dimensional Hilbert space have counter-intuitive geometrical properties [9]. A QAML model trained only on natural samples is vulnerable to adversarial attacks, under which some samples move closer to the false class, so the model easily makes wrong judgements [10]. To solve this issue, we construct adversarial samples based on natural samples. The model's robustness is improved by alternately training on natural and adversarial samples. Our work has two main contributions: (i) We explore a quantum method to compute the triplet loss function, which utilizes quantum superposition states to calculate sample distances in parallel and reduces the demand for quantum resources. (ii) We design an adversarial sample generation strategy based on quantum gradient ascent, and the robustness of the QAML model is significantly improved by alternately training on generated adversarial samples and natural samples. Simulation results show that the QAML model separates samples by a larger margin and has better robustness against functional adversarial attacks than general quantum metric learning models.
The paper is organized as follows. Section 2 gives the basic method of the QAML model. Section 3 verifies the performance of the QAML model. Finally, we draw conclusions and discuss future research directions.
## 2 Quantum adversarial metric learning
### Preliminary theory
The triplet loss function is a widely used strategy for metric learning [11], commonly applied in image retrieval and face recognition. A triplet set \((x_{i}^{a},x_{i}^{p},x_{i}^{n})\) consists of three samples from two classes, where the anchor sample \(x_{i}^{a}\) and the positive sample \(x_{i}^{p}\) belong to the same class, and the negative sample \(x_{i}^{n}\) comes from another class. The goal of metric learning based on the triplet loss function is to find the optimal embedded representation space, in which positive sample pairs \((x_{i}^{a},x_{i}^{p})\) are pulled together and negative sample pairs \((x_{i}^{a},x_{i}^{n})\) are pushed apart. Fig. 1 shows how the sample space changes during metric learning. As we can see, samples from different classes become linearly separable through metric learning. Fig. 2 shows the schematic of the metric learning model based on the triplet loss function. Firstly, the model prepares multiple triplet sets, and one triplet set \((x_{i}^{a},x_{i}^{p},x_{i}^{n})\) is sent to convolutional neural networks (CNNs), where three CNNs with the same structure and parameters are needed. Each CNN acts on one sample of the triplet set to extract its features. The triplet loss function is obtained by computing metric distances for multiple sample pairs of triplet sets. In the learning process, the optimal parameters of the CNNs are obtained by minimizing the triplet loss function. Let one batch of samples include \(N_{1}\) triplet sets. The triplet loss function is
\[L=\frac{1}{N_{1}}\sum_{i=1}^{N_{1}}[D(g(x_{i}^{a}),g(x_{i}^{p}))-D(g(x_{i}^{a}),g(x_{i}^{n}))+\mu]_{+}, \tag{1}\]
where \(g(\cdot)\) represents the function mapping input samples to the embedded representation space, \(D(\cdot,\cdot)\) denotes the distance between a sample pair in the embedded representation space, and \([\,\cdot\,]_{+}=\max(0,\,\cdot\,)\) represents the hinge loss function. The goal of metric learning is to learn a metric that makes the distances between negative sample pairs greater than the distances between the corresponding positive sample pairs by at least the specified margin \(\mu\in\mathbb{R}^{+}\) [6].
Figure 1: Sample space change in metric learning process. Before metric learning, the distances between negative sample pairs are smaller, and samples from different classes are difficult to separate by linear functions. After metric learning, the distances between negative sample pairs become larger, and a large margin separates samples from different classes. Linear functions can easily separate positive and negative samples.
Figure 2: The schematic of the metric learning model based on the triplet loss function. A triplet set includes an anchor sample, a positive sample, and a negative sample. The input consists of a batch of triplet sets, and only one triplet set serves as input in each iteration. Three CNNs with the same structure and parameters are used to map the triplet set into the embedded representation space. Each CNN, consisting of multiple convolution, pooling, and fully connected layers, is responsible for extracting the features of the samples. The triplet loss function is further constructed based on the extracted features.
In the triplet loss function, \(D(g(x_{i}^{a}),g(x_{i}^{p}))\) penalizes the positive sample pair \((x_{i}^{a},x_{i}^{p})\) that is too far apart, and \(D(g(x_{i}^{a}),g(x_{i}^{n}))\) penalizes the negative sample pair \((x_{i}^{a},x_{i}^{n})\) whose distance is less than the margin \(\mu\).
Metric learning can adopt various distance metrics. The angular distance metric is robust to variations in image illumination and contrast [7], which makes it an effective choice for metric learning tasks. In this method, samples need to be normalized to unit vectors in advance. The distance between a positive sample pair is
\[D(g(x_{i}^{a}),g(x_{i}^{p}))=1-\frac{|g(x_{i}^{a})\cdot g(x_{i}^{p})|}{||g(x_{i}^{a})||_{2}\,||g(x_{i}^{p})||_{2}}, \tag{2}\]
where \(|\ |\) and \(||\ ||_{2}\) represent \(l_{1}\)-norm and \(l_{2}\)-norm, respectively, and \(\cdot\) denotes the inner product operation for two vectors. The distance between negative sample pairs can be calculated in the same way.
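For concreteness, the following NumPy sketch evaluates Equ. 1 with the angular distance of Equ. 2 on pre-computed embeddings. The embedding dimension, batch size, and margin value are illustrative choices, not taken from the paper.

```python
import numpy as np

def angular_distance(u, v):
    """Angular-style distance of Equ. 2: 1 - |u . v| / (||u||_2 ||v||_2)."""
    return 1.0 - np.abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))

def triplet_loss(anchors, positives, negatives, mu=0.2):
    """Triplet loss of Equ. 1, averaged over a batch of N1 triplet sets.
    Each argument is an array of shape (N1, dim) holding embedded samples."""
    losses = []
    for a, p, n in zip(anchors, positives, negatives):
        d_pos = angular_distance(a, p)               # pull positive pairs together
        d_neg = angular_distance(a, n)               # push negative pairs apart
        losses.append(max(0.0, d_pos - d_neg + mu))  # hinge [.]_+
    return float(np.mean(losses))

# Toy usage with random embeddings (illustrative only).
rng = np.random.default_rng(0)
a, p, n = rng.normal(size=(3, 5, 8))
print(triplet_loss(a, p, n, mu=0.2))
```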
### Framework of quantum metric learning model
For most machine learning tasks, it is often challenging to adopt simple linear functions to distinguish samples of different classes. According to kernel theory [12], samples in high-dimensional feature space have better distinguishability. Classical machine learning algorithms usually adopt kernel methods to map samples to high-dimensional feature space, where the mapped samples can be separated by simple linear functions. Quantum states with \(n\)-qubits are in \(2^{n}\)-dimensional Hilbert space, where quantum systems characterize the nonlinear features of data and efficiently process data through a series of linear unitary operations.
In the QAML model, samples should be firstly mapped into quantum systems by qubit encoding. The Hilbert space after encoding usually does not correspond to the optimal space for separating samples of different classes. To search for the optimal Hilbert space, the QAML model performs parameterized quantum circuits \(W(\theta)\) on the encoded states [13]. As different variable parameters \(\theta\) correspond to different mapping spaces, we can search the optimal space by modifying parameters \(\theta=(\theta_{1}^{1},...,\theta_{i}^{j})\). As long as \(W(\theta)\) has strong expressivity, we can find the optimal Hilbert space by optimizing the loss function of metric learning [14; 15]. \(W(\theta)\) with different structures and layers have different expressivity. The more layers \(W(\theta)\) has, the stronger the expressivity, and the easier it is to find the optimal metric space.
The classical metric learning model based on the triplet loss function requires three identical CNNs to map triplet sets \((x_{i}^{a},x_{i}^{p},x_{i}^{n})\) into the novel Hilbert space. To reduce the demand for quantum resources, we construct a quantum superposition state to represent one triplet set, so that a triplet set needs only one \(W(\theta)\) to transform it into Hilbert space. The core step in building the loss function is computing inner products between sample pairs, but \(W(\theta)\) and the subsequent conjugate operation \(W^{\dagger}(\theta)\) would counteract each other's effects. To solve this issue, we add a repeated encoding operation after \(W(\theta)\). It is worth mentioning that the repeated encoding operation also helps construct high-dimensional features of the samples.
The QAML model is mathematically represented as the minimization of the loss function with respect to the parameters \(\theta\). The triplet loss function consists of
metric distances for positive and negative sample pairs, so the core task of the QAML model is to construct the metric distances for sample pairs in the transformed Hilbert space. The mapped samples \(h(x_{i}^{a})/||h(x_{i}^{a})||_{2}\) and \(h(x_{i}^{p})/||h(x_{i}^{p})||_{2}\) of Equ.2 are replaced by the quantum states of \(x_{i}^{a}\) and \(x_{i}^{p}\); the second term of Equ.2 is then converted to the inner product between the quantum states of the positive sample pair \((x_{i}^{a},x_{i}^{p})\), which can be obtained with the Hadamard-classifier method [12]. The triplet loss function can be viewed as the weighted sum of the inner products of sample pairs \((x_{i}^{a},x_{i}^{p})\) and the inner products of sample pairs \((x_{i}^{a},x_{i}^{n})\). With the help of ancilla registers, the triplet set can be prepared in superposition-state form. According to the entanglement property of superposition states, the triplet loss function can be implemented with one parameterized quantum circuit. Then, the triplet loss function value is transmitted to a classical optimizer, and the parameters are optimized until the optimal metric is obtained. The QAML model constructs adversarial samples according to the gradient of natural samples and alternately trains on natural and adversarial samples to improve the model's robustness against adversarial attacks. The schematic of the QAML model is shown in Fig.3.
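As a purely classical reference point for the alternating training scheme just described, the sketch below generates adversarial triplets by one gradient-ascent (FGSM-style) step on the loss and alternates optimization steps on natural and adversarial triplets. The model, loss function, data loader, and step size `eps` are illustrative placeholders and do not correspond to the quantum gradient-ascent construction itself.

```python
import torch

def fgsm_adversarial(model, xa, xp, xn, loss_fn, eps=0.1):
    """One gradient-ascent step on a natural triplet (classical stand-in for
    the adversarial-sample generation described above)."""
    xa, xp, xn = (x.clone().detach().requires_grad_(True) for x in (xa, xp, xn))
    loss_fn(model(xa), model(xp), model(xn)).backward()
    # Move each sample in the direction that increases the loss.
    return tuple((x + eps * x.grad.sign()).detach() for x in (xa, xp, xn))

def train_alternately(model, loader, loss_fn, opt, eps=0.1, epochs=10):
    """Alternate optimization steps on natural and adversarial triplets."""
    for _ in range(epochs):
        for xa, xp, xn in loader:
            opt.zero_grad()                                    # natural step
            loss_fn(model(xa), model(xp), model(xn)).backward()
            opt.step()
            adv = fgsm_adversarial(model, xa, xp, xn, loss_fn, eps)
            opt.zero_grad()                                    # adversarial step
            loss_fn(*(model(x) for x in adv)).backward()
            opt.step()
```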
|
2305.15054 | A Mechanistic Interpretation of Arithmetic Reasoning in Language Models
using Causal Mediation Analysis | Mathematical reasoning in large language models (LMs) has garnered
significant attention in recent work, but there is a limited understanding of
how these models process and store information related to arithmetic tasks
within their architecture. In order to improve our understanding of this aspect
of language models, we present a mechanistic interpretation of
Transformer-based LMs on arithmetic questions using a causal mediation analysis
framework. By intervening on the activations of specific model components and
measuring the resulting changes in predicted probabilities, we identify the
subset of parameters responsible for specific predictions. This provides
insights into how information related to arithmetic is processed by LMs. Our
experimental results indicate that LMs process the input by transmitting the
information relevant to the query from mid-sequence early layers to the final
token using the attention mechanism. Then, this information is processed by a
set of MLP modules, which generate result-related information that is
incorporated into the residual stream. To assess the specificity of the
observed activation dynamics, we compare the effects of different model
components on arithmetic queries with other tasks, including number retrieval
from prompts and factual knowledge questions. | Alessandro Stolfo, Yonatan Belinkov, Mrinmaya Sachan | 2023-05-24T11:43:47Z | http://arxiv.org/abs/2305.15054v2 | # Understanding Arithmetic Reasoning in Language Models
###### Abstract
Mathematical reasoning in large language models (LLMs) has garnered attention in recent research, but there is limited understanding of how these models process and store information related to arithmetic tasks. In this paper, we present a mechanistic interpretation of LLMs for arithmetic-based questions using a causal mediation analysis framework. By intervening on the activations of specific model components and measuring the resulting changes in predicted probabilities, we identify the subset of parameters responsible for specific predictions. We analyze two pre-trained language models with different sizes (2.8B and 6B parameters). Experimental results reveal that a small set of mid-late layers significantly affect predictions for arithmetic-based questions, with distinct activation patterns for correct and wrong predictions. We also investigate the role of the attention mechanism and compare the model's activation patterns for arithmetic queries with the prediction of factual knowledge. Our findings provide insights into the mechanistic interpretation of LLMs for arithmetic tasks and highlight the specific components involved in arithmetic reasoning.
## 1 Introduction
Mathematical reasoning is a capability that has been shown to emerge in large language models (LLMs) (Wei et al., 2022; Chowdhery et al., 2022; Bubeck et al., 2023). Recent literature shows a multitude of works proposing methods to improve the performance of LLMs on math benchmark datasets through enhanced pre-training (Geva et al., 2020; Spokoyny et al., 2021) or specific prompting techniques (Wei et al., 2022; Kojima et al., 2022, _inter alia_). However, there has been little effort in analyzing the inner workings of these models and the way they store and process information relative to math and arithmetic tasks.
In this paper, we present a set of analyses aimed at mechanistically interpreting large language models on the task of answering arithmetic-based questions (e.g., _"What is the product of 11 and 17?"_). We hypothesize that the computations involved in reasoning about arithmetic problems are carried out by a specific subset of the network. We test our hypothesis by adopting a causal mediation analysis framework (Vig et al., 2020), where the model is seen as a causal graph going from inputs to outputs and the model components (e.g., neurons or layers) are seen as mediators (Pearl, 2001). Within this framework, we assess the impact of a mediator on
Figure 1: By intervening on the activation values of specific components within a language model and computing the corresponding effects, we are able to identify the subset of parameters responsible for specific predictions.
the observed output behavior by conducting controlled interventions on the activations of specific subsets of the model and examining the resulting changes in the probabilities assigned to different numerical predictions (an illustration of this process is provided in Figure 1).
We test our hypothesis on two pre-trained language models with different sizes: 2.8B and 6B parameters. Our experimental results show that the MLP of a small set of mid-late layers has a large effect specifically on the predictions of arithmetic-based questions, with activation patterns that differ between correct and wrong predictions. We observe similar activation behaviors when representing numbers using the standard Arabic notation (e.g., the token "1") and numeral words ("one"). We additionally investigate the role of the attention mechanism at each layer of the model. Finally, we compare the dynamics of the model's activation on answering arithmetic queries to the prediction of factual knowledge, highlighting some common elements and differences in the components involved.
## 2 Related Work
Mechanistic Interpretability.The objective of mechanistic interpretability is to reverse engineer model computation into human-understandable components, aiming to discover, comprehend, and validate the algorithms (circuits) implemented by the model weights (Rauker et al., 2023). Early work in this area analyzed the activation values of single neurons when generating text using LSTMs (Karpathy et al., 2015). A multitude of studies have later focused on interpreting weights and intermediate representations in neural networks (Olah et al., 2017, 2018, 2020; Voss et al., 2021; Goh et al., 2021) and on how information is processed by Transformer-based (Vaswani et al., 2017) language models (Geva et al., 2021, 2022; Olsson et al., 2022; Nanda et al., 2023). Although not strictly mechanistic, other recent studies have analyzed the hidden representations and behavior of inner components of LLMs (Belrose et al., 2023; Gurnee et al., 2023; Bills et al., 2023).
Causality-based Interpretability.Causal mediation analysis (CMA) is an important tool that is used to effectively attribute causal effect of mediators on an outcome variable (Pearl, 2001). This paradigm was applied to mechanistically interpret language models by Vig et al. (2020), who propose a CMA-based framework to investigate gender bias. Variants of this approach were later applied to investigate the inner working of pre-trained language models on other tasks such as subject-verb agreement (Finlayson et al., 2021), natural language inference (Geiger et al., 2021), indirect object identification (Wang et al., 2022), and to study their retention of factual knowledge (Meng et al., 2022).
Math and Arithmetic Reasoning.A growing body of work proposed methods to analyze the performance and robustness of LLMs on tasks involving mathematical reasoning (Pal and Baral, 2021; Piekos et al., 2021; Razeghi et al., 2022; Cobbe et al., 2021; Mishra et al., 2022). In this area, Stolfo et al. (2022) use a causally-grounded approach to quantify the robustness of LLMs. However, the proposed formulation is limited to behavioral investigation: the model is treated as a black box, with no insight into its inner mechanisms.
## 3 Methodology
### Background and Task
We denote an autoregressive language model as \(\mathcal{M}:\mathcal{X}\rightarrow\mathcal{P}\). The model operates over a vocabulary \(V\) and takes a token sequence \(x=[x_{1},...,x_{T}]\in\mathcal{X}\), where each \(x_{i}\in V\). \(\mathcal{M}\) generates a probability distribution \(\mathbb{P}\in\mathcal{P}:\mathbb{R}^{|V|}\rightarrow[0,1]\) that predicts possible next tokens following the sequence \(x\). We focus our study on decoder-only Transformer-based models (Vaswani et al., 2017), within which we analyze the model's states at the last token of the input. Specifically, we consider models that represent a slight variation of the standard GPT paradigm, as they utilize parallel attention and rotary positional encodings. The internal computation of the model's hidden states is carried out as follows:
\[h_{T}^{(l)}=h_{T}^{(l-1)}+a_{T}^{(l)}+m_{T}^{(l)}, \tag{1}\] \[a_{T}^{(l)}=\operatorname{attn}^{(l)}\left(h_{1}^{(l-1)},\ldots,h_{T}^{(l-1)}\right),\] \[m_{T}^{(l)}=W_{proj}^{(l)}\,\sigma\left(W_{fc}^{(l)}\,\gamma\left(h_{T}^{(l-1)}\right)\right),\]
where at layer \(l\), \(\sigma\) is the sigmoid nonlinearity, \(\gamma\) is a normalizing nonlinearity, \(W_{fc}^{(l)}\) and \(W_{proj}^{(l)}\) are two matrices that parameterize the MLP of the Transformer block and \(\operatorname{attn}^{(l)}\) is the attention mechanism.
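The residual update of Eq. 1 can be summarized in a few lines. The sketch below follows the text in writing \(\sigma\) as a sigmoid and treating \(\gamma\) and the attention block as abstract callables; it is a schematic restatement of the equation, not the implementation used in the analyzed models, and all names are illustrative.

```python
import torch

def block_update(h_prev, attn, gamma, W_fc, W_proj):
    """One Transformer-block update as in Eq. 1: attention and MLP both read
    the same input h^{(l-1)} (parallel attention) and are added to the residual.

    h_prev : hidden states of the first T tokens at layer l-1, shape (T, d).
    attn   : callable implementing attn^{(l)} over the whole prefix.
    gamma  : normalizing nonlinearity (e.g. a LayerNorm instance).
    W_fc   : (d_ff, d) weight;  W_proj : (d, d_ff) weight.
    """
    a = attn(h_prev)                                        # a^{(l)} for all tokens
    m = torch.sigmoid(gamma(h_prev) @ W_fc.T) @ W_proj.T    # m^{(l)} = W_proj sigma(W_fc gamma(h))
    return h_prev + a + m                                   # h^{(l)}
```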
We consider the task of computing the result of bi-variate arithmetic operations. Each arithmetic query consists of two operands \(n_{1},n_{2}\) and an operator \(o\in\{+,-,\times,\div\}\). We denote as \(r\) the result
obtained by applying the operator \(o\) to the operand pair. Each query is rendered as a natural language question through a prompt \(p(n_{1},n_{2},o)\in\mathcal{X}\) such as _"Q: How much is \(n_{1}\) plus \(n_{2}\)? A:"_. The prompt is then fed to the language model to produce a probability distribution \(\mathbb{P}\) over \(V\). Our aim is to investigate whether certain hidden state variables are more important than others during the process of computing the result \(r\).
### Experimental Procedure
We see the model \(\mathcal{M}\) as a causal graph Pearl (2009), framing internal model components, such as specific neurons, as mediators positioned along the causal path connecting model inputs and outputs. Following a causal mediation analysis procedure, we then quantify the contribution of particular model components by intervening on their activation values and measuring the change in the model's output. Previous work has isolated the effect of each single neuron within a model Vig et al. (2020); Finlayson et al. (2021). However, this approach becomes impractical for models with billions of parameters. Therefore, for our main experiments, the variables we consider along the causal path described by the model are the layer outputs \(m^{(l)}\) and \(a^{(l)}\).
To quantify the importance of a specific component (e.g., the MLP module \(m\)) in mediating the model's predictions, we use the following procedure.
1. Two input questions with only the operands differing, \(p_{1}=p(n_{1},n_{2},o)\) and \(p_{2}=p(n^{\prime}_{1},n^{\prime}_{2},o)\), are passed through the model. During this process, for the last token of the prompt, we store the activation values \((m^{(1)},m^{(1)}_{*}),\ldots,(m^{(L)},m^{(L)}_{*})\) of the model components in which we are interested.
2. For each component, we perform an additional forward pass using \(p_{1}\), but this time we _intervene_ on component \(m^{(i)}\), setting its activation values to \(m^{(i)}_{*}\). This process is illustrated in Figure 1.
3. We measure the causal effect of the intervention on component \(m^{(i)}\) on the model's prediction by computing the change in the probability values assigned to the results \(r\) and \(r^{\prime}\).
More specifically, we compute the **indirect effect** (IE) of a specific mediating component by quantifying its contribution in skewing \(\mathbb{P}\) towards the correct result. We denote the post-intervention model probability by \(\mathbb{P}_{*}\). Then, we compute IE as:
\[\mathrm{IE}=\frac{1}{2}\bigg{[}\frac{\mathbb{P}_{*}(r^{\prime})- \mathbb{P}(r^{\prime})}{\mathbb{P}(r^{\prime})}+\frac{\mathbb{P}(r)-\mathbb{P }_{*}(r)}{\mathbb{P}_{*}(r)}\bigg{]}, \tag{2}\]
where the two terms in the sum represent the relative change caused by the performed intervention in the probability assigned by the model to \(r^{\prime}\) and \(r\), respectively. The larger the measured IE, the larger is the contribution of the component in shifting probability mass from the clean-run result to the result corresponding to the alternative input \(p_{2}\). Additionally, we verify whether the intervention leads to a change in the model's prediction. That is, we compute
\[\mathds{1}\{\operatorname*{arg\,max}_{x\in\mathcal{S}}\mathbb{P}_{*}(x)\neq\operatorname*{arg\,max}_{x\in\mathcal{S}}\mathbb{P}(x)\}, \tag{3}\]
where \(\mathcal{S}\subset V\cap\mathbb{N}\).1
Footnote 1: Unless otherwise specified, for our experiments we use \(\mathcal{S}=\{1,2,\ldots,300\}\), as larger integers get split into multiple tokens by the tokenizer.
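The intervention of Section 3.2 and the effect measures of Eq. 2 and Eq. 3 can be sketched with standard forward hooks. The snippet below assumes a Hugging Face-style causal LM whose outputs expose `.logits`, a mediator module addressed by a dotted path (e.g. `transformer.h.20.mlp` in the GPT-J implementation), a hooked module that returns a single hidden-state tensor of shape (batch, seq, dim) (attention blocks returning tuples would need extra unpacking), and results \(r,r^{\prime}\) that are single tokens; these are assumptions of the sketch, not guarantees.

```python
import torch

def next_token_probs(model, ids_p1, ids_p2=None, module_path=None):
    """Next-token distribution at the last position of p1; if module_path is
    given, the mediator's last-token activation is first recorded on a run
    over p2 and then patched into the run over p1 (steps 1-2 above)."""
    with torch.no_grad():
        if module_path is None:                       # clean run on p1
            return torch.softmax(model(ids_p1).logits[0, -1], dim=-1)
        module, cache = model.get_submodule(module_path), {}
        handle = module.register_forward_hook(        # record activation on p2
            lambda mod, inp, out: cache.update(act=out[:, -1].clone()))
        model(ids_p2)
        handle.remove()
        def patch(mod, inp, out):                     # overwrite activation on p1
            out = out.clone()
            out[:, -1] = cache["act"]
            return out
        handle = module.register_forward_hook(patch)
        probs = torch.softmax(model(ids_p1).logits[0, -1], dim=-1)
        handle.remove()
        return probs

def indirect_effect(p_clean, p_star, tok_r, tok_r_prime):
    """IE of Eq. 2 and a prediction-change indicator in the spirit of Eq. 3
    (the argmax is taken over the full vocabulary here, not only over S)."""
    ie = 0.5 * ((p_star[tok_r_prime] - p_clean[tok_r_prime]) / p_clean[tok_r_prime]
                + (p_clean[tok_r] - p_star[tok_r]) / p_star[tok_r])
    return ie.item(), bool(p_star.argmax() != p_clean.argmax())
```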
### Experimental Setup
We present the results of our analyses in the main paper for GPT-J Wang and Komatsuzaki (2021), a 6B-parameter pre-trained language model trained on the Pile Gao et al. (2020). Additionally, we validate our findings on Pythia 2.8B Biderman et al. (2023), for which we report the results in Appendix E. Similar to previous work Razeghi et al. (2022); Karpas et al. (2022), to prompt the model, we use a set of six diverse templates representing each of the four arithmetic operations (reported in Appendix A). For each operation \(o\in\{+,-,\times,\div\}\) and for each of the templates, we generate 50 pairs of prompts by sampling two pairs of operands \((n_{1},n_{2})\in\mathcal{S}^{2}\) and \((n^{\prime}_{1},n^{\prime}_{2})\in\mathcal{S}^{2}\), making sure that the result \(r\) falls within \(\mathcal{S}\). In order to make sure that the model achieves a good task performance, we use a two-shot prompt in which we include two exemplars of question-answer for the same operation that is being queried.
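One possible way to generate the prompt pairs of Section 3.2 for a single operation is sketched below, using the addition template quoted earlier and \(\mathcal{S}=\{1,\ldots,300\}\). The two-shot exemplars mentioned above would be prepended to each prompt and are omitted here; the random seed is arbitrary.

```python
import random

rng = random.Random(0)
S = range(1, 301)                                # admissible operands and results
TEMPLATE = "Q: How much is {} plus {}? A:"       # addition template from Section 3.1

def sample_operands(op=lambda a, b: a + b):
    """Draw a pair of operands from S whose result also lies in S."""
    while True:
        n1, n2 = rng.choice(S), rng.choice(S)
        if op(n1, n2) in S:
            return n1, n2

def prompt_pair():
    """Two prompts p1, p2 that differ only in their operands."""
    (n1, n2), (m1, m2) = sample_operands(), sample_operands()
    return TEMPLATE.format(n1, n2), TEMPLATE.format(m1, m2)
```

For the setting of Section 4.1, the second pair of operands would additionally be re-drawn until its result equals that of the first pair.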
## 4 Results
With our analyses we aim to verify the following hypothesis:
**H1**: A specific subset of the model's components is responsible for predictions involving arithmetic computations.
We report the indirect effect measured for the MLP modules in GPT-J in Figure 2. The results for each of the four operators show a common spike in the effect at layers 20-21. This indicates the presence of a specific part of the model relevant for the numerical predictions to the bi-variate arithmetic questions, irrespective of the operator involved. We also notice a difference in the magnitude of the effects, which we hypothesize to be linked to the capability of the model to correctly answer the query: the accuracy of the prediction for computations involving division is \(\sim\)40%, while it is \(\sim\)70% for the other three operators. We report the accuracy results in Appendix B.
### Desired vs Undesired Indirect Effect
The effects measured so far represent a _desired_ change in the output probability: that is, after the intervention, the probability of \(r^{\prime}\) increases and the probability of \(r\) decreases. However, if we have that \(r=r^{\prime}\), the effect measured can be interpreted as a _noisy_ contribution that affects the model's output when the prediction should not be affected. We want to verify whether or not there is a relationship between the neurons that are responsible for the desired and undesired changes in the model's output. More precisely, we want to test the hypothesis:
**H2**: The subset of the model components that have high desired indirect effect have high effect on the output **only when the change in the model prediction is desired**.
To test this, we formulate a variant of our previous experimental procedure. In particular, we condition the sampling of the second set of operands \((n^{\prime}_{1},n^{\prime}_{2})\) on the constraint \(r=r^{\prime}\). That is, we generate the two input questions \(p_{1}\) and \(p_{2}\), such that their result is the same (e.g., _"What is the sum of 25 and 7?"_ and _"What is the sum of 14 and 18?"_). This way, the measurement of the indirect effect as in Eq. 2 now quantifies an _undesired_ change in the probability that the model assigns to the correct result. In this case, we expect the components responsible for the computation of the numerical results to have a lower effect relative to other parts of the model: if the neurons in the MLPs at layers 20 and 21 are responsible for the computation of the correct result of the arithmetic query, then they should not lead to a large change in the model output, as in this case there is no change in the correct result of \(p_{1}\) and \(p_{2}\).
We report the results in Figure 3. Interestingly, in this case we notice that the indirect effect of the MLPs spikes at earlier layers. This observation supports our hypothesis that the activations within layers 20-21 in GPT-J incorporate some information relative to the result of the computation required to answer the arithmetic query that the model was prompted with.
Changes in the Actual Prediction.So far, we have been measuring the influence of the model components in terms of probability changes. Now, we study the dynamics of the actual model predictions. In particular, in the scenario for which \(r=r^{\prime}\), we compute the result to which the model assigns the highest probability as in Eq. 3, distinguishing
Figure 3: Indirect effect of the MLPs at each layer in GPT-J, averaged across all four operators. The upper plot shows the desired effect leading to a correct change in the output probability. The lower plot depicts a _noisy_ change in the probability assigned to the correct result.
Figure 2: Indirect effect of the MLPs at each layer in GPT-J, for each of the four arithmetic operators. We observe a peak in the effect at layer 20 for all four types of operation.
between desired (\(\arg\max_{x\in\mathcal{S}}\mathbb{P}_{*}(x)=r\)) and undesired (\(\arg\max_{x\in\mathcal{S}}\mathbb{P}(x)=r\)) changes. The results reported in Figure 4 show an increase in the desired change in prediction at layers 20-21, while the undesired change in prediction is higher for layers 15-18. This means that interventions on the MLPs at layers 20-21 are more likely to lead to a correct adjustment of the prediction, while the opposite is true for earlier layers (15-16 in particular). These results are consistent with our previous observations, and we see this as additional evidence indicating that neurons in the MLP at layers 20-21 encode information about \(r\).
### Numerical Representation
We hypothesize that, if there are some parts of the model that implement arithmetic procedures, these components should be activated also when the representation of the numerical quantities involved in the computation are represented in a different way. We thus formulate our third hypothesis:
**H3**: The dynamics in the effect observed for the components are **independent of the representation of the numerical quantities**.
To verify this hypothesis, we replicate the experiments described in Section 3, this time substituting the Arabic representation of the numbers in the query with the corresponding numeral words (e.g., _"What is twelve minus three?"_ instead of _"What is 12 minus 3?"_).2
Footnote 2: For this type of experiment, we used the set of possible solutions \(\mathcal{S}=[\) “one”, “two”,..., “twenty”\(]\), as the numeral words corresponding to larger numbers get split into multiple tokens by the tokenizer.
As in Section 4.1, we report the measurement of the desired and undesired indirect effect averaged across the four operators (Figure 5). Notably, we observe a behavior that exhibits similarities to the experiments conducted with the Arabic representation. In particular, the desired indirect effect is again observed to peak at layer 20, while the contribution of earlier layers (15-19) grows (with respect to the effect of the other layers) when measuring the undesired effect. This trend is observed again in the comparison between desired and undesired change in the model's prediction, which we report in Appendix C.
### Indirect Effect of Attention Blocks
We extend our analyses to consider the attention mechanism within each Transformer block. Following the experimental setup outlined in Section 3, we intervene on the attention weights at every layer of the model. We measure the indirect effects of the model components, distinguishing between the desired and undesired scenarios. We present the findings in Figure 6.
In this scenario, two notable observations emerge. Firstly, unlike in the case of MLPs, we find that the early-to-mid layers exhibit a more substantial influence on the model's prediction. Secondly,
Figure 4: Desired (wrong to correct) and undesired (correct to wrong) change in the prediction induced by the intervention on the MLP at different layers of GPT-J. The layers at which the two types of prediction change peak correspond to the layers with largest corresponding indirect effect.
Figure 5: Indirect effect of the MLPs at each layer in GPT-J, when the quantities are represented as numeral words (e.g., “one”). The relative increase in the effect of layers 15-19 in the undesired case is consistent to the case of Arabic representation of numbers (“1”).
there is not a significant qualitative discrepancy in the indirect effect of different layers between the desired and undesired cases. However, it is worth noting that layers 1 and 2 demonstrate a relatively greater effect in the desired case. These findings align with the existing theory that attributes to the attention mechanism the responsibility of moving and copying information within Transformer models Elhage et al. (2021), while the feed-forward layers are associated with performing computations, retrieving facts and information Geva et al. (2022); Din et al. (2023); Meng et al. (2022).
### Behavior on Factual Predictions
In order to understand whether the patterns in the effect of the model components that we observed so far are specific to arithmetic queries, we compare our observations on arithmetic queries to a distinct task involving the prediction of factual knowledge. Specifically, we utilize data from the LAMA benchmark Petroni et al. (2019), which consists of natural language templates representing knowledge-base relations, such as "[subject] is the capital of [object]". By instantiating a template with a specific subject (e.g., "Paris"), we prompt the model to predict the correct object ("France"). Similar to our approach with arithmetic questions, we create pairs of factual queries that differ solely in the subject. In particular, we sample pairs of entities from the set of entities compatible for a given relation (e.g., cities for the relation "is the capital of"). Details about the data used for this procedure are provided in Appendix D. We then measure the indirect effect following the formulation in Equation 2, where the correct object corresponds to correct numerical outcome in the arithmetic scenario.
In the results (Figure 7), we notice a substantial effect in layers 18-21. Although it is not as pronounced as in the case of arithmetic operations, layer 20 exhibits a non-negligible effect. This indicates that the model components responsible for this case might share some commonality with the subset of parameters that influence arithmetic predictions. To delve deeper into this overlap, we conduct interventions at the neuron level, thereby investigating the specific neurons that contribute to both tasks.
### Neuron-level Interventions
To further investigate the components with the highest effect on the model's predictions in different settings, we carry out a finer-grained analysis in which we consider each neuron in the MLP (i.e., each dimension in the output vector of the MLP) at a specific layer independently. In particular, following the same procedure as for layer-level experiments, we intervene on each neuron by setting its activation to the value it would take if the input query contained different operands (or a different subject). We then compute the corresponding indirect effect. Next, we divide the arithmetic data into four subsets depending on the operator involved (\(+,-\), \(\times\), \(\div\)) and consider the factual data from the LAMA benchmark. We rank the neurons according to the average measured effect for each of these subsets and compute the overlap in the top 400 neurons (roughly 10%, as GPT-J has a hidden dimension of 4096) for the five different types of data.
We carry out this procedure for layer 20. The heatmap in Figure 8 illustrates the results. We observe that overlaps between the top neurons for the different arithmetic operations are larger than the overlap for any of the arithmetic operations and
Figure 6: Indirect effect of the attention mechanism at each layer in GPT-J. Contrarily to the case of MLPs, here we do not observe any particular change in the impact of the different layers for the two types of effect.
Figure 7: Indirect effect measured on GPT-J for predictions to factual queries.
the factual predictions. Moreover, the overlaps between the top neurons for the arithmetic operations and the factual predictions are slightly below random: the expected overlap ratio between the top 400 indices in two random rankings of size 4096 is \(\sim\)9.8%. This result suggests that the model's circuits responsible for different kinds of prediction, though relying on similar subsets of layers, might differ. However, it is important to note that this measurement does not take into account the magnitude of the effect of the neurons.
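The overlap statistic and its random baseline are straightforward to reproduce. The snippet below takes two vectors of per-neuron effects and reports the fraction of shared indices among their top-\(k\) entries, together with the expected value \(400/4096\approx 9.8\%\) for two independent random rankings; variable names are illustrative.

```python
import numpy as np

def topk_overlap(effects_a, effects_b, k=400):
    """Fraction of shared indices among the top-k neurons of two effect rankings."""
    top_a = set(np.argsort(effects_a)[-k:])
    top_b = set(np.argsort(effects_b)[-k:])
    return len(top_a & top_b) / k

print(400 / 4096)   # expected overlap ratio of two random top-400 sets: ~0.098
```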
## 5 Conclusion
In this paper, we proposed the use of causal mediation analysis to mechanistically investigate how LLMs process information related to math and arithmetics. Through controlled interventions on specific subsets of the model, we assessed the impact of these mediators on the model's predictions.
The experimental results, conducted on two pre-trained language models with different sizes (2.8B and 6B parameters), demonstrate that a small set of mid-late layers, specifically the MLP modules, have a significant effect on the predictions about arithmetic-based questions. The activation patterns in these layers differ between correct and wrong predictions. Furthermore, we investigated the role of the attention mechanism at each layer of the model, and we compared the dynamics of the model's activation on answering arithmetic queries to the prediction of factual knowledge, revealing both common elements and differences in the components involved.
## Acknowledgements
Alessandro Stolfo is supported by armasuisse Science and Technology through a CYD Doctoral Fellowship. We would like to thank Vilem Zouhar for the helpful discussions.
|
2302.11295 | Fair Correlation Clustering in Forests | The study of algorithmic fairness received growing attention recently. This
stems from the awareness that bias in the input data for machine learning
systems may result in discriminatory outputs. For clustering tasks, one of the
most central notions of fairness is the formalization by Chierichetti, Kumar,
Lattanzi, and Vassilvitskii [NeurIPS 2017]. A clustering is said to be fair, if
each cluster has the same distribution of manifestations of a sensitive
attribute as the whole input set. This is motivated by various applications
where the objects to be clustered have sensitive attributes that should not be
over- or underrepresented.
We discuss the applicability of this fairness notion to Correlation
Clustering. The existing literature on the resulting Fair Correlation
Clustering problem either presents approximation algorithms with poor
approximation guarantees or severely limits the possible distributions of the
sensitive attribute (often only two manifestations with a 1:1 ratio are
considered). Our goal is to understand if there is hope for better results in
between these two extremes. To this end, we consider restricted graph classes
which allow us to characterize the distributions of sensitive attributes for
which this form of fairness is tractable from a complexity point of view.
While existing work on Fair Correlation Clustering gives approximation
algorithms, we focus on exact solutions and investigate whether there are
efficiently solvable instances. The unfair version of Correlation Clustering is
trivial on forests, but adding fairness creates a surprisingly rich picture of
complexities. We give an overview of the distributions and types of forests
where Fair Correlation Clustering turns from tractable to intractable. The most
surprising insight to us is the fact that the cause of the hardness of Fair
Correlation Clustering is not the strictness of the fairness condition. | Katrin Casel, Tobias Friedrich, Martin Schirneck, Simon Wietheger | 2023-02-22T11:27:06Z | http://arxiv.org/abs/2302.11295v1 | # Fair Correlation Clustering in Forests
###### Abstract
The study of algorithmic fairness received growing attention recently. This stems from the awareness that bias in the input data for machine learning systems may result in discriminatory outputs. For clustering tasks, one of the most central notions of fairness is the formalization by Chierichetti, Kumar, Lattanzi, and Vassilvitskii [15]. A clustering is said to be fair if each cluster has the same distribution of manifestations of a sensitive attribute as the whole input set. This is motivated by various applications where the objects to be clustered have sensitive attributes that should not be over- or underrepresented. Most research on this version of fair clustering has focused on centroid-based objectives.
In contrast, we discuss the applicability of this fairness notion to Correlation Clustering. The existing literature on the resulting Fair Correlation Clustering problem either presents approximation algorithms with poor approximation guarantees or severely limits the possible distributions of the sensitive attribute (often only two manifestations with a 1:1 ratio are considered). Our goal is to understand if there is hope for better results in between these two extremes. To this end, we consider restricted graph classes which allow us to characterize the distributions of sensitive attributes for which this form of fairness is tractable from a complexity point of view.
While existing work on Fair Correlation Clustering gives approximation algorithms, we focus on exact solutions and investigate whether there are efficiently solvable instances. The unfair version of Correlation Clustering is trivial on forests, but adding fairness creates a surprisingly rich picture of complexities. We give an overview of the distributions and types of forests where Fair Correlation Clustering turns from tractable to intractable.
As the most surprising insight, we consider the fact that the cause of the hardness of Fair Correlation Clustering is not the strictness of the fairness condition. We lift most of our results to also hold for the relaxed version of the fairness condition. Instead, the source of hardness seems to be the distribution of the sensitive attribute. On the positive side, we identify some reasonable distributions that are indeed tractable. While this tractability is only shown for forests, it may open an avenue to design reasonable approximations for larger graph classes.
Keywords: correlation clustering, disparate impact, fair clustering, relaxed fairness
## 1 Introduction
In the last decade, the notion of fairness in machine learning has increasingly attracted interest; see for example the review by Pessach and Schmueli [32]. Feldman, Friedler, Moeller, Scheidegger, and Venkatasubramanian [26] formalize fairness based on a US Supreme Court decision on disparate impact from 1971. It requires not only that sensitive attributes like gender or skin color are not explicitly considered in decision processes like hiring, but also that the manifestations of sensitive attributes are proportionally distributed in all outcomes of the decision process. Feldman et al. formalize this notion for classification tasks. Chierichetti, Kumar, Lattanzi, and Vassilvitskii [19] adapt this concept for clustering tasks.
In this paper we employ the same disparate impact based understanding of fairness. Formally, the objects to be clustered have a color assigned to them that represents some sensitive attribute. Then, a clustering of these colored objects is called _fair_ if for each cluster and each color the ratio of objects of that color in the cluster corresponds to the total ratio of vertices of that color. More precisely, a clustering is _fair_, if it partitions the set of objects into _fair subsets_.
[Fair Subset] Let \(U\) be a finite set of objects colored by a function \(c\colon U\to[k]\) for some \(k\in\mathbb{N}_{>0}\). Let \(U_{i}=\{u\in U\mid c(u)=i\}\) be the set of objects of color \(i\) for all \(i\in[k]\). Then, a set \(S\subseteq U\) is fair if and only if for all colors \(i\in[k]\) we have \(\frac{|S\cap U_{i}|}{|S|}=\frac{|U_{i}|}{|U|}\).
To understand how this notion of fairness affects clustering decisions, consider the following example. Imagine that airport security wants to find clusters among the travelers in order to assign each group a level of potential risk with corresponding anticipatory measures. There are attributes like skin color that should not influence the assignment to a risk level. A bias in the data, however, may lead to some colors being over- or underrepresented in some clusters. Simply removing the skin color attribute from the data may not suffice as it may correlate with other attributes. Such problems are especially likely if one of the skin colors is far less represented in the data than others. A fair clustering finds the optimum clustering such that for each risk level the distribution of skin colors is fair, by requiring the distribution of each cluster to roughly match the distribution of skin colors among all travelers.
The seminal fair clustering paper by Chierichetti et al. [19] introduced this notion of fairness for clustering and studied it for the objectives \(k\)-center and \(k\)-median. Their work was extended by Bera, Chakrabarty, Flores, and Negahbani [11], who relax the fairness constraint in the sense of requiring upper and lower bounds on the representation of a color in each cluster. More precisely, they define the following generalization of fair sets.
[Relaxed Fair Set] For a finite set \(U\) and coloring \(c\colon U\to[k]\) for some \(k\in\mathbb{N}_{>0}\) let \(p_{i},q_{i}\in\mathbb{Q}\) with \(0<p_{i}\leqslant\frac{|U_{i}|}{|U|}\leqslant q_{i}<1\) for all \(i\in[k]\), where \(U_{i}=\{u\in U\mid c(u)=i\}\). A set \(S\subseteq U\) is relaxed fair with respect to \(q_{i}\) and \(p_{i}\) if and only if \(p_{i}\leqslant\frac{|S\cap U_{i}|}{|S|}\leqslant q_{i}\) for all \(i\in[k]\).
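Both definitions amount to simple ratio checks, which the helper below verifies with exact rational arithmetic. The color function and the bound dictionaries `p` and `q` are illustrative, and the example mirrors a \(1:2\) color ratio.

```python
from collections import Counter
from fractions import Fraction

def is_fair(cluster, universe, color):
    """Fair Subset definition: each color's share in `cluster` equals its share in `universe`."""
    total, part = Counter(map(color, universe)), Counter(map(color, cluster))
    return all(Fraction(part[c], len(cluster)) == Fraction(total[c], len(universe))
               for c in total)

def is_relaxed_fair(cluster, color, p, q):
    """Relaxed Fair Set definition: color c's share in `cluster` lies in [p[c], q[c]]."""
    part = Counter(map(color, cluster))
    return all(p[c] <= Fraction(part[c], len(cluster)) <= q[c] for c in p)

# Colors in ratio 1:2 overall; a cluster with 1 red and 2 blue objects is fair.
U = ["red", "blue", "blue", "red", "blue", "blue"]
print(is_fair(["red", "blue", "blue"], U, color=lambda x: x))   # True
```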
Following these results, this notion of (relaxed) fairness was extensively studied for centroid-based clustering objectives with many positive results.
For example, Bercea et al. [12] give bicriteria constant-factor approximations for facility location type problems like \(k\)-center and \(k\)-median. Bandyapadhyay, Fomin and Simonov [7] use the technique of fair coresets introduced by Schmidt, Schwiegelshohn, and Sohler [34] to give constant factor approximations for many centroid-based clustering objectives; among many other results, they give a PTAS for fair \(k\)-means and \(k\)-median in Euclidean space. Fairness for centroid-based objectives seems to be so well understood that most research already considers more generalized settings, like streaming [34] or imperfect knowledge of group membership [25].
In comparison, there are few (positive) results for this fairness notion applied to graph clustering objectives. The most studied with respect to fairness among those is Correlation Clustering, arguably the most studied graph clustering objective. For Correlation Clustering we are given a pairwise similarity measure for a set of objects and the aim is to find a clustering that minimizes the number of similar objects placed in separate clusters and the number of dissimilar objects placed in the same cluster. Formally, the input to
Correlation Clustering is a graph \(G=(V,E)\), and the goal is to find a partition \(\mathcal{P}\) of \(V\) that minimizes the Correlation Clustering cost defined as
\[\mathrm{cost}(G,\mathcal{P})=|\{\{u,v\}\in{V\choose 2}\setminus E\mid\mathcal{P}[u]=\mathcal{P}[v]\}|+|\{\{u,v\}\in E\mid\mathcal{P}[u]\neq\mathcal{P}[v]\}|. \tag{1}\]
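Equation (1) translates directly into code. The sketch below counts missing edges inside clusters (intra-cluster cost) and edges between clusters (inter-cluster cost) for a graph given as a vertex list and an edge list; the small path example is illustrative.

```python
from itertools import combinations

def cc_cost(vertices, edges, partition):
    """Correlation Clustering cost of Eq. (1): non-adjacent pairs inside a
    cluster plus adjacent pairs split across clusters."""
    edge_set = {frozenset(e) for e in edges}
    cluster_of = {v: i for i, cluster in enumerate(partition) for v in cluster}
    intra = sum(1 for u, v in combinations(vertices, 2)
                if cluster_of[u] == cluster_of[v] and frozenset((u, v)) not in edge_set)
    inter = sum(1 for u, v in edges if cluster_of[u] != cluster_of[v])
    return intra + inter

# Path 0-1-2-3 (a tree) split into two clusters of size two: cost 1 (one cut edge).
print(cc_cost([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)], [{0, 1}, {2, 3}]))
```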
Fair Correlation Clustering then is the task to find a partition into _fair_ sets that minimizes the Correlation Clustering cost. We emphasize that this is the complete, unweighted, min-disagree form of Correlation Clustering. (It is often called _complete_ because every pair of objects is either similar or dissimilar but none is indifferent regarding the clustering. It is unweighted as the (dis)similarity between two vertices is binary. A pair of similar objects that are placed in separate clusters as well as a pair of dissimilar objects in the same cluster is called a _disagreement_, hence the naming of the min-disagree form.)
There are two papers that appear to have started studying Fair Correlation Clustering independently1. Ahmadian, Epasto, Kumar, and Mahdian [2] analyze settings where the fairness constraint is given by some \(\alpha\) and require that the ratio of each color in each cluster is at most \(\alpha\). For \(\alpha=\frac{1}{2}\), which corresponds to our fairness definition if there are two colors in a ratio of \(1:1\), they obtain a 256-approximation. For \(\alpha=\frac{1}{k}\), where \(k\) is the number of colors in the graph, they give a \(16.48k^{2}\)-approximation. We note that all their variants are only equivalent to our fairness notion if there are \(\alpha^{-1}\) colors that all occur equally often. Ahmadi, Galhotra, Saha, and Schwartz [1] give an \(\mathrm{O}(c^{2})\)-approximation algorithm for instances with two colors in a ratio of \(1:c\). In the special case of a color ratio of \(1:1\), they obtain a \(3\beta+4\)-approximation, given any \(\beta\)-approximation to unfair Correlation Clustering. With a more general color distribution, their approach also worsens drastically. For instances with \(k\) colors in a ratio of \(1:c_{2}:c_{3}:\ldots:c_{k}\) for positive integers \(c_{i}\), they give an \(\mathrm{O}(k^{2}\cdot\max_{2\leqslant i\leqslant k}c_{i})\)-approximation for the strict, and an \(\mathrm{O}(k^{2}\cdot\max_{2\leqslant i\leqslant k}q_{i})\)-approximation for the relaxed setting2.
Footnote 1: Confusingly, they both carry the title _Fair Correlation Clustering_.
Footnote 2: Their theorem states they achieve an \(\mathrm{O}(\max_{2\leqslant i\leqslant k}q_{i})\)-approximation but when looking at the proof it seems they have accidentally forgotten the \(k^{2}\) factor.
Following these two papers, Friggstad and Mousavi [28] provide an approximation to the \(1:1\) color ratio case with a factor of \(6.18\). To the best of our knowledge, the most recent publication on Fair Correlation Clustering is by Ahmadian and Negahbani [3] who give approximations for Fair Correlation Clustering with a slightly different way of relaxing fairness. They give an approximation with ratio \(\mathcal{O}(\varepsilon^{-1}k\max_{2\leqslant i\leqslant k}c_{i})\) for color distribution \(1:c_{2}:c_{3}:\ldots:c_{k}\), where \(\varepsilon\) relates to the amount of relaxation (roughly \(q_{i}=(1+\epsilon)c_{i}\) for our definition of relaxed fairness).
All these results for Fair Correlation Clustering seem to converge towards considering the very restricted setting of two colors in a ratio of \(1:1\) in order to give some decent approximation ratio. In this paper, we want to understand if this is unavoidable, or if there is hope to find better results for other (possibly more realistic) color distributions. In order to isolate the role of fairness, we consider "easy" instances for Correlation Clustering, and study the increase in complexity when adding fairness constraints. Correlation Clustering without the fairness constraint is easily solved on forests. We find that Fair Correlation Clustering restricted to forests turns NP-hard very quickly, even when additionally assuming constant degree or diameter. Most surprisingly, this hardness essentially also holds for relaxed fairness, showing that the hardness of the problem is not due to the strictness of the fairness definition.
On the positive side, we identify color distributions that allow for efficient algorithms. Not surprisingly, this includes ratio \(1:1\), and extends to a constant number of \(k\) colors with distribution \(c_{1}:c_{2}:c_{3}:\ldots:c_{k}\) for constants \(c_{1},\ldots,c_{k}\). Such distributions can be used to model sensitive attributes with a limited number of manifestation that are almost evenly distributed. Less expected, we also find tractability for, in a sense, the other extreme. We show that Fair Correlation Clustering on forests can be solved in polynomial time for two colors with ratio \(1:c\) with \(c\) being very large (linear in the number of overall vertices). Such a distribution can be used to model a scenario where a minority is drastically underrepresented and thus in dire need of fairness constraints. Although our results only hold for forests, we believe that they can offer a starting point for more general graph classes. We especially hope that our work sparks interest in the so far neglected distribution of ratio \(1:c\) with \(c\) being very large.
### Related Work
The study of clustering objectives similar or identical to Correlation Clustering dates back to the 1960s [10, 33, 37]. Bansal, Blum, and Chawla [8] were the first to coin the term Correlation Clustering as a clustering objective. We note that it is also studied under the name Cluster Editing. The most general formulation of Correlation Clustering regarding weights considers two positive real values for each pair of vertices, the first to be added to the cost if the objects are placed in the same cluster and the second to be added if the objects are placed in separate clusters [4]. The recent book by Bonchi, Garcia-Soriano, and Gullo [13] gives a broad overview of the current research on Correlation Clustering.
We focus on the particular variant that considers a complete graph with \(\{-1,1\}\) edge-weights, and the min disagreement objective function. This version is APX-hard [16], implying in particular that there is no algorithm giving an arbitrarily good approximation unless \(\mathsf{P}=\mathsf{NP}\). The best known approximation for Correlation Clustering is the very recent breakthrough by Cohen-Addad, Lee and Newman [20] who give a ratio of \((1.994+\epsilon)\).
We show that in forests, all clusters of an optimal Correlation Clustering solution have a fixed size. In such a case, Correlation Clustering is related to \(k\)-Balanced Partitioning. There, the task is to partition the graph into \(k\) clusters of equal size while minimizing the number of edges that are cut by the partition. Feldmann and Foschini [27] study this problem on trees and their results have interesting parallels with ours.
Aside from the results on Fair Correlation Clustering already discussed above, we are only aware of three papers that consider a fairness notion close to the one of Chierichetti et al. [19] for a graph clustering objective. Schwartz and Zats [35] consider incomplete Fair Correlation Clustering with the max-agree objective function. Dinitz, Srinivasan, Tsepenekas, and Vullikanti [23] study Fair Disaster Containment, a graph cut problem involving fairness. Their problem is not directly a fair clustering problem since they only require one part of their partition (the saved part) to be fair. Ziko, Yuan, Granger, and Ayed [38] give a heuristic approach for fair clustering in general that however does not allow for theoretical guarantees on the quality of the solution.
## 2 Contribution
We now outline our findings on Fair Correlation Clustering. We start by giving several structural results that underpin our further investigations. Afterwards, we present our algorithms and hardness results for certain graph classes and color ratios. We further show that the hardness of fair clustering does _not_ stem from the requirement of the clusters
exactly reproducing the color distribution of the whole graph. This section is concluded by a discussion of possible directions for further research.
### Structural Insights
We outline here the structural insights that form the foundation of all our results. We first give a close connection between the cost of a clustering, the number of edges "cut" by a clustering, and the total number of edges in the graph. We refer to this number of "cut" edges as the _inter-cluster_ cost as opposed to the number of non-edges inside clusters, which we call the _intra-cluster_ cost. Formally, the intra- and inter-cluster cost are the first and second summand of the Correlation Clustering cost in Equation (1), respectively. The following lemma shows that minimizing the inter-cluster cost suffices to minimize the total cost if all clusters are of the same size. This significantly simplifies the algorithm development for Correlation Clustering.
**Lemma 3**.: _Let \(\mathcal{P}\) be a partition of the vertices of an \(n\)-vertex, \(m\)-edge graph \(G\). Let \(\chi\) denote the inter-cluster cost incurred by \(\mathcal{P}\) on \(G\). If all sets in the partition are of size \(d\), then \(\text{cost}(\mathcal{P})=\frac{(d-1)}{2}\,n-m+2\chi\). In particular, if \(G\) is a tree, \(\text{cost}(\mathcal{P})=\frac{(d-3)}{2}\,n+2\chi+1\)._
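The identity in the lemma is easy to check numerically. The snippet below builds a random tree, partitions its vertices into clusters of equal size \(d\), and confirms both stated formulas; it is a sanity check under arbitrary illustrative parameters, not part of the proof.

```python
import random
from itertools import combinations

rng = random.Random(1)
n, d = 12, 3                                            # n vertices, clusters of size d
edges = [(v, rng.randrange(v)) for v in range(1, n)]    # a random tree, so m = n - 1
clusters = [set(range(i, i + d)) for i in range(0, n, d)]
cluster_of = {v: i for i, c in enumerate(clusters) for v in c}
edge_set = {frozenset(e) for e in edges}

intra = sum(1 for u, v in combinations(range(n), 2)
            if cluster_of[u] == cluster_of[v] and frozenset((u, v)) not in edge_set)
chi = sum(1 for u, v in edges if cluster_of[u] != cluster_of[v])   # inter-cluster cost
cost = intra + chi

assert cost == (d - 1) / 2 * n - len(edges) + 2 * chi   # identity of the lemma
assert cost == (d - 3) / 2 * n + 2 * chi + 1            # tree special case (m = n - 1)
print(cost, chi)
```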
The condition that all clusters need to be of the same size seems rather restrictive at first. However, we prove in the following that in bipartite graphs and, in particular, in forests and trees there is always a minimum-cost fair clustering such that indeed all clusters are equally large. This property stems from how the fairness constraint acts on the distribution of colors and is therefore specific to Fair Correlation Clustering. It allows us to fully utilize Lemma 3 both for building reductions in \(\mathsf{NP}\)-hardness proofs as well as for algorithmic approaches as we can restrict our attention to partitions with equal cluster sizes.
Consider two colors of ratio \(1:2\); then any fair cluster must contain at least \(1\) vertex of the first color and \(2\) vertices of the second color to fulfil the fairness requirement. We show that a minimum-cost clustering of a forest, due to the small number of edges, consists entirely of such minimal clusters. Every clustering with larger clusters incurs a higher cost.
**Lemma 4**.: _Let \(F\) be a forest with \(k\geqslant 2\) colors in a ratio of \(c_{1}:c_{2}:\ldots:c_{k}\) with \(c_{i}\in\mathbb{N}_{>0}\) for all \(i\in[k]\), \(\gcd(c_{1},c_{2},\ldots,c_{k})=1\), and \(\sum_{i=1}^{k}c_{i}\geqslant 3\). Then, all clusters of every minimum-cost fair clustering are of size \(d=\sum_{i=1}^{k}c_{i}\)._
Lemma 4 does not extend to two colors in a ratio of \(1:1\) as illustrated in Figure 1. In fact, this color distribution is the only case for forests where a partition with larger clusters can have the same (but no smaller) cost. We prove a slightly weaker statement than Lemma 4, namely, that _there is_ always a minimum-cost fair clustering whose cluster sizes are given by the color ratio. We find that this property, in turn, holds not only for forests but for every bipartite graph. Note that in general bipartite graphs there are more color ratios than only \(1:1\) that allow for these ambiguities.
Figure 1: Example forest where a cluster of size \(4\) and two clusters of size \(2\) incur the same cost. With one cluster of size \(4\) (left), the inter-cluster cost is \(0\) and the intra-cluster cost is \(4\). With two clusters of size \(2\) (right), both the inter-cluster and intra-cluster cost are \(2\).
**Lemma 5**.: _Let \(G=(A\cup B,E)\) be a bipartite graph with \(k\geqslant 2\) colors in a ratio of \(c_{1}:c_{2}:\ldots:c_{k}\) with \(c_{i}\in\mathbb{N}_{>0}\) for all \(i\in[k]\) and \(\gcd(c_{1},c_{2},\ldots,c_{k})=1\). Then, there is a minimum-cost fair clustering such that all its clusters are of size \(d=\sum_{i=1}^{k}c_{i}\). Further, each minimum-cost fair clustering with larger clusters can be transformed into a minimum-cost fair clustering such that all clusters contain no more than \(d\) vertices in linear time._
In summary, the results above show that the ratio of the color classes is the key parameter determining the cluster size. If the input is a bipartite graph whose vertices are colored with \(k\) colors in a ratio of \(c_{1}:c_{2}:\cdots:c_{k}\), our results imply that without loosing optimality, solutions can be restricted to contain only clusters of size \(d=\sum_{i=1}^{k}c_{i}\), each with exactly \(c_{i}\) vertices of color \(i\). Starting from these observations, we show in this work that the color ratio is also the key parameter determining the complexity of Fair Correlation Clustering. On the one hand, the simple structure of optimal solutions restricts the search space and enables polynomial-time algorithms, at least for some instances. On the other hand, these insights allow us to show hardness already for very restricted input classes. The technical part of most of the proofs consists of exploiting the connection between the clustering cost, total number of edges, and the number of edges cut by a clustering.
### Tractable Instances
We start by discussing the algorithmic results. The simplest case is that of two colors, each one occurring equally often. We prove that for bipartite graphs with a color ratio \(1:1\) Fair Correlation Clustering is equivalent to the maximum bipartite matching problem, namely, between the vertices of different color. Via the standard reduction to computing maximum flows, this allows us to benefit from the recent breakthrough by Chen, Kyng, Liu, Peng, Probst Gutenberg, and Sachdeva [18]. It gives an algorithm running in time \(m^{1+o(1)}\).
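A sketch of this construction for the \(1:1\) case is given below, assuming a NetworkX graph and a color map. It pairs matched blue-red vertices into clusters of size two and pairs the leftover vertices arbitrarily, which loses nothing because two unmatched vertices of different colors cannot be adjacent under a maximum matching. Hopcroft-Karp is used only for illustration; the \(m^{1+o(1)}\) bound quoted above comes from the max-flow reduction, not from this sketch.

```python
import networkx as nx
from networkx.algorithms.bipartite import hopcroft_karp_matching

def fair_cc_ratio_one_one(G, color):
    """Clusters of size 2 (one vertex per color) from a maximum matching
    between the two color classes, following the equivalence stated above."""
    blue = [v for v in G if color[v] == "blue"]
    red = [v for v in G if color[v] == "red"]
    # Keep only edges joining differently colored vertices; this graph is
    # bipartite with the color classes as its two sides.
    H = nx.Graph()
    H.add_nodes_from(G)
    H.add_edges_from((u, v) for u, v in G.edges() if color[u] != color[v])
    matching = hopcroft_karp_matching(H, top_nodes=blue)
    clusters = [{b, matching[b]} for b in blue if b in matching]
    # Pair the remaining blue and red vertices arbitrarily.
    leftovers = zip([b for b in blue if b not in matching],
                    [r for r in red if r not in matching])
    return clusters + [{b, r} for b, r in leftovers]

# Path red-blue-red-blue: the optimal fair clustering cuts a single edge.
G = nx.path_graph(4)
print(fair_cc_ratio_one_one(G, {0: "red", 1: "blue", 2: "red", 3: "blue"}))
```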
The remaining results focus on forests as the input, see Table 1. It should not come as a surprise that our main algorithmic paradigm is dynamic programming. A textbook version finds a maximum matching in linear time in a forest, solving the \(1:1\) case. For general color ratios, we devise much more intricate dynamic programs. We use the color ratio \(1:2\) as an introductory example. The algorithm has two phases. In the first, we compute a list of candidate _splittings_ that partition the forest into connected parts containing at most \(1\) blue and \(2\) red vertices each. In the second phase, we assemble the parts of each of the splittings to fair clusters and return the cheapest resulting clustering. The difficulty lies in the two phases not being independent of each other. It is not enough to minimize the "cut" edges in the two phases separately. We prove that the costs incurred by the merging additionally depend on the number of parts of a certain type generated in the splittings. Tracking this along with the number of cuts results in a \(\mathrm{O}(n^{6})\)-time algorithm. Note that we did not optimize the running time as long as it is polynomial.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Color Ratio & \(1:1\) & \(1:2\) & \(1:(n/p-1)\) & \(c_{1}:c_{2}:\ldots:c_{k}\) \\ Running Time & \(\mathrm{O}(n)\) & \(\mathrm{O}(n^{6})\) & \(\mathrm{O}\big{(}n^{f(p)}\big{)}\) & \(\mathrm{O}\big{(}n^{g(c_{1},\ldots,c_{k})}\big{)}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Running times of our algorithms for Fair Correlation Clustering on forests depending on the color ratio. Value \(p\) is any rational such that \(n/p-1\) is integral; \(c_{1},c_{2},\ldots,c_{k}\) are coprime positive integers, possibly depending on \(n\). Functions \(f\) and \(g\) are given in Theorems 23 and 27.
We generalize this to \(k\) colors in a ratio \(c_{1}:c_{2}:\cdots:c_{k}\).3 We now have to consider _all_ possible colorings of a partition of the vertices such that in each part the \(i\)-th color occurs at most \(c_{i}\) times. While assembling the parts, we have to take care that the merged colorings remain compatible. The resulting running time is \(\mathrm{O}(n^{g(c_{1},\dots,c_{k})})\) for some (explicit) polynomial \(g\). Recall that, by Lemma 4, the minimum cluster size is \(d=\sum_{i=1}^{k}c_{i}\). If this is a constant, then the dynamic program runs in polynomial time. If, however, the number of colors \(k\) or some color's proportion grows with \(n\), it becomes intractable. Equivalently, the running time gets worse if there are very large but sublinearly many clusters.
Footnote 3: The \(c_{i}\) are coprime, but they are not necessarily constants with respect to \(n\).
To mitigate this effect, we give a complementary algorithm at least for forests with two colors. Namely, consider the color ratio \(1:(\nicefrac{n}{p}-1)\). Then, an optimal solution has \(p\) clusters each of size \(d=\nicefrac{{n}}{{p}}\). The key observation is that the forest contains \(p\) vertices of the color with fewer occurrences, say, blue, and any fair clustering separates the blue vertices, i.e., no cluster contains two of them. Such a separation can be achieved by cutting at most \(p-1\) edges and results in a collection of (sub-)trees where each one has at most one blue vertex. To obtain the clustering, we split off excess red vertices from these trees and distribute them among the remaining parts. We track the costs of all the \(\mathrm{O}(n^{\mathsf{poly}(p)})\) many cut-sets and rearrangements to compute the one of minimum cost. In total, the algorithm runs in time \(\mathrm{O}(n^{f(p)})\) for some polynomial \(f\). In summary, we find that if the number of clusters \(p\) is constant, then the running time is polynomial. Considering in particular an integral color ratio \(1:c\),4 we find tractability for forests if \(c=\mathrm{O}(1)\) or \(c=\Omega(n)\). We will show next that Fair Correlation Clustering with this kind of color ratio is \(\mathsf{NP}\)-hard already on trees, hence the hardness must emerge somewhere for intermediate \(c\).
Footnote 4: In a color ratio \(1:c\), \(c\) is not necessarily a constant, but ratios like \(2:5\) are not covered.
### A Dichotomy for Bounded Diameter
Table 2 shows the complexity of Fair Correlation Clustering on graphs with bounded diameter. We obtain a dichotomy for trees with two colors with ratio \(1:c\). If the diameter is at most \(3\), an optimal clustering is computable in \(\mathrm{O}(n)\) time, but for diameter at least \(4\), the problem becomes \(\mathsf{NP}\)-hard. In fact, the linear-time algorithm extends to trees with an arbitrary number of colors in any ratio.
The main result in that direction is the hardness of Fair Correlation Clustering already on trees with diameter at least \(4\) and two colors of ratio \(1:c\). This is proven by a reduction from the strongly \(\mathsf{NP}\)-hard 3-Partition problem. There, we are given positive integers \(a_{1},\dots,a_{\ell}\) where \(\ell\) is a multiple of \(3\) and there exists some \(B\) with \(\sum_{i=1}^{\ell}a_{i}=B\cdot\frac{\ell}{3}\). The task is to partition the numbers \(a_{i}\) into triples such that each one of those sums to \(B\). The problem remains \(\mathsf{NP}\)-hard if all the \(a_{i}\) are strictly between \(\nicefrac{{B}}{{4}}\) and \(\nicefrac{{B}}{{2}}\), ensuring that, if some subset of the numbers sums to \(B\), it contains exactly three elements.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Diameter & Color Ratio & Trees & General Graphs \\ \hline \(2,3\) & any & \(\mathrm{O}(n)\) & \(\mathsf{NP}\)-hard \\ \(\geqslant 4\) & \(1:c\) & \(\mathsf{NP}\)-hard & \(\mathsf{NP}\)-hard \\ \hline \hline \end{tabular}
\end{table}
Table 2: Complexity of Fair Correlation Clustering on trees and general graphs depending on the diameter. The value \(c\) is a positive integer, possibly depending on \(n\).
We model this problem as an instance of Fair Correlation Clustering as illustrated in Figure 2. We build \(\ell\) stars, where the \(i\)-th one consists of \(a_{i}\) red vertices, and a single star of \(\nicefrac{\ell}{3}\) blue vertices. The centers of the blue star and all the red stars are connected. The color ratio in the resulting instance is \(1:B\). Lemma 4 then implies that there is a minimum-cost clustering with \(\nicefrac{\ell}{3}\) clusters, each with a single blue vertex and \(B\) red ones. We then apply Lemma 3 to show that this cost is below a certain threshold if and only if each cluster consists of exactly three red stars (and an arbitrary blue vertex), solving 3-Partition.
### Maximum Degree
The reduction above results in a tree with a low diameter but arbitrarily high maximum degree. We have to adapt our reductions to show hardness also for bounded degrees. The results are summarized in Table 3. If the Fair Correlation Clustering instance is not required to be connected, we can represent 3-Partition with a forest of trees with maximum degree 2, that is, a forest of paths. The input numbers are modeled by paths with \(a_{i}\) vertices. The forest also contains \(\nicefrac{\ell}{3}\) isolated blue vertices, which again implies that an optimal fair clustering must have \(\nicefrac{\ell}{3}\) clusters each with \(B\) red vertices. By defining a sufficiently small cost threshold, we ensure that the fair clustering has cost below it if and only if none of the path-edges are "cut" by the clustering, corresponding to a partition of the \(a_{i}\).
There is nothing special about paths; we can arbitrarily restrict the shape of the trees, as long as it is always possible to form such a tree with a given number of vertices. However, the argument crucially relies on the absence of edges between the \(a_{i}\)-paths/trees and does not transfer to connected graphs. This is due to the close relation between inter-cluster costs and the total number of edges stated in Lemma 3. The complexity of Fair Correlation Clustering on a single path with a color ratio \(1:c\) therefore remains open. Notwithstanding, we show hardness for trees in two closely related settings: keeping the color ratio at \(1:c\) but raising the maximum degree to 5, or having a single path but a total of \(\nicefrac{{n}}{{2}}\) colors and each color shared by exactly 2 vertices.
For the case of maximum degree 5 and two colors with ratio \(1:c\), we can again build on the 3-Partition machinery. The construction is inspired by how Feldmann and Foschini [27] used the problem to show hardness of computing so-called \(k\)-balanced partitions. We adapt it to our setting in which the vertices are colored and the clusters need to be fair.
For the single path with \(\nicefrac{{n}}{{2}}\) colors, we reduce from (the 1-regular 2-colored variant of) the Paint Shop Problem for Words[24]. There, a word is given in which every symbol
Figure 2: The tree with diameter 4 in the reduction from 3-Partition to Fair Correlation Clustering.
appears exactly twice. The task is to assign the values 0 and 1 to the letters of the word5 such that, for each symbol, exactly one of the two occurrences receives a 1, but the number of blocks of consecutive 0s and 1s over the whole word is minimized. In the translation to Fair Correlation Clustering, we represent the word as a path and the symbols as colors. To remain fair, there must be two clusters containing exactly one vertex of each color, translating back to a 0/1-assignment to the word.
Footnote 5: The original formulation [24] assigns colors, aligning better with the paint shop analogy. We change the exposition here in order to avoid confusion with the colors in the fairness sense.
### Relaxed Fairness
One could think that the hardness of Fair Correlation Clustering already for classes of trees and forests has its origin in the strict fairness condition. After all, the color ratio in each cluster must precisely mirror that of the whole graph. This impression is deceptive. Instead, we lift most of our hardness results to Relaxed Fair Correlation Clustering considering the _relaxed fairness_ of Bera et al. [11]. Recall Definition 2. It prescribes two rationals \(p_{i}\) and \(q_{i}\) for each color \(i\) and allows the proportion of \(i\)-colored elements in any cluster to be in the interval \([p_{i},q_{i}]\), instead of being precisely \(\nicefrac{c_{i}}{d}\), where \(d=\sum_{j=1}^{k}c_{j}\).
The main conceptual idea is to show that, in some settings but not all, the _minimum-cost_ solution under a relaxed fairness constraint is in fact _exactly_ fair. This holds for the settings described above where we reduce from 3-Partition. In particular, Relaxed Fair Correlation Clustering with a color ratio of \(1:c\) is NP-hard on trees with diameter 4 and forests of paths, respectively. Furthermore, the transferal of hardness is immediate for the case of a single path with \(\nicefrac{{n}}{{2}}\) colors and exactly 2 vertices of each color. Any relaxation of fairness still requires one vertex of each color in every cluster, maintaining the equivalence to the Paint Shop Problem for Words.
In contrast, algorithmic results are more difficult to extend if there are relaxedly fair solutions that have lower cost than any exactly fair one. We then no longer know the cardinality of the clusters in an optimal solution. As a proof of concept, we show that a slight adaption of our dynamic program for two colors in a ratio of \(1:1\) still works for what we call _\(\alpha\)-relaxed fairness_.6 There, the lower fairness ratio is \(p_{i}=\alpha\cdot\frac{c_{i}}{d}\) and the upper one is \(q_{i}=\frac{1}{\alpha}\cdot\frac{c_{i}}{d}\) for some parameter \(\alpha\in(0,1)\). We give an upper bound on the necessary cluster size depending on \(\alpha\), which is enough to find a good splitting of the forest. Naturally, the running time now also depends on \(\alpha\), but is of the form \(\mathrm{O}(n^{h(1/\alpha)})\) for some polynomial \(h\). In particular, we get a polynomial-time algorithm for constant \(\alpha\). The proof of correctness
\begin{table}
\begin{tabular}{c c c c} \hline \hline Max. Degree & Color Ratio & Trees & Forests \\ \hline \multirow{2}{*}{\(2\)} & \(1:c\) & open & \(\mathsf{NP}\)-hard \\ & \(\nicefrac{n}{2}\) colors, \(2\) vertices each & \(\mathsf{NP}\)-hard & \(\mathsf{NP}\)-hard \\ \(5\) & \(1:c\) & \(\mathsf{NP}\)-hard & \(\mathsf{NP}\)-hard \\ \hline \hline \end{tabular}
\end{table}
Table 3: Hardness of Fair Correlation Clustering on trees and forests depending on the maximum degree. The value \(c\) is a positive integer, possibly depending on \(n\). The complexity for paths (trees with maximum degree 2) with color ratio \(1:c\) is open.
consists of an exhaustive case distinction already for the simple case of \(1:1\). We are confident that this can be extended to more general color ratios, but did not attempt it in this work.
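To make the relaxed constraint concrete, the following sketch checks \(\alpha\)-relaxed fairness of a single cluster against the global color distribution; the dictionary-based input format is an assumption for illustration only.

```python
from collections import Counter
from fractions import Fraction

def is_alpha_relaxed_fair(universe_colors, cluster, alpha):
    """alpha-relaxed fairness: for every color i, the share of i in the
    cluster lies in [alpha * c_i/d, (1/alpha) * c_i/d], where c_i/d is the
    global share of color i. Pass alpha as a Fraction in (0, 1) for exact
    arithmetic."""
    total = Counter(universe_colors.values())
    local = Counter(universe_colors[u] for u in cluster)
    n, s = len(universe_colors), len(list(cluster))
    for color, cnt in total.items():
        share = Fraction(local[color], s)
        global_share = Fraction(cnt, n)
        if not (alpha * global_share <= share <= global_share / alpha):
            return False
    return True
```

For \(\alpha=1\) this coincides with exact fairness, and smaller values of \(\alpha\) widen the allowed interval.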
### Summary and Outlook
We show that Fair Correlation Clustering on trees, and thereby forests, is NP-hard. It remains so on trees of constant degree or diameter, and, for certain color distributions, it is also NP-hard on paths. On the other hand, we give a polynomial-time algorithm if the minimum size \(d\) of a fair cluster is constant. We also provide an efficient algorithm for the color ratio \(1:c\) if the total number of clusters is constant, corresponding to \(c\in\Theta(n)\). For our main algorithms and hardness results, we prove that they still hold when the fairness constraint is relaxed, so the hardness is not due to the strict fairness definition. Ultimately, we hope that the insights gained from these proofs as well as our proposed algorithms prove helpful to the future development of algorithms to solve Fair Correlation Clustering on more general graphs. In particular, fairness with color ratio \(1:c\) with \(c\) being very large seems to be an interesting and potentially tractable type of distribution for future study.
As first steps to generalize our results, we give a polynomial-time approximation scheme (PTAS) for Fair Correlation Clustering on forests. Another avenue for future research is to exploit that Lemma 5, which bounds the cluster size of optimal solutions, extends also to bipartite graphs. This may prove helpful in developing exact algorithms for bipartite graphs with color ratios other than \(1:1\).
Parameterized algorithms are yet another approach to solving more general instances. When looking at the decision version of Fair Correlation Clustering, our results can be cast as an XP-algorithm when the problem is parameterized by the cluster size \(d\), for it can be solved in time \(\operatorname{O}(n^{g(d)})\) for some function \(g\). Similarly, we get an XP-algorithm for the number of clusters as parameter. We wonder whether Fair Correlation Clustering can be placed in the class FPT of fixed-parameter tractable problems for any interesting structural parameters. This would require a running time of, e.g., \(g(d)\cdot\mathsf{poly}(n)\). There are FPT-algorithms for Cluster Editing parameterized by the cost of the solution [15]. Possibly, future research might provide similar results for the fair variant as well. A natural extension of our dynamic programming approach could potentially lead to an algorithm parameterizing by the treewidth of the input graph. Such a solution would be surprising, however, since to the best of our knowledge even for normal, unfair Correlation Clustering7 and for the related Max Dense Graph Partition[22] no treewidth approaches are known.
Footnote 7: In more detail, no treewidth-based algorithm for complete Correlation Clustering has been proposed. Xin [36] gives a treewidth algorithm for _incomplete_ Correlation Clustering for the treewidth of the graph of all positively and negatively labeled edges.
Finally, it is interesting how Fair Correlation Clustering behaves on paths. While we obtain NP-hardness for a particular color distribution from the Paint Shop Problem For Words, the question whether Fair Correlation Clustering on paths with, for example, two colors in a ratio of \(1:c\) is efficiently solvable remains open. However, we believe that this question is more likely to be answered by studying the related (discrete) Necklace Splitting problem; see the work of Alon and West [6]. There, the desired cardinality of every color class is explicitly given, and it is non-constructively shown that there always exists a split of the necklace with the number of cuts meeting the obvious lower bound. A constructive splitting procedure may yield some insights for Fair Correlation Clustering on paths.
## 3 Preliminaries
We fix here the notation we are using for the technical part and give the formal definition of Fair Correlation Clustering.
### Notation
We refer to the set of natural numbers \(\{0,1,2,\ldots\}\) by \(\mathbb{N}\). For \(k\in\mathbb{N}\), let \([k]=\{1,2,\ldots,k\}\) and \(\mathbb{N}_{>k}=\mathbb{N}\setminus(\{0\}\cup[k])\). We write \(2^{[k]}\) for the power set of \([k]\). By \(\gcd(a_{1},a_{2},\ldots,a_{k})\) we denote the _greatest common divisor_ of \(a_{1},a_{2}\ldots,a_{k}\in\mathbb{N}\).
An _undirected graph_ \(G=(V,E)\) is defined by a set of vertices \(V\) and a set of edges \(E\subseteq{V\choose 2}=\{\{u,v\}\mid u,v\in V,u\neq v\}\). If not stated otherwise, by the _size of \(G\)_ we refer to \(n+m\), where \(n=|V|\) and \(m=|E|\). A graph is called _complete_ if \(m=\frac{n(n-1)}{2}\). We call a graph \(G=(A\cup B,E)\) _bipartite_ if there are no edges within \(A\) nor within \(B\), i.e., \(E\cap{A\choose 2}=E\cap{B\choose 2}=\emptyset\). For every \(S\subseteq V\), we let \(G[S]=\left(S,E\cap{S\choose 2}\right)\) denote the _subgraph induced by \(S\)_. The _degree_ of a vertex \(v\in V\) is the number of edges incident to that vertex, \(\delta(v)=|\{u\mid\{u,v\}\in E\}|\). The _degree_ of a graph \(G=(V,E)\) is the maximum degree of any of its vertices, \(\delta(G)=\max_{v\in V}\delta(v)\). A _path_ of length \(k\) in \(G\) is a tuple of vertices \((v_{1},v_{2},\ldots,v_{k+1})\) such that for each \(1\leqslant i\leqslant k\) we have \(\{v_{i},v_{i+1}\}\in E\). We only consider simple paths, i.e., we have \(v_{i}\neq v_{j}\) for all \(i\neq j\). A graph is called _connected_ if for every pair of vertices \(u\), \(v\) there is a path connecting \(u\) and \(v\). The _distance_ between two vertices is the length of the shortest path connecting these vertices and the _diameter_ of a graph is the maximum distance between a pair of vertices. A _circle_ is a path \((v_{1},v_{2},\ldots,v_{k})\) such that \(v_{1}=v_{k}\) and \(v_{i}\neq v_{j}\) for all other pairs \(i\neq j\).
A _forest_ is a graph without circles. A connected forest is called a _tree_. There is exactly one path connecting every pair of vertices in a tree. A tree is _rooted_ by choosing any vertex \(r\in V\) as the root. Then, every vertex \(v\), except for the root, has a _parent_, which is the next vertex on the path from \(v\) to \(r\). All vertices that have \(v\) as a parent are referred to as the _children_ of \(v\). A vertex without children is called a _leaf_. Given a rooted tree \(T\), by \(T_{v}\) we denote the subtree induced by \(v\) and its descendants, i.e., the set of vertices such that there is a path starting in \(v\) and ending in that vertex without using the edge to \(v\)'s parent. Observe that each forest is a bipartite graph, for example by placing all vertices with even distance to the root of their respective tree on one side and the other vertices on the other side.
A finite set \(U\) can be _colored_ by a function \(c:U\to[k]\), for some \(k\in\mathbb{N}_{>0}\). If there are only two colors, i.e., \(k=2\), for convenience we call them _red_ and _blue_ instead of referring to them by numbers.
For a _partition_\(\mathcal{P}=\{S_{1},S_{2},\ldots,S_{k}\}\) with \(S_{i}\cap S_{j}=\emptyset\) for \(i\neq j\) of some set \(U=S_{1}\cup S_{2}\cup\ldots\cup S_{k}\) and some \(u\in U\) we use \(\mathcal{P}[u]\) to refer to the set \(S_{i}\) for which \(u\in S_{i}\). Further, we define the term _coloring_ on sets and partitions. The _coloring of a set_ counts the number of occurrences of each color in the set.
[Coloring of Sets] Let \(S\) be a set colored by a function \(c\colon S\to[k]\). Then, the coloring of \(S\) is an array \(C_{S}\) such that \(C_{S}[i]=|\{s\in S\mid c(s)=i\}|\) for all \(i\in[k]\).
The _coloring of a partition_ counts the number of occurrences of set colorings in the partition.
[Coloring of Partitions] Let \(U\) be a colored set and let \(\mathcal{P}\) be a partition of \(U\). Let \(\mathcal{C}=\{C_{S}\mid S\subseteq U\}\) denote the set of set colorings for which there is a subset of \(U\) with that coloring. By an arbitrarily fixed order, let \(C_{1},C_{2},\ldots,C_{\ell}\) denote the elements of \(\mathcal{C}\). Then, the coloring of \(\mathcal{P}\) is an array \(C_{\mathcal{P}}\) such that \(C_{\mathcal{P}}[i]=|\{S\in\mathcal{P}\mid C_{S}=C_{i}\}|\) for all \(i\in[\ell]\).
### Problem Definitions
In order to define Fair Correlation Clustering, we first give a formal definition of the unfair clustering objective. Correlation Clustering receives a pairwise similarity measure for a set of objects and aims at minimizing the number of pairs of similar objects placed in separate clusters plus the number of pairs of dissimilar objects placed in the same cluster. For the sake of consistency, we reformulate the definition of Bonchi et al. [13] such that the pairwise similarity between objects is given by a graph rather than an explicit binary similarity function. Given a graph \(G=(V,E)\) and a partition \(\mathcal{P}\) of \(V\), the Correlation Clustering cost is
\[\operatorname{cost}(G,\mathcal{P})=|\,\{\{u,v\}\in\binom{V}{2}\setminus E\mid \mathcal{P}[u]=\mathcal{P}[v]\}|+|\,\{\{u,v\}\in E\mid\mathcal{P}[u]\neq \mathcal{P}[v]\}\,|.\]
We refer to the first summand as the _intra-cluster cost_\(\psi\) and the second summand as the _inter-cluster cost_\(\chi\). Where \(G\) is clear from context, we abbreviate to \(\operatorname{cost}(\mathcal{P})\). Sometimes, we consider the cost of \(\mathcal{P}\) on an induced subgraph. To this end, we allow the same cost definition as above also if \(\mathcal{P}\) partitions some set \(V^{\prime}\supseteq V\). We define (unfair) Correlation Clustering as follows.
\begin{tabular}{|l l|} \hline \multicolumn{1}{|c}{Correlation Clustering} \\
**Input:** & Graph \(G=(V,E)\). \\
**Task:** & Find a partition \(\mathcal{P}\) of \(V\) that minimizes \(\operatorname{cost}(\mathcal{P})\). \\ \hline \end{tabular}
We emphasize that this is the complete, unweighted, min-disagree form of Correlation Clustering. It is complete as every pair of objects is either similar or dissimilar but none is indifferent regarding the clustering. It is unweighted as the (dis)similarity between two vertices is binary. A pair of similar objects that are placed in separate clusters as well as a pair of dissimilar objects in the same cluster is called a _disagreement_, hence the naming of the min-disagree form. An alternative formulation would be the max-agree form with the objective to maximize the number of pairs that do not form a disagreement. Note that both formulations induce the same ordering of clusterings though approximation factors may differ because of the different formulations of the cost function.
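The cost function is simple enough to state directly in code. The following sketch computes it together with its intra- and inter-cluster parts; the list- and set-based input formats are illustrative assumptions.

```python
from itertools import combinations

def clustering_cost(vertices, edges, partition):
    """Complete, unweighted, min-disagree Correlation Clustering cost of a
    partition, split into the intra-cluster cost (missing edges inside
    clusters) and the inter-cluster cost (edges cut by the partition).
    `partition` is a list of disjoint vertex collections; each edge is
    assumed to be listed once."""
    cluster_of = {v: i for i, cluster in enumerate(partition) for v in cluster}
    edge_set = {frozenset(e) for e in edges}
    intra = sum(1 for u, v in combinations(vertices, 2)
                if cluster_of[u] == cluster_of[v]
                and frozenset((u, v)) not in edge_set)
    inter = sum(1 for u, v in edges if cluster_of[u] != cluster_of[v])
    return intra + inter, intra, inter
```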
Our definition of the Fair Correlation Clustering problem loosely follows [2]. The fairness aspect limits the solution space to _fair_ partitions. A partition is fair if each of its sets has the same color distribution as the universe that is partitioned.
[Fair Subset] Let \(U\) be a finite set of elements colored by a function \(c:U\to[k]\) for some \(k\in\mathbb{N}_{>0}\). Let \(U_{i}=\{u\in U\mid c(u)=i\}\) be the set of elements of color \(i\) for all \(i\in[k]\). Then, some \(S\subseteq U\) is fair if and only if for all colors \(i\in[k]\) we have \(\frac{|S\cap U_{i}|}{|S|}=\frac{|U_{i}|}{|U|}\).
[Fair Partition] Let \(U\) be a finite set of elements colored by a function \(c:U\to[k]\) for some \(k\in\mathbb{N}_{>0}\). Then, a partition \(S_{1}\cup S_{2}\cup\ldots\cup S_{\ell}=U\) is fair if and only if all sets \(S_{1},S_{2},\ldots,S_{\ell}\) are fair.
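A direct check of the two definitions can use exact fractions to avoid rounding issues; the dictionary-based coloring below is an assumed input format for illustration.

```python
from collections import Counter
from fractions import Fraction

def is_fair(colors, subset):
    """Fair Subset check: for every color, its share in `subset` equals its
    share in the whole universe. `colors` maps element -> color."""
    subset = list(subset)
    total = Counter(colors.values())
    local = Counter(colors[u] for u in subset)
    n, s = len(colors), len(subset)
    return all(Fraction(local[c], s) == Fraction(total[c], n) for c in total)

def is_fair_partition(colors, partition):
    """Fair Partition check: every cluster must itself be fair."""
    return all(is_fair(colors, cluster) for cluster in partition)

# toy universe with color ratio 1:2 (blue:red)
colors = {1: "red", 2: "red", 3: "blue", 4: "red", 5: "red", 6: "blue"}
print(is_fair_partition(colors, [{1, 2, 3}, {4, 5, 6}]))  # True
print(is_fair_partition(colors, [{1, 2, 4}, {3, 5, 6}]))  # False
```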
We now define the complete, unweighted, min-disagree variant of the Fair Correlation Clustering problem. When speaking of (Fair) Correlation Clustering, we refer to this variant, unless explicitly stated otherwise.
\begin{tabular}{|l l|} \hline \multicolumn{2}{|l|}{Fair Correlation Clustering} \\
**Input:** & Graph \(G=(V,E)\), coloring \(c\colon V\to[k]\). \\
**Task:** & Find a fair partition \(\mathcal{P}\) of \(V\) that minimizes \(\operatorname{cost}(\mathcal{P})\). \\ \hline \end{tabular}
## 4 Structural Insights
We prove here the structural results outlined in Subsection 2.1. The most important insight is that in bipartite graphs, and in forests in particular, there is always a minimum-cost fair clustering such that all clusters are of some fixed size. This property is very useful, as it helps in building reductions for hardness proofs as well as in designing algorithmic approaches that enumerate possible clusterings. Further, by the following lemma, this also implies that minimizing the inter-cluster cost suffices to minimize the Correlation Clustering cost, which simplifies the development of algorithms solving Fair Correlation Clustering on such instances.
Let \(\mathcal{P}\) be a partition of the vertices of an \(m\)-edge graph \(G\). Let \(\chi\) denote the inter-cluster cost incurred by \(\mathcal{P}\) on \(G\). If all sets in the partition are of size \(d\), then \(\text{cost}(\mathcal{P})=\frac{(d-1)}{2}\,n-m+2\chi\). In particular, if \(G\) is a tree, \(\text{cost}(\mathcal{P})=\frac{(d-3)}{2}\,n+2\chi+1\).
Proof.: Note that in each of the \(\frac{n}{d}\) clusters there are \(\frac{d(d-1)}{2}\) pairs of vertices, each incurring an intra-cost of \(1\) if not connected by an edge. Let the total intra-cost be \(\psi\). As there is a total of \(m\) edges, we have
\[\text{cost}(\mathcal{P})=\chi+\psi=\chi+\frac{n}{d}\cdot\frac{d(d-1)}{2}-(m- \chi)=\frac{(d-1)n}{2}-m+2\chi.\qed\]
In particular, if \(G\) is a tree, this yields \(\text{cost}(\mathcal{P})=\frac{(d-3)n}{2}+2\chi+1\) since then \(m=n-1\).
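As a quick sanity check of the formula on a toy tree, one can compare it with a direct computation, reusing a `clustering_cost` helper like the sketch given in the preliminaries; all concrete values below are illustrative.

```python
# toy tree: a path on n = 6 vertices, partitioned into two clusters of size d = 3
vertices = list(range(6))
edges = [(i, i + 1) for i in range(5)]   # m = n - 1 = 5 edges
partition = [{0, 1, 2}, {3, 4, 5}]       # cuts exactly one edge, so chi = 1

n, m, d, chi = 6, 5, 3, 1
total, intra, inter = clustering_cost(vertices, edges, partition)
assert inter == chi
assert total == (d - 1) * n // 2 - m + 2 * chi   # general formula: 3
assert total == (d - 3) * n // 2 + 2 * chi + 1   # tree specialization: 3
```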
### Forests
We find that in forests in every minimum-cost partition all sets in the partition are of the minimum size required to fulfill the fairness requirement.
Let \(F\) be a forest with \(k\geqslant 2\) colors in a ratio of \(c_{1}:c_{2}:\ldots:c_{k}\) with \(c_{i}\in\mathbb{N}_{>0}\) for all \(i\in[k]\), \(\gcd(c_{1},c_{2},\ldots,c_{k})=1\), and \(\sum_{i=1}^{k}c_{i}\geqslant 3\). Then, all clusters of every minimum-cost fair clustering are of size \(d=\sum_{i=1}^{k}c_{i}\).
Proof.: Let \(d=\sum_{i=1}^{k}c_{i}\). For any clustering \(\mathcal{P}\) of \(V\) to be fair, all clusters must be at least of size \(d\). We show that if there is a cluster \(S\) in the clustering with \(|S|>d\), then we decrease the cost by splitting \(S\). First note that in order to fulfill the fairness constraint, we have \(|S|=ad\) for some \(a\in\mathbb{N}_{\geqslant 2}\). Consider a new clustering \(\mathcal{P}^{\prime}\) obtained by splitting \(S\) into \(S_{1},S_{2}\), where \(S_{1}\subset S\) is an arbitrary fair subset of \(S\) of size \(d\) and \(S_{2}=S\setminus S_{1}\). Note that the cost incurred by every edge and non-edge with at most one endpoint in \(S\) is the same in both clusterings. Let \(\psi\) be the intra-cluster cost of \(\mathcal{P}\) on \(F[S]\). Regarding the cost incurred by the edges and non-edges with both endpoints in \(S\), we know that
\[\text{cost}(F[S],\mathcal{P})\geqslant\psi\geqslant\frac{ad(ad-1)}{2}-(ad-1)= \frac{a^{2}d^{2}-3ad+2}{2}\]
since the cluster is of size \(ad\) and as it is part of a forest it contains at most \(ad-1\) edges. In the worst case, \(\mathcal{P}^{\prime}\) cuts all the \(ad-1\) edges. However, we profit from the smaller cluster
sizes. We have \[\operatorname{cost}(F[S],\mathcal{P}^{\prime})=\chi+\psi \leqslant ad-1+\frac{d(d-1)}{2}+\frac{(a-1)d\cdot((a-1)d-1)}{2}\] \[=\frac{2d^{2}+a^{2}d^{2}-2ad^{2}+ad-2}{2}.\] Hence, \(\mathcal{P}^{\prime}\) is cheaper by \[\operatorname{cost}(F[S],\mathcal{P})-\operatorname{cost}(F[S],\mathcal{P}^{ \prime})\geqslant\frac{2ad^{2}-2d^{2}-4ad+4}{2}=ad(d-2)-d^{2}+2.\] This term is increasing in \(a\). As \(a\geqslant 2\), by plugging in \(a=2\), we hence obtain a lower bound of \(\operatorname{cost}(F[S],\mathcal{P})-\operatorname{cost}(F[S],\mathcal{P}^{ \prime})\geqslant d^{2}-4d+2\). For \(d\geqslant 2\), the bound is increasing in \(d\) and it is positive for \(d>3\). This means, if \(d>3\) no clustering with a cluster of size more than \(d\) has minimal cost implying that all optimum clusterings only consist of clusters of size \(d\).
Last, we have to argue the case \(d=3\), i.e., we have a color ratio of \(1:2\) or \(1:1:1\). In this case \(d^{2}-4d+2\) evaluates to \(-1\). However, we obtain a positive change if we do not split arbitrarily but keep at least one edge uncut. Note that this means that one edge less is cut and one more edge is present, which means that our upper bound on \(\operatorname{cost}(F[S],\mathcal{P}^{\prime})\) decreases by 2, so \(\mathcal{P}^{\prime}\) is now cheaper. Hence, assume there is an edge \(\{u,v\}\) such that \(c(u)\neq c(v)\). Then by splitting \(S\) into \(\{u,v,w\}\) and \(S\setminus\{u,v,w\}\) for some vertex \(w\in S\setminus\{u,v\}\) that makes the component \(\{u,v,w\}\) fair, we obtain a cheaper clustering. If there is no such edge \(\{u,v\}\), then \(F[S]\) is not connected. This implies there are at most \(3a-3\) edges if the color ratio is \(1:1:1\), since no edge connects vertices of different colors and the \(a\) vertices of each color span at most \(a-1\) edges due to the forest structure. By a similar argument, there are at most \(3a-2\) edges if the color ratio is \(1:2\). Hence, the lower bound on \(\operatorname{cost}(F[S],\mathcal{P})\) increases by 1. At the same time, even if \(\mathcal{P}^{\prime}\) cuts all edges it cuts at most \(3a-2\) edges, so it is at least 1 cheaper than anticipated. Hence, in this case \(\operatorname{cost}(F[S],\mathcal{P}^{\prime})<\operatorname{cost}(F[S],\mathcal{P})\) no matter how we cut.
Note that Lemma 4 makes no statement about the case of two colors in a ratio of \(1:1\).
### Bipartite Graphs
We are able to partially generalize our findings for trees to bipartite graphs. We show that there is still always a minimum-cost fair clustering with cluster sizes fixed by the color ratio. However, in bipartite graphs there may also be minimum-cost clusterings with larger clusters. We start with the case of two colors in a ratio of \(1:1\) and then generalize to other ratios.
Let \(G=(A\cup B,E)\) be a bipartite graph with two colors in a ratio of \(1:1\). Then, there is a minimum-cost fair clustering of \(G\) that has no clusters with more than 2 vertices. Further, each minimum-cost fair clustering can be transformed into a minimum-cost fair clustering such that all clusters contain no more than 2 vertices in linear time. If \(G\) is a forest, then no cluster in a minimum-cost fair clustering is of size more than 4.
Proof.: Note that, due to the fairness constraint, each fair clustering consists only of evenly sized clusters. We prove both statements by showing that in each cluster of at least 4 vertices there are always two vertices such that by splitting them from the rest of the cluster the cost does not increase and fairness remains.
Let \(\mathcal{P}\) be a clustering and \(S\in\mathcal{P}\) be a cluster with \(|S|\geqslant 4\). Let \(S_{A}=S\cap A\) and \(S_{B}=S\cap B\). Assume there are \(a\in S_{A}\) and \(b\in S_{B}\) such that \(a\) and \(b\) do not have the same color.
Then, the clustering \(\mathcal{P}^{\prime}\) obtained by splitting \(S\) into \(\{a,b\}\) and \(S\setminus\{a,b\}\) is fair. We now analyze for each pair of vertices \(u,v,u\neq v\) how the incurred Correlation Clustering cost changes. The cost does not change for every pair of vertices of which at most one vertex of \(u\) and \(v\) is in \(S\). Further, it does not change if either \(\{u,v\}=\{a,b\}\) or \(\{u,v\}\subseteq S\setminus\{a,b\}\). There are at most \(|S_{A}|-1+|S_{B}|-1=|S|-2\) edges with one endpoint in \(\{a,b\}\) and the other in \(S\setminus\{a,b\}\). Each of them is cut in \(\mathcal{P}^{\prime}\) but not in \(\mathcal{P}\), so they incur an extra cost of at most \(|S|-2\). However, due to the bipartite structure, there are \(|S_{A}|-1\) vertices in \(S\setminus\{a,b\}\) that have no edge to \(a\) and \(|S_{B}|-1\) vertices in \(S\setminus\{a,b\}\) that have no edge to \(b\). These \(|S|-2\) vertices incur a total cost of \(|S|-2\) in \(\mathcal{P}\) but no cost in \(\mathcal{P}^{\prime}\). This makes up for any cut edge in \(\mathcal{P}\), so splitting the clustering never increases the cost.
If there are no \(a\in S_{A}\) and \(b\in S_{B}\) such that \(a\) and \(b\) do not have the same color, then, as long as both \(S_{A}\) and \(S_{B}\) are non-empty, all vertices in \(S\) would have the same color, contradicting the fairness of \(S\). Hence, either \(S_{A}=\emptyset\) or \(S_{B}=\emptyset\). In both cases, there are no edges inside \(S\), so splitting the clustering in an arbitrary fair way never increases the cost.
By iteratively splitting large clusters in any fair clustering, we hence eventually obtain a minimum-cost fair clustering such that all clusters consist of exactly two vertices.
Now, assume \(G\) is a forest and that there is a minimum-cost clustering \(\mathcal{P}\) with some cluster \(S\in\mathcal{P}\) such that \(|S|=2a\) for some \(a\in\mathbb{N}_{>2}\). Consider a new clustering \(\mathcal{P}^{\prime}\) obtained by splitting \(S\) into \(\{u,v\}\) and \(S\setminus\{u,v\}\), where \(u\) and \(v\) are two arbitrary vertices of different color that each have at most \(1\) edge towards another vertex in \(S\). There are always two such vertices due to the forest structure and because there are \(\frac{|S|}{2}\) vertices of each color. Then, \(\mathcal{P}^{\prime}\) is still a fair clustering. Note that the cost incurred by each edge and non-edge with at most one endpoint in \(S\) is the same in both clusterings. Let \(\psi\) denote the intra-cluster cost of \(\mathcal{P}\) in \(G[S]\). Regarding the edges and non-edges with both endpoints in \(S\), we know that
\[\operatorname{cost}(G[S],\mathcal{P})\geqslant\psi\geqslant\frac{2a(2a-1)}{2}- (2a-1)=2a^{2}-3a+1\]
as the cluster consists of \(2a\) vertices and has at most \(2a-1\) edges due to the forest structure. In the worst case, \(\mathcal{P}^{\prime}\) cuts \(2\) edges. However, we profit from the smaller cluster sizes. We have
\[\operatorname{cost}(G[S],\mathcal{P}^{\prime})=\chi^{\prime}+\psi^{\prime}\leqslant 2+1+\frac{2(a-1)\cdot(2(a-1)-1)}{2}=2a^{2}-5a+6,\] where \(\chi^{\prime}\leqslant 2\) is the inter-cluster cost of \(\mathcal{P}^{\prime}\) on \(G[S]\) and its intra-cluster cost \(\psi^{\prime}\) is at most \(1\) for the pair \(\{u,v\}\) plus the number of pairs within \(S\setminus\{u,v\}\).
Hence, \(\mathcal{P}\) costs at least \(2a-5\) more than \(\mathcal{P}^{\prime}\), which is positive as \(a>2\). Thus, in every minimum-cost fair clustering all clusters are of size \(4\) or \(2\).
We employ an analogous strategy if there is a different color ratio than \(1:1\) in the graph. However, then we have to split more than \(2\) vertices from a cluster. To ensure that the clustering cost does not increase, we have to argue that we can take these vertices in some balanced way from both sides of the bipartite graph.
Let \(G=(A\cup B,E)\) be a bipartite graph with \(k\geqslant 2\) colors in a ratio of \(c_{1}:c_{2}:\ldots:c_{k}\) with \(c_{i}\in\mathbb{N}_{>0}\) for all \(i\in[k]\) and \(\gcd(c_{1},c_{2},\ldots,c_{k})=1\). Then, there is a minimum-cost fair clustering such that all its clusters are of size \(d=\sum_{i=1}^{k}c_{i}\). Further, each minimum-cost fair clustering with larger clusters can be transformed into a minimum-cost fair clustering such that all clusters contain no more than \(d\) vertices in linear time.
Proof.: Due to the fairness constraint, each fair clustering consists only of clusters that are of size \(ad\), where \(a\in\mathbb{N}_{>0}\). We prove the statements by showing that a cluster of size at least \(2d\) can be split such that the cost does not increase and fairness remains.
Let \(\mathcal{P}\) be a clustering and \(S\in\mathcal{P}\) be a cluster with \(|S|=ad\) for some \(a\geqslant 2\). Let \(S_{A}=S\cap A\) as well as \(S_{B}=S\cap B\) and w.l.o.g. \(|S_{A}|\geqslant|S_{B}|\). Our proof has three steps.
* First, we show that there is a fair \(\widetilde{S}\subseteq S\) such that \(|\widetilde{S}|=d\) and \(|\widetilde{S}\cap A|\geqslant|\widetilde{S}\cap B|\).
* Then, we construct a fair set \(\widehat{S}\subseteq S\) by replacing vertices in \(\widetilde{S}\) with vertices in \(S_{B}\setminus\widetilde{S}\) such that still \(|\widehat{S}|=d\) and \(|\widehat{S}_{A}|\geqslant|\widehat{S}_{B}|\), with \(\widehat{S}_{A}=\widehat{S}\cap A\) and \(\widehat{S}_{B}=\widehat{S}\cap B\), and additionally \(|\widehat{S}_{A}|-|\widehat{S}_{B}|\leqslant|S_{A}|-|S_{B}|\).
* Last, we prove that splitting \(S\) into \(\widehat{S}\) and \(S\setminus\widehat{S}\) does not increase the clustering cost.
We then observe that the resulting clustering is fair, so the lemma's statements hold: by repeatedly splitting larger clusters, any fair clustering with a cluster of more than \(d\) vertices is transformed into a fair clustering of at most the same cost in which all clusters have size \(d\).
For the first step, assume there were no such \(\widetilde{S}\subseteq S\), i.e., that we could only take \(s<\frac{d}{2}\) vertices from \(S_{A}\) without taking more than \(c_{i}\) vertices of each color \(i\in[k]\). Let \(s_{i}\) be the number of vertices of color \(i\) among these \(s\) vertices for all \(i\in[k]\). Then, if \(s_{i}=0\) there is no vertex of color \(i\) in \(S_{A}\), as otherwise we could take the respective vertex into \(\widetilde{S}\). Analogously, if \(s_{i}<c_{i}\), then there are no more than \(s_{i}\) vertices of color \(i\) in \(S_{A}\). If we take \(s_{i}=c_{i}\) vertices, then up to all of the \(ac_{i}=as_{i}\) vertices of that color are possibly in \(S_{A}\). Hence, \(|S_{A}|\leqslant\sum_{i=1}^{k}as_{i}=as<\frac{ad}{2}\). This contradicts \(|S_{A}|\geqslant|S_{B}|\) because \(|S_{A}|+|S_{B}|=ad\). Thus, there is a fair set \(\widetilde{S}\) of size \(d\) such that \(|\widetilde{S}\cap S_{A}|\geqslant|\widetilde{S}\cap S_{B}|\).
Now, for the second step, we transform \(\widetilde{S}\) into \(\widehat{S}\). Note that, if \(|S_{A}\setminus\widetilde{S}|\geqslant|S_{B}\setminus\widetilde{S}|\), it suffices to set \(\widehat{S}=\widetilde{S}\). Otherwise, we replace some vertices from \(\widetilde{S}\cap S_{A}\) by vertices of the respective color from \(S_{B}\setminus\widetilde{S}\). We have to show that after this we still take at least as many vertices from \(S_{A}\) as from \(S_{B}\) and \(|S_{A}|-|\widehat{S}_{A}|\geqslant|S_{B}|-|\widehat{S}_{B}|\). Let
\[\delta=|S_{B}\setminus\widetilde{S}|-|S_{A}\setminus\widetilde{S}|>0.\]
Recall that \(|S_{A}|\geqslant|S_{B}|\), so \(\delta\leqslant|\widetilde{S}\cap A|-|\widetilde{S}\cap B|\). Then, we build \(\widehat{S}\) from \(\widetilde{S}\) by replacing \(\frac{\delta}{2}\leqslant\frac{d}{2}\) vertices from \(\widetilde{S}\cap S_{A}\) with vertices of the respective color from \(S_{B}\setminus\widetilde{S}\). If there are such \(\frac{\delta}{2}\) vertices, we have \(|S_{A}\setminus\widehat{S}_{A}|=|S_{B}\setminus\widehat{S}_{B}|\) and \(|\widehat{S}_{A}|\geqslant|\widehat{S}_{B}|\). Consequently, \(\widehat{S}\) fulfills the requirements.
Assume there would be no such \(\frac{\delta}{2}\) vertices but that we could only replace \(s<\frac{\delta}{2}\) vertices. Let \(s_{i}\) be the number of vertices of color \(i\) among these vertices for all \(i\in[k]\). By a similar argumentation as above and because there are only \((a-1)c_{i}\) vertices of each color \(i\) in \(S\setminus\widetilde{S}\), we have
\[|S_{B}\setminus\widehat{S}|\leqslant\sum_{i=1}^{k}(a-1)s_{i}=(a-1)s<\frac{(a- 1)d}{2}.\]
This contradicts \(|S_{B}\setminus\widetilde{S}|>|S_{A}\setminus\widetilde{S}|\) as \(|(S_{A}\cup S_{B})\setminus\widetilde{S}|=(a-1)d\). Hence, there are always enough vertices to create \(\widehat{S}\).
For the last step, we show that splitting \(S\) into \(\widehat{S}\) and \(S\setminus\widehat{S}\) does not increase the cost by analyzing the change for each pair of vertices \(\{u,v\}\in{V\choose 2}\). If not both \(u\in S\) and \(v\in S\), the pair is not affected. Further, it does not change if either \(\{u,v\}\subseteq\widehat{S}\) or \(\{u,v\}\subseteq(S\setminus\widehat{S})\). For the remaining pairs of vertices, there are at most
\[|\widehat{S}_{A}|\cdot|S_{B}\setminus\widehat{S}_{B}|+|\widehat{S}_{B}|\cdot| S_{A}\setminus\widehat{S}_{A}|=|\widehat{S}_{A}|\cdot|S_{B}|+|\widehat{S}_{B}| \cdot|S_{A}|-2\left(|\widehat{S}_{A}|\cdot|\widehat{S}_{B}|\right)\]
edges that are cut when splitting \(S\) into \(\widehat{S}\) and \(S\setminus\widehat{S}\). At the same time, there are
\[|\widehat{S}_{A}|\cdot|S_{A}\setminus\widehat{S}_{A}|+|\widehat{S}_{B}|\cdot| S_{B}\setminus\widehat{S}_{B}|=|\widehat{S}_{A}|\cdot|S_{A}|+|\widehat{S}_{B}| \cdot|S_{B}|-|\widehat{S}_{A}|^{2}-|\widehat{S}_{B}|^{2}\]
pairs of vertices that are not connected and placed in separate clusters in \(\mathcal{P}^{\prime}\) but not in \(\mathcal{P}\). Hence, \(\mathcal{P}\) is more expensive than \(\mathcal{P}^{\prime}\) by at least
\[\begin{split}\operatorname{cost}(\mathcal{P})-\operatorname{cost}( \mathcal{P}^{\prime})&\geqslant|\widehat{S}_{A}|\cdot|S_{A}|+| \widehat{S}_{B}|\cdot|S_{B}|-|\widehat{S}_{A}|\cdot|S_{B}|-|\widehat{S}_{B}| \cdot|S_{A}|\\ &\quad-\left(|\widehat{S}_{A}|^{2}-2\left(|\widehat{S}_{A}|\cdot |\widehat{S}_{B}|\right)+|\widehat{S}_{B}|^{2}\right)\\ &\geqslant\left(|\widehat{S}_{A}|-|\widehat{S}_{B}|\right)\cdot \left(|S_{A}|-|S_{B}|\right)-\left(|\widehat{S}_{A}|-|\widehat{S}_{B}|\right)^ {2}.\end{split}\]
This is non-negative as \(|\widehat{S}_{A}|\geqslant|\widehat{S}_{B}|\) and \(|\widehat{S}_{A}|-|\widehat{S}_{B}|\leqslant|S_{A}|-|S_{B}|\). Hence, splitting a cluster like this never increases the cost.
Unlike in forests, however, in bipartite graphs the color ratio yields no bound on the maximum cluster size of minimum-cost fair clusterings; the lemma above only guarantees that _some_ minimum-cost fair clustering has bounded cluster size. Let \(G=(R\cup B,\{\{r,b\}\mid r\in R\wedge b\in B\})\) be a complete bipartite graph with \(|R|=|B|\) such that all vertices in \(R\) are red and all vertices in \(B\) are blue. Then, all fair clusterings in \(G\) have the same cost, including the one with a single cluster \(S=R\cup B\). This holds because of a similar argument as employed in the last part of Lemma 10: every edge that is cut by a clustering is compensated for by exactly one pair of non-adjacent vertices that is then no longer in the same cluster.
## 5 Hardness Results
This section provides \(\mathsf{NP}\)-hardness proofs for Fair Correlation Clustering under various restrictions.
### Forests and Trees
With the knowledge of the fixed sizes of clusters in a minimum-cost clustering, we are able to show that the problem is surprisingly hard, even when limited to certain instances of forests and trees.
To prove the hardness of Fair Correlation Clustering under various assumptions, we reduce from the strongly \(\mathsf{NP}\)-complete 3-Partition problem [29].
\begin{tabular}{|l l|} \hline
3-Partition & \\
**Input:** & \(n=3p\) with \(p\in\mathbb{N}\), positive integers \(a_{1},a_{2},\ldots,a_{n}\), and \(B\in\mathbb{N}\) such that \(\frac{B}{4}<a_{i}<\frac{B}{2}\) as well as \(\sum_{i=1}^{n}a_{i}=pB\). \\
**Task:** & Decide if there is a partition of the numbers \(a_{i}\) into triples such that the sum of each triple is \(B\). \\ \hline \end{tabular}
Our first reduction yields hardness for many forms of forests.
_Fair Correlation Clustering on forests with two colors in a ratio of \(1:c\) is \(\mathsf{NP}\)-hard. It remains \(\mathsf{NP}\)-hard when arbitrarily restricting the shape of the trees in the forest, as long as for every \(a\in\mathbb{N}\) it is possible to form a tree with \(a\) vertices._
Proof.: We reduce from 3-Partition. For every \(a_{i}\), we construct an arbitrarily shaped tree of \(a_{i}\) red vertices. Further, we let there be \(p\) isolated blue vertices. Note that the ratio
between blue and red vertices is \(1:B\). We now show that there is a fair clustering \(\mathcal{P}\) such that
\[\text{cost}(\mathcal{P})=p\cdot\frac{B(B+1)}{2}-p(B-3)\]
if and only if the given instance is a yes-instance for 3-Partition.
If we have a yes-instance of 3-Partition, then there is a partition of the set of trees into \(p\) clusters of size \(B\). By assigning the blue vertices arbitrarily to one unique cluster each, we hence obtain a fair partition. As there are no edges between the clusters and each cluster consists of \(B+1\) vertices and \(B-3\) edges, this partition has a cost of \(p\cdot\frac{B(B+1)}{2}-p(B-3)\).
For the other direction, assume there is a fair clustering of cost \(p\cdot\frac{B(B+1)}{2}-p(B-3)\). By Lemma 4, each of the clusters consists of exactly one blue and \(B\) red vertices. Each cluster requires \(\frac{B(B+1)}{2}\) edges, but the graph has only \(p(B-3)\) edges. The intra-cluster cost alone is hence at least \(p\cdot\frac{B(B+1)}{2}-p(B-3)\). This means that the inter-cluster cost is \(0\), i.e., the partition does not cut any edges inside the trees. Since all trees are of size greater than \(\frac{B}{4}\) and less than \(\frac{B}{2}\), this implies that each cluster consists of exactly one blue vertex and exactly three uncut trees with a total of \(B\) vertices. This way, such a clustering gives a solution to 3-Partition, so our instance is a yes-instance.
As the construction of the graph only takes polynomial time in the instance size, this implies our hardness result.
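The construction in this reduction is mechanical. The following sketch builds the Fair Correlation Clustering forest from a 3-Partition instance, realizing each \(a_{i}\)-tree as a path, which the theorem explicitly allows; the data formats and names are illustrative.

```python
def three_partition_to_fcc_forest(a, B):
    """Build the forest instance from a 3-Partition instance a_1, ..., a_n
    with n = 3p and sum(a) = p * B: one red path per a_i and p isolated
    blue vertices. Returns (colors, edges)."""
    p = len(a) // 3
    assert sum(a) == p * B
    colors, edges = {}, []
    next_id = 0
    for a_i in a:
        path = list(range(next_id, next_id + a_i))
        next_id += a_i
        for v in path:
            colors[v] = "red"
        edges += [(u, v) for u, v in zip(path, path[1:])]
    for _ in range(p):
        colors[next_id] = "blue"   # isolated blue vertex
        next_id += 1
    return colors, edges
```

By the proof above, the resulting instance admits a fair clustering of cost \(p\cdot\frac{B(B+1)}{2}-p(B-3)\) if and only if the 3-Partition instance is a yes-instance.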
Note that the hardness holds in particular for forests of paths, i.e., for forests with maximum degree \(2\).
With the next theorem, we adjust the proof of Theorem 11 to show that the hardness remains if the graph is connected.
Fair Correlation Clustering on trees with diameter 4 and two colors in a ratio of \(1:c\) is NP-hard.
Proof.: We reduce from 3-Partition. For every \(a_{i}\), we construct a star of \(a_{i}\) red vertices. Further, we let there be a star of \(p\) blue vertices. We obtain a tree of diameter \(4\) by connecting the center \(v\) of the blue star to all the centers of the red stars. The construction is depicted in Figure 3. Note that the ratio between blue and red vertices is \(1:B\). We now show that there is a fair clustering \(\mathcal{P}\) such that
\[\text{cost}(\mathcal{P})\leqslant\frac{pB^{2}-pB}{2}+7p-7\]
Figure 3: The tree with diameter \(4\) in the reduction from 3-Partition to Fair Correlation Clustering. The notation follows that of Theorem 11.
if and only if the given instance is a yes-instance for 3-Partition.
If we have a yes-instance of 3-Partition, then there is a partition of the set of stars into \(p\) clusters of size \(B\), each consisting of three stars. By assigning the blue vertices arbitrarily to one unique cluster each, we hence obtain a fair partition. We first compute the inter-cluster cost \(\chi\). We call an edge _blue_ or _red_ if it connects two blue or red vertices, respectively. We call an edge _blue-red_ if it connects a blue and a red vertex. All \(p-1\) blue edges are cut. Further, all edges between \(v\) (the center of the blue star) and red vertices are cut except for the three stars to which \(v\) is assigned. This causes \(3p-3\) more cuts, so the inter-cluster cost is \(\chi=4p-4\). Each cluster consists of \(B+1\) vertices and \(B-3\) edges, except for the one containing \(v\) which has \(B\) edges. The intra-cluster cost is hence
\[\psi=p\left(\frac{B(B+1)}{2}-B+3\right)-3=\frac{pB^{2}-pB}{2}+3p-3.\]
Combining the intra- and inter-cluster costs yields the desired cost of
\[\operatorname{cost}(\mathcal{P})=\chi+\psi=\frac{pB^{2}-pB}{2}+7p-7.\]
For the other direction, assume there is a fair clustering of cost at most \(\frac{pB^{2}-pB}{2}+7p-7\). As there are \(p(B+1)\) vertices, Lemma 4 gives that there are exactly \(p\) clusters, each consisting of exactly one blue and \(B\) red vertices. Let \(a\) denote the number of red center vertices in the cluster of \(v\). We show that \(a=3\). To this end, let \(\chi_{r}\) denote the number of cut red edges. We additionally cut \(p-1\) blue and \(3p-a\) blue-red edges. The inter-cluster cost of the clustering hence is \(\chi=\chi_{r}+4p-a-1\). Regarding the intra-cluster cost, there are no missing blue edges and as \(v\) is the only blue vertex with blue-red edges, there are \((p-1)B+B-a=pB-a\) missing blue-red edges. Last, we require \(p\cdot\frac{B(B-1)}{2}\) red edges, but the graph has only \(pB-3p\) red edges and \(\chi_{r}\) of them are cut. Hence, there are at least \(p\cdot\frac{B(B-1)}{2}-pB+3p+\chi_{r}\) missing red edges, resulting in a total intra-cluster cost of \(\psi\geqslant p\cdot\frac{B(B-1)}{2}+3p+\chi_{r}-a\). This results in a total cost of
\[\operatorname{cost}(\mathcal{P})=\chi+\psi\geqslant\frac{pB^{2}-pB}{2}+7p+2 \chi_{r}-2a-1.\]
As we assumed \(\operatorname{cost}(\mathcal{P})\leqslant\frac{pB^{2}-pB}{2}+7p-7\), we have \(2\chi_{r}-2a+6\leqslant 0\), which implies \(a\geqslant 3\) since \(\chi_{r}\geqslant 0\). Additionally, \(\chi_{r}\geqslant\frac{aB}{4}-(B-a)\), because there are at least \(\frac{B}{4}\) red vertices connected to each of the \(a\) chosen red centers but only a total of \(B-a\) of them can be placed in their center's cluster. Thus, we have \(\frac{aB}{2}-2B+6=\frac{(a-4)B}{2}+6\leqslant 0\), implying \(a<4\) and proving our claim of \(a=3\). Further, as \(a=3\), we obtain \(\chi_{r}\leqslant 0\), meaning that no red edges are cut, so each red star is completely contained in a cluster. Given that every red star is of size at least \(\frac{B}{4}\) and at most \(\frac{B}{2}\), this means each cluster consists of exactly three complete red stars with a total number of \(B\) red vertices each and hence yields a solution to the 3-Partition instance.
As the construction of the graph only takes polynomial time in the instance size and the constructed tree is of diameter 4, this implies our hardness result.
The proofs of Theorems 11 and 12 follow the same idea as the hardness proof of [27, Theorem 2], which also reduces from 3-Partition to prove a hardness result on the \(k\)-Balanced Partitioning problem. There, the task is to partition the vertices of an uncolored graph into \(k\) clusters of equal size [27].
\(k\)-Balanced Partitioning is related to Fair Correlation Clustering on forests in the sense that the clustering has to partition the forest into clusters of equal sizes by Lemmas 4 and 10. Hence, on forests we can regard Fair Correlation Clustering as the fair variant of \(k\)-Balanced Partitioning. By [27, Theorem 8], \(k\)-Balanced Partitioning is NP-hard on trees of degree 5. In their proof, Feldmann and Foschini [27] reduce from 3-Partition. We slightly adapt their construction to transfer the result to Fair Correlation Clustering.
Fair Correlation Clustering on trees of degree at most 5 with two colors in a ratio of \(1:c\) is NP-hard.
Proof.: We reduce from 3-Partition, which remains strongly NP-hard when limited to instances where \(B\) is a multiple of 4 since for every instance we can create an equivalent instance by multiplying all integers by 4. Hence, assume a 3-Partition instance such that \(B\) is a multiple of 4. We construct a graph for Fair Correlation Clustering by representing each \(a_{i}\) for \(i\in[n]\) by a gadget \(T_{i}\). Each gadget has a center vertex that is connected to the end of five paths: one path of length \(a_{i}\), three paths of length \(\frac{B}{4}\), and one path of length \(\frac{B}{4}-1\). Then, for \(i\in[n-1]\), we connect the dangling ends of the paths of length \(\frac{B}{4}-1\) in the gadgets \(T_{i}\) and \(T_{i+1}\) by an edge. So far, the construction is similar to the one by Feldmann and Foschini [27]. We color all vertices added so far in red. Then, we add a path of \(\frac{4n}{3}\) blue vertices and connect it by an edge to an arbitrary vertex of degree 1. The resulting graph is depicted in Figure 4.
Note that the construction takes polynomial time and we obtain a graph of degree 5. We now prove that it has a fair clustering \(\mathcal{P}\) such that
\[\operatorname{cost}(\mathcal{P})\leqslant\frac{(B-2)n}{2}+\frac{20n}{3}-3\]
if and only if the given instance is a yes-instance for 3-Partition.
Figure 4: Tree with maximum degree 5 in the reduction from 3-Partition to Fair Correlation Clustering (Theorem 13).
Assume we have a yes-instance for 3-Partition. We cut the edges connecting the different gadgets as well as the edges connecting the \(a_{i}\)-paths to the centers of their gadgets. Then, we have \(n\) components of size \(B\) and one component of size \(a_{i}\) for each \(i\in[n]\). The latter ones can be merged into \(p=\frac{n}{3}\) clusters of size \(B\) without further cuts. Next, we cut all edges between the blue vertices and assign one blue vertex to each cluster. Note that the blue vertex that is already connected to a red vertex should be assigned to that vertex's cluster. This way, we obtain a fair clustering with inter-cluster cost \(\chi=n-1+n+\frac{4n}{3}-1=\frac{10n}{3}-2\), which, by Lemma 3, gives \(\operatorname{cost}(\mathcal{P})=\frac{(B-2)n}{2}+\frac{20n}{3}-3\).
For the other direction, let there be a minimum-cost fair clustering \(\mathcal{P}\) of cost at most \(\frac{(B-2)n}{2}+\frac{20n}{3}-3\). As \(\sum_{i=1}^{n}a_{i}=\frac{nB}{3}\), the graph consists of \(\frac{4n}{3}\cdot B\) red and \(\frac{4n}{3}\) blue vertices. By Lemma 4, \(\mathcal{P}\) hence consists of \(\frac{4n}{3}\) clusters, each consisting of one blue vertex and \(B\) red vertices. Thus, \(\mathcal{P}\) has to cut the \(\frac{4n}{3}-1\) edges on the blue path. Also, \(\mathcal{P}\) has to partition the red vertices into sets of size \(B\). By [27, Lemma 9] this requires at least \(2n-1\) cuts. This bounds the inter-cluster cost by \(\chi\geqslant 2n-1+\frac{4n}{3}-1=\frac{10n}{3}-2\), leading to a Correlation Clustering cost of \(\frac{(B-2)n}{2}+\frac{20n}{3}-3\) as seen above, so we know that no more edges are cut. Further, the unique minimum-sized set of edges that upon removal leaves no red components of size larger than \(B\) is the set of the \(n-1\) edges connecting the gadgets and the \(n\) edges connecting the \(a_{i}\) paths to the center vertices [27, Lemma 9]. Hence, \(\mathcal{P}\) has to cut exactly these edges. As no other edges are cut, the \(a_{i}\) paths can be combined to clusters of size \(B\) without further cuts, so the given instance has to be a yes-instance for 3-Partition.
### Paths
Theorem 11 yields that Fair Correlation Clustering is NP-hard even in a forest of paths. The problem when limited to instances of a single connected path is closely related to the Necklace Splitting problem [5, 6].
\begin{tabular}{|l l|} \hline \multicolumn{2}{|l|}{Discrete Necklace Splitting} \\
**Input:** & A sequence (necklace) of beads, each colored with one of \(t\) colors, and an integer \(k\). \\
**Task:** & Split the necklace with as few cuts as possible such that the resulting intervals can be distributed into \(k\) collections, each containing the same number of beads of every color. \\ \hline \end{tabular}
The only difference to Fair Correlation Clustering on paths, other than the naming, is that the number of clusters \(k\) is explicitly given. From Lemmas 4 and 10 we are implicitly given this value also for Fair Correlation Clustering, though. However, Alon and West [6] do not constructively minimize the number of cuts required for a fair partition but non-constructively prove that there is always a partition with at most \((k-1)\cdot t\) cuts if there are \(t\) colors and the partition is required to consist of exactly \(k\) sets with the same number of vertices of each color. Thus, it does not directly help us when solving the optimization problem.
Moreover, Fair Correlation Clustering on paths is related to the 1-regular 2-colored variant of the Paint Shop Problem for Words (PPW). For PPW, a word is given as well as a set of colors, and for each symbol and color a requirement of how many such symbols should be colored accordingly. The task is to find a coloring that fulfills all requirements and minimizes the number of color changes between adjacent letters [24].
Let for example \(w=aabab\) and \(r(a,1)=2,r(a,2)=r(b,1)=r(b,2)=1\). Then, the assignment \(f\) with \(f(1)=f(2)=f(3)=1\) and \(f(4)=f(5)=2\) fulfills the requirement and has 1 color change.
PPW instances with a word containing every symbol exactly twice and two PPW-colors, each requiring one of each symbol, are called _1-regular 2-colored_ and are shown to be \(\mathsf{NP}\)-hard and even \(\mathsf{APX}\)-hard [14]. With this, we prove \(\mathsf{NP}\)-hardness of Fair Correlation Clustering even on paths.
Fair Correlation Clustering on paths is \(\mathsf{NP}\)-hard, even when limited to instances with exactly 2 vertices of each color.
Proof.: We reduce from 1-regular 2-colored PPW. Let \(w=s_{1}s_{2}\cdots s_{\ell}\). We represent the \(\frac{\ell}{2}\) different symbols by \(\frac{\ell}{2}\) colors and construct a path on \(\ell\) vertices, where the \(i\)-th vertex is colored by the color representing symbol \(s_{i}\). By Lemma 4, any optimum Fair Correlation Clustering solution partitions the path into two clusters, each containing every color exactly once, while minimizing the number of cuts (the inter-cluster cost) by Lemma 3. As this is exactly equivalent to assigning the letters in the word to one of two colors and minimizing the number of color changes, we obtain our hardness result.
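The translation used in this proof is a one-to-one renaming of positions and symbols; a minimal sketch with illustrative data formats:

```python
def ppw_word_to_fcc_path(word):
    """Translate a 1-regular 2-colored PPW word (every symbol occurs exactly
    twice) into a Fair Correlation Clustering instance on a path: position i
    becomes vertex i, and its color is the symbol at that position."""
    colors = {i: ch for i, ch in enumerate(word)}
    edges = [(i, i + 1) for i in range(len(word) - 1)]
    return colors, edges

# e.g. the word "abba": a fair clustering has two clusters, each with one
# 'a'-colored and one 'b'-colored vertex; cut edges correspond to color changes
colors, edges = ppw_word_to_fcc_path("abba")
```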
\(\mathsf{APX}\)-hardness, however, is not transferred: although there is a relationship between the number of cuts (the inter-cluster cost) and the Correlation Clustering cost, the two measures are not identical. In fact, as Fair Correlation Clustering admits a PTAS on forests (cf. the outlook in Section 2), \(\mathsf{APX}\)-hardness on paths would imply \(\mathsf{P}=\mathsf{NP}\).
On a side note, observe that for every Fair Correlation Clustering instance on paths we can construct an equivalent PPW instance (though not all of them are 1-regular 2-colored) by representing colors by symbols and clusters by PPW-colors.
We note that it may be possible to efficiently solve Fair Correlation Clustering on paths if there are e.g. only two colors. There is an \(\mathsf{NP}\)-hardness result on PPW with just two letters in [24], but a reduction from these instances is not as easy as above since its requirements imply an unfair clustering.
### Beyond Trees
By Theorem 12, Fair Correlation Clustering is \(\mathsf{NP}\)-hard even on trees with diameter 4. Here, we show that if we allow the graph to contain circles, the problem is already \(\mathsf{NP}\)-hard for diameter 2. Also, this nicely contrasts the fact that Fair Correlation Clustering is solved on trees of diameter 2 in linear time, as we will see in Subsection 6.1.
Fair Correlation Clustering on graphs of diameter 2 with two colors in a ratio of \(1:1\) is \(\mathsf{NP}\)-hard.
Proof.: Cluster Editing, which is an alternative formulation of Correlation Clustering, is NP-hard on graphs of diameter \(2\)[9]. Further, Ahmadi et al. [1] give a reduction from Correlation Clustering to Fair Correlation Clustering with a color ratio of \(1:1\). They show that one can solve Correlation Clustering on a graph \(G=(V,E)\) by solving Fair Correlation Clustering on the graph \(G^{\prime}=(V\cup V^{\prime},E\cup E^{\prime}\cup\widetilde{E})\) that mirrors \(G\). The vertices in \(V\) are colored blue and the vertices in \(V^{\prime}\) are colored red. Formally, \(V^{\prime}=\{u^{\prime}\mid u\in V\}\) and \(E^{\prime}=\{\{u^{\prime},v^{\prime}\}\mid\{u,v\}\in E\}\). Further, \(\widetilde{E}\) connects every vertex with its mirrored vertex as well as the mirrors of adjacent vertices, i.e., \(\widetilde{E}=\{\{u,u^{\prime}\}\mid u\in V\}\cup\{\{u,v^{\prime}\}\mid u\in V \wedge v^{\prime}\in V^{\prime}\wedge\{u,v\}\in E\}\), see Figure 5.
Observe that if \(G\) has diameter \(2\) then \(G^{\prime}\) also has diameter \(2\) as follows. As every pair of vertices \(\{u,v\}\in\binom{V}{2}\) is of maximum distance \(2\) and the vertices as well as the edges of \(G\) are mirrored, every pair of vertices \(\{u^{\prime},v^{\prime}\}\in\binom{V^{\prime}}{2}\) is of maximum distance \(2\). Further, every vertex and its mirrored vertex have a distance of \(1\). For every pair of vertices \(u\in V,v^{\prime}\in V^{\prime}\) we distinguish two cases. If \(\{u,v\}\in E\), then \(\{u,v^{\prime}\}\in\widetilde{E}\), so the distance is \(1\). Otherwise, as the distance between \(u\) and \(v\) is at most \(2\) in \(G\), there is \(w\in V\) such that \(\{u,w\}\in E\) and \(\{v,w\}\in E\). Thus, \(\{u,w^{\prime}\}\in\widetilde{E}\) and \(\{w^{\prime},v^{\prime}\}\in E^{\prime}\), so the distance of \(u\) and \(v^{\prime}\) is at most \(2\).
As Correlation Clustering on graphs with diameter \(2\) is NP-hard and the reduction by Ahmadi et al. [1] constructs a graph of diameter \(2\) if the input graph is of diameter \(2\), we have proven the statement.
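The mirroring construction is easy to reproduce. The following Python sketch (our own illustration with invented function and variable names, not code from [1]) builds the graph \(G^{\prime}\) described above from an adjacency-set representation of \(G\).

```
def mirror_graph(adj):
    """Build the mirrored graph G' described above.

    `adj` maps each vertex u of G to the set of its neighbours.
    Original vertices keep their name (blue); mirrored vertices are
    tuples (u, 'mirror') (red)."""
    mirror = {u: (u, "mirror") for u in adj}
    adj2 = {u: set(adj[u]) for u in adj}                               # E
    adj2.update({mirror[u]: {mirror[v] for v in adj[u]} for u in adj})  # E'
    for u in adj:                                                      # edges of E~
        adj2[u].add(mirror[u])
        adj2[mirror[u]].add(u)
        for v in adj[u]:
            adj2[u].add(mirror[v])
            adj2[mirror[v]].add(u)
    color = {u: "blue" for u in adj}
    color.update({mirror[u]: "red" for u in adj})
    return adj2, color

# Tiny example: a path on three vertices.
adj = {1: {2}, 2: {1, 3}, 3: {2}}
adj2, color = mirror_graph(adj)
print(len(adj2), sum(len(s) for s in adj2.values()) // 2)  # 6 vertices, 11 edges
```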
Further, we show that on general graphs Fair Correlation Clustering is NP-hard, even if the colors of the vertices allow for no more than \(2\) clusters in any fair clustering. This contrasts with our algorithm in Subsection 6.4, which solves Fair Correlation Clustering on forests in polynomial time if the maximum number of clusters is constant. To this end, we reduce from the NP-hard Bisection problem [29], which is the \(k=2\) case of \(k\)-Balanced Partitioning.
```
Input: A graph \(G=(V,E)\).
Task: Find a partition \(\mathcal{P}=\{A,B\}\) of \(V\) that minimizes \(|\{\{u,v\}\in E\mid u\in A\wedge v\in B\}|\) under the constraint that \(|A|=|B|\).
```
Fair Correlation Clustering on graphs with two colors in a ratio of \(1:c\) is NP-hard, even if \(c=\frac{n}{2}-1\) and the graph is connected.
Figure 5: Graph as constructed by Ahmadi et al. [1] for the reduction from Correlation Clustering to Fair Correlation Clustering. The blue vertices and edges correspond to the original graph \(G=(V,E)\), red vertices and edges to its mirror, i.e., \(V^{\prime}\) and \(E^{\prime}\), and black edges to \(\widetilde{E}\).
Proof.: We reduce from Bisection. Let \(G=(V,E)\) be a Bisection instance and assume it has an even number of vertices (otherwise it is a trivial no-instance). The idea is to color all of the vertices in \(V\) red and to add two cliques, each consisting of one blue and \(|V|\) red vertices, to enforce that a minimum-cost Fair Correlation Clustering consists of exactly two clusters and thereby partitions the vertices of the original graph in a minimum-cost bisection. The color ratio is \(2:3|V|\), which equals \(1:\left(\frac{|V^{\prime}|}{2}-1\right)\) with \(V^{\prime}\) being the vertex set of the newly constructed graph. We have to rule out the possibility that a minimum-cost Fair Correlation Clustering is just one cluster containing the whole graph. We do this by connecting the new blue vertices \(v_{1},v_{2}\) to only one arbitrary red vertex \(v\in V\). We illustrate the scheme in Figure 6. We first argue that there is a clustering with two clusters that is cheaper than placing all vertices in the same cluster. Let \(n=|V|\) as well as \(m=|E|\). Let \(\mathcal{P}\) be a clustering that places all vertices in a single cluster. Then,
\[\mathrm{cost}(\mathcal{P})=\frac{(3n+2)(3n+1)}{2}-\left(m+2+2\cdot\frac{n(n+1 )}{2}\right)=\frac{7n^{2}}{2}+\frac{7n}{2}-m-1,\]
as the cluster is of size \(3n+2\), there is a total of \(m+2\) edges plus the edges of the two cliques, and no edge is cut. Now assume we have a clustering \(\mathcal{P}^{\prime}\) with an inter-cluster cost of \(\chi^{\prime}\) that puts each clique in a different cluster. Then,
\[\mathrm{cost}(\mathcal{P}^{\prime}) =\chi^{\prime}+2\cdot\frac{(\frac{3n}{2}+1)(\frac{3n}{2})}{2}-\left(m+2+n(n+1)-\chi^{\prime}\right)\] \[=\frac{5n^{2}}{4}+\frac{n}{2}-m-2+2\chi^{\prime}\leqslant\frac{7n^{2}}{4}+\frac{n}{2}-m,\]
since there are at most \(\frac{n}{2}\cdot\frac{n}{2}\) inter-cluster edges between vertices of \(V\) and one inter-cluster edge from \(v\) to either \(v_{1}\) or \(v_{2}\), so \(\chi^{\prime}\leqslant\frac{n^{2}}{4}+1\). Placing all vertices in the same cluster is hence more expensive by
\[\mathrm{cost}(\mathcal{P})-\mathrm{cost}(\mathcal{P}^{\prime})\geqslant\frac{7n^{2}}{2}+\frac{7n}{2}-m-1-\left(\frac{7n^{2}}{4}+\frac{n}{2}-m\right)=\frac{7n^{2}}{4}+3n-1\]
than the above clustering with two clusters. This is positive for \(n\geqslant 2\). Thus, Fair Correlation Clustering will always return at least two clusters. Also, due to the fairness constraint and there being only two blue vertices, it creates exactly two clusters.
Further, a minimum-cost solution does not cut any vertices off the two cliques, for the following reason. As the clusters are of fixed size, by Lemma 3 we can focus on the inter-cluster cost to argue that a minimum-cost Fair Correlation Clustering only cuts edges in \(E\).
Figure 6: Graph constructed for the reduction from Bisection to a Fair Correlation Clustering instance with just 2 large clusters. The middle part corresponds to the input graph \(G\) and is colored red. \(Clique_{1}\) and \(Clique_{2}\) are both cliques of \(|V|\) red vertices and one blue vertex each.
First, note that it is never optimal to cut vertices from both cliques, as just cutting the difference from one clique cuts fewer edges. This also implies that at most \(\frac{n}{2}\) red vertices are cut from the clique as otherwise, the other cluster would have more than the required \(\frac{3n}{2}\) red vertices. So, assume \(0<a\leqslant\frac{n}{2}\) red vertices are cut from one clique. Any such solution has an inter-cluster cost of \(a\cdot(n+1-a)+\chi_{E}\), where \(\chi_{E}\) is the number of edges in \(E\) that are cut to split \(V\) into two clusters of size \(\frac{n}{2}+a\) and \(\frac{n}{2}-a\) as required to make a fair partition. We note that by not cutting the cliques and instead cutting off \(a\) vertices from the cluster of size \(\frac{n}{2}+a\), we obtain at most \(a\cdot\frac{n}{2}+\chi_{E}\) cuts. As \(\frac{n}{2}<n+1-a\), this implies that no optimal solution cuts the cliques. Hence, each optimal solution partitions \(V\) according to a minimum-cost bisection.
Thus, by solving Fair Correlation Clustering on the constructed graph, we can solve Bisection in \(G\). As, further, the constructed graph is of size polynomial in \(|V|\), we obtain our hardness result.
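For illustration, here is a minimal Python sketch of the gadget built in this proof (our own code; the vertex naming scheme, e.g. `c1_red_0`, is invented for the example).

```
from itertools import combinations

def bisection_to_fcc(n, edges):
    """Build the Fair Correlation Clustering instance from the proof above.

    Vertices 0..n-1 are the (red) Bisection instance; two cliques of n red
    vertices plus one blue vertex each are attached via their blue vertices
    to the arbitrary red vertex 0."""
    assert n % 2 == 0, "an odd n is a trivial no-instance"
    color = {v: "red" for v in range(n)}
    new_edges = set(map(frozenset, edges))
    for c in ("c1", "c2"):
        clique = [f"{c}_red_{i}" for i in range(n)] + [f"{c}_blue"]
        color.update({u: "red" for u in clique[:-1]})
        color[f"{c}_blue"] = "blue"
        new_edges |= {frozenset(e) for e in combinations(clique, 2)}
        new_edges.add(frozenset((f"{c}_blue", 0)))  # attach to the red vertex v = 0
    return color, new_edges

color, E = bisection_to_fcc(4, [(0, 1), (1, 2), (2, 3)])
print(len(color), len(E))  # 3n+2 = 14 vertices, m + n(n+1) + 2 = 25 edges
```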
## 6 Algorithms
The results from Section 5 make it unlikely that there is a general polynomial time algorithm solving Fair Correlation Clustering on trees and forests. However, we are able to give efficient algorithms for certain classes of instances.
### Simple Cases
First, we observe that Fair Correlation Clustering on bipartite graphs is equivalent to the problem of computing a maximum bipartite matching if there are just two colors that occur equally often. This is due to there being a minimum-cost fair clustering such that each cluster is of size \(2\).
**Theorem 17**.: _Computing a minimum-cost fair clustering with two colors in a ratio of \(1:1\) is equivalent to the maximum bipartite matching problem under linear-time reductions, provided that the input graph has a minimum-cost fair clustering in which each cluster has cardinality at most \(2\)._
Proof.: Let the colors be red and blue. By assumption, there is an optimum clustering for which all clusters are of size at most \(2\). Due to the fairness constraint, each such cluster consists of exactly \(1\) red and \(1\) blue vertex. By Lemma 3, the lowest cost is achieved by the lowest inter-cluster cost, i.e., when the number of clusters where there is an edge between the two vertices is maximized. This is exactly the matching problem on the bipartite graph \(G^{\prime}=(R\cup B,E^{\prime})\), with \(R\) and \(B\) being the red and blue vertices, respectively, and \(E^{\prime}=\{\{u,v\}\in E\mid u\in R\wedge v\in B\}\). After computing an optimum matching, each edge of the matching defines a cluster and unmatched vertices are packed into fair clusters arbitrarily.
For the other direction, if we are given an instance \(G^{\prime}=(R\cup B,E^{\prime})\) for bipartite matching, we color all the vertices in \(R\) red and the vertices in \(B\) blue. Then, as argued above, a minimum-cost fair clustering is a partition that maximizes the number of clusters containing an edge. As each vertex is part of exactly one cluster and all clusters consist of one vertex in \(R\) and one vertex in \(B\), this corresponds to a maximum bipartite matching in \(G^{\prime}\).
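Assuming a maximum matching has already been computed (e.g., by Hopcroft–Karp), the following Python sketch (ours, with invented names) illustrates how the proof turns it into a fair clustering.

```
def clustering_from_matching(red, blue, matching):
    """Turn a maximum matching between red and blue vertices into a fair
    clustering with clusters of size two, as described in the proof above.
    `matching` is a list of (red_vertex, blue_vertex) pairs."""
    clusters = [{r, b} for r, b in matching]
    matched = {v for e in matching for v in e}
    left_red = [v for v in red if v not in matched]
    left_blue = [v for v in blue if v not in matched]
    # Unmatched vertices are paired arbitrarily; fairness only needs the 1:1 ratio.
    clusters += [{r, b} for r, b in zip(left_red, left_blue)]
    return clusters

print(clustering_from_matching(
    red=["r1", "r2", "r3"], blue=["b1", "b2", "b3"],
    matching=[("r1", "b1"), ("r2", "b3")]))  # three clusters of one red and one blue vertex
```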
By Lemma 10, the condition of Theorem 17 is met by all bipartite graphs. The recent maxflow breakthrough [18] also gives an \(m^{1+o(1)}\)-time algorithm to compute bipartite matchings; this then transfers to Fair Correlation Clustering with color ratio \(1:1\). For Fair Correlation Clustering on forests, we can do better as the reduction in Theorem 17 again results in a forest, for which bipartite matching can be solved in linear time by standard techniques. We present the algorithm here for completeness.
**Theorem 18**.: Fair Correlation Clustering _on forests with a color ratio \(1:1\) can be solved in time \(\mathcal{O}(n)\)._
Proof.: We apply Theorem 17 to obtain a sub-forest of the input for which we have to compute a maximum matching. We do so independently for each of the trees by running the following dynamic program. We visit all vertices, but each one only after we have already visited all its children (for example by employing topological sorting). For each vertex \(v\), we compute the maximum matching \(M_{v}\) in the subtree rooted at \(v\) as well as the maximum matching \(M_{v}^{\prime}\) in the subtree rooted at \(v\) assuming \(v\) is not matched. We directly get that \(M_{v}^{\prime}\) is simply the union of the matchings \(M_{u}\) for each child \(u\) of \(v\). Further, either \(M_{v}=M_{v}^{\prime}\) or in \(M_{v}\) there is an edge between \(v\) and some child \(u\). In the latter case, \(M_{v}\) is the union of \(\{\{u,v\}\}\), \(M_{u}^{\prime}\), and the matchings \(M_{w}\) for all children \(w\neq u\). Trying out all possible choices of \(u\) and comparing them with one another and with \(M_{v}^{\prime}\) yields \(M_{v}\). In the end, the maximum matching in the tree with root \(r\) is \(M_{r}\).
Each vertex is visited once. If the matchings are not naively merged during the process but only their respective sizes are tracked and the maximum matching is retrieved after the dynamic program by using a back-tracking approach, the time complexity per vertex is linear in the number of its children. Thus, the dynamic program runs in time in \(\mathcal{O}(n)\).
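A compact Python sketch of this dynamic program (our own illustration): it only tracks matching sizes, as suggested by the running-time argument, and assumes the tree is given as an adjacency list together with a root.

```
def tree_matching_size(adj, root):
    """Size of a maximum matching in a tree, following the proof above:
    M[v]      = maximum matching size in the subtree of v,
    M_free[v] = maximum matching size in the subtree of v with v unmatched."""
    # Iterative DFS; reversing the visit order processes children before parents.
    order, stack, parent = [], [root], {root: None}
    while stack:
        v = stack.pop()
        order.append(v)
        for u in adj[v]:
            if u != parent[v]:
                parent[u] = v
                stack.append(u)
    M, M_free = {}, {}
    for v in reversed(order):
        children = [u for u in adj[v] if u != parent[v]]
        M_free[v] = sum(M[u] for u in children)
        best = M_free[v]
        for u in children:  # try matching v with one child u
            best = max(best, 1 + M_free[u] + M_free[v] - M[u])
        M[v] = best
    return M[root]

# A path 1-2-3-4-5 has a maximum matching of size 2.
adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(tree_matching_size(adj, 1))  # -> 2
```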
Next, recall that Theorem 12 states that Fair Correlation Clustering on trees with a diameter of at least 4 is \(\mathsf{NP}\)-hard. With the next theorem, we show that we can efficiently solve Fair Correlation Clustering on trees with a diameter of at most 3, so our threshold of 4 is tight unless \(\mathsf{P}=\mathsf{NP}\).
**Theorem 19**.: Fair Correlation Clustering _on trees with a diameter of at most 3 can be solved in time \(O(n)\)._
Proof.: Diameters of 0 or 1 are trivial and the case of two colors in a ratio of \(1:1\) is handled by Theorem 17. So, assume \(d>2\) to be the minimum size of a fair cluster. A diameter of 2 implies that the tree is a star. In a star, the inter-cluster cost equals the number of vertices that are not placed in the same cluster as the center vertex. By Lemma 4, every clustering of minimum cost has minimum-sized clusters. As, in a star, all these clusterings incur the same inter-cluster cost of \(n-d\), they all have the same Correlation Clustering cost by Lemma 3. Hence, outputting any fair clustering with minimum-sized clusters solves the problem. Such a clustering can be computed in time in \(\mathrm{O}(n)\).
If we have a tree of diameter 3, it consists of two adjacent vertices \(u,v\) such that every vertex \(w\in V\setminus\{u,v\}\) is connected to either \(u\) or \(v\) and no other vertex, see Figure 7. This is due to every graph of diameter 3 having a path of four vertices. Let the two in the middle be \(u\) and \(v\). The path has to be an induced path or the graph would not be a tree. We can attach other vertices to \(u\) and \(v\) without changing the diameter, but as soon as we attach a vertex elsewhere, the diameter increases.
Figure 7: Shape of every tree with diameter 3.
Further, there are no edges between vertices in \(V\setminus\{u,v\}\) as the graph would otherwise not be cycle-free.
For the clustering, there are now two possibilities, which we try out separately. Either \(u\) and \(v\) are placed in the same cluster or not. In both cases, Lemma 4 gives that all clusters are of minimal size \(d\). If \(u\) and \(v\) are in the same cluster, all clusterings into fair minimum-sized clusters incur an inter-cluster cost of \(n-d\) as all but \(d-2\) of the remaining vertices have to be cut from \(u\) and \(v\). In \(\mathrm{O}(n)\), we greedily construct such a clustering \(\mathcal{P}_{1}\). If we place \(u\) and \(v\) in separate clusters, the minimum inter-cluster cost is achieved by placing as many of their respective neighbors in their respective clusters as possible. After that, all remaining vertices are isolated and are used to make these two clusters fair and, if required, to form more fair clusters. Such a clustering \(\mathcal{P}_{2}\) is also computed in \(\mathrm{O}(n)\). We then return the cheaper clustering. This is a fair clustering of minimum cost as either \(u\) and \(v\) are placed in the same cluster or not, and in both cases, \(\mathcal{P}_{1}\) and \(\mathcal{P}_{2}\), respectively, are of minimum cost.
### Color Ratio 1:2
We now give algorithms for Fair Correlation Clustering on forests that do not require a certain diameter or degree. As a first step to solve these less restricted instances, we develop an algorithm to solve Fair Correlation Clustering on forests with a color ratio of \(1:2\).
W.l.o.g., the vertices are colored blue and red with twice as many red vertices as blue ones. We call a connected component of size 1 a _\(b\)-component_ or _\(r\)-component_, depending on whether the contained vertex is blue or red. Analogously, we apply the terms _\(br\)-component, \(rr\)-component_, and _\(brr\)-component_ to components of size 2 and 3.
#### 6.2.1 Linear Time Attempt
Because of Lemma 4, we know that in every minimum-cost fair clustering each cluster contains exactly 1 blue and 2 red vertices. Our high-level idea is to employ two phases.
In the first phase, we partition the vertices of the forest \(F\) in a way such that in every cluster there are at most 1 blue and 2 red vertices. We call such a partition a _splitting_ of \(F\). We would like to employ a standard tree dynamic program that bottom-up collects vertices to be in the same connected component and cuts edges if otherwise there would be more than 1 blue or 2 red vertices in the component. We have to be smart about which edges to cut, but as only up to 3 vertices can be placed in the topmost component, we only have to track a limited number of possibilities to find the splitting that cuts the fewest edges.
After having found that splitting, we employ a second phase, which finds the best way to assemble a fair clustering from the splitting by merging components and cutting as few additional edges as possible. As, by Lemma 3, a fair partition with the smallest inter-cluster cost has a minimum Correlation Clustering cost, this would find a minimum-cost fair clustering.
Unfortunately, the approach does not work that easily. We find that the number of cuts incurred by the second phase also depends on the number of \(br\)- and \(r\)-components.
**Lemma 20**.: _Let \(F=(V,E)\) be an \(n\)-vertex forest with colored vertices in blue and red in a ratio of \(1:2\). Suppose in each connected component (in the above sense) there is at most 1 blue vertex and at most 2 red vertices. Let \(\#(br)\) and \(\#(r)\) be the number of \(br\)- and \(r\)-components, respectively. Then, after cutting \(\max(0,\frac{\#(br)-\#(r)}{2})\) edges, the remaining connected components can be merged such that all clusters consist of exactly 1 blue and 2 red vertices. Such a set of edges can be found in time in \(\mathrm{O}(n)\). Further, when cutting less than \(\max(0,\frac{\#(br)-\#(r)}{2})\) edges, such merging is not possible._
Proof.: As long as possible, we arbitrarily merge \(b\)-components with \(rr\)-components as well as \(br\)-components with \(r\)-components. For this, no edges have to be cut. If afterwards \(r\)-components remain, then \(b\)-components remain as well, and each leftover \(b\)-component is merged with two leftover \(r\)-components, again without cuts. Otherwise, we split the remaining \(rr\)-components and merge the resulting \(r\)-components with one \(br\)-component each. This way, we incur \(\max(0,\frac{\#(br)-\#(r)}{2})\) more cuts and obtain a fair clustering as now each cluster contains two red and one blue vertex. This procedure is done in time in \(\mathrm{O}(n)\).
Further, there is no cheaper way. For each \(br\)-component to be merged without further cuts we require an \(r\)-component. There are \(\#(r)\)\(r\)-components and each cut creates either at most two \(r\)-components or one \(r\)-component while removing a \(br\)-component. Hence, \(\max(0,\frac{\#(br)-\#(r)}{2})\) cuts are required.
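The merging step can be sketched in a few lines of Python (our own illustration; names are invented). Components are given as lists of vertex colors and only the number of additional cuts is computed.

```
def extra_cuts_1_2(components):
    """Number of additional edge cuts needed to merge the components into
    clusters of 1 blue + 2 red vertices, following the lemma above.
    `components` is a list of color lists such as ['b'], ['r', 'r'], ['b', 'r']."""
    count = {"b": 0, "r": 0, "rr": 0, "br": 0, "brr": 0}
    for comp in components:
        count["".join(sorted(comp))] += 1   # e.g. ['r', 'b'] -> 'br'
    # b merges with rr, br merges with r -- both for free; every two leftover
    # br-components need one rr-component split into two r-components.
    return max(0, (count["br"] - count["r"]) // 2)

# Two br-components, one rr-component, one brr-component: one extra cut suffices.
print(extra_cuts_1_2([["b", "r"], ["b", "r"], ["r", "r"], ["b", "r", "r"]]))  # -> 1
```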
For our approach to work, the first phase has to simultaneously minimize the number of cuts as well as the difference between \(br\)- and \(r\)-components. This is, however, not easily possible. Consider the tree in Figure 8.
There, with one additional cut edge we have three fewer \(br\)-components and one more \(r\)-component. A standard tree dynamic program therefore does not suffice: when encountering the tree as a subtree of some larger forest or tree, we would have to decide between optimizing the number of cut edges and the difference between \(br\)- and \(r\)-components. There is no trivial answer here as the choice depends on how many \(br\)- and \(r\)-components are obtained in the rest of the graph. For our approach to work, we hence have to track both possibilities until we have seen the complete graph, which prevents us from achieving a linear running time.
#### The Join Subroutine
In the first phase, we might encounter situations that require us to track multiple ways of splitting various subtrees. When we reach a parent vertex of the roots of these subtrees, we join these various ways of splitting. For this, we give a subroutine called Join. We first formalize the output by the following lemma, then give an intuition on the variables, and lastly prove the lemma by giving the algorithm.
**Lemma 21**.: _Let \(R_{1},R_{2},\ldots,R_{\ell_{1}}\) for \(\ell_{1}\in\mathbb{N}_{>1}\) with \(R_{i}\in(\mathbb{N}\cup\{\infty\})^{\ell_{2}}\) for \(\ell_{2}\in\mathbb{N},i\in[\ell_{1}]\) and \(f\) be a computable function \(f\colon[\ell_{2}]\times[\ell_{2}]\to 2^{[\ell_{2}]}\). For \(x\in[\ell_{2}]\), let_
\[A_{x}=\{M\in([\ell_{2}])^{\ell_{1}}\mid x\in\widehat{f}(M[1],M[2],\ldots,M[\ell_{1}])\},\]
whereby for all \(x_{1},x_{2},\ldots\in[\ell_{2}]\)
\[\widehat{f}(x_{1},x_{2})=f(x_{1},x_{2})\]
_and for all \(2\leq k\leq\ell_{1}\)_
\[\widehat{f}(x_{1},x_{2},\ldots,x_{k})=\bigcup_{x\in\widehat{f}(x_{1},x_{2},\ldots,x_{k-1})}f(x,x_{k}).\]
_Then, an array \(R\in(\mathbb{N}\cup\{\infty\})^{\ell_{2}}\) such that \(R[x]=\min_{M\in A_{x}}\sum_{i=1}^{\ell_{1}}R_{i}[M[i]]\) for all \(x\in[\ell_{2}]\) can be computed in time in \(\mathrm{O}(\ell_{1}\cdot\ell_{2}^{2}\cdot T_{f})\), where \(T_{f}\) is the time required to compute \(f\)._
Figure 8: A tree for which the splitting with the minimum number of cuts (right) has \(3\) more \(br\)-components and \(1\) less \(r\)-component than a splitting with one more edge cut (left).
As we later reuse the routine, it is formulated more generally than required for this section. Here, for the \(1:2\) case, assume we want to join the splittings of the children \(u_{1},u_{2},\ldots,u_{\ell_{1}}\) of some vertex \(v\). For example, assume \(v\) has three children as depicted in Figure 9.
Then, for each child \(u_{i}\), let there be an array \(R_{i}\) such that \(R_{i}[x]\) is the minimum number of cuts required to obtain a splitting of the subtree \(T_{u_{i}}\) that has exactly \(x\) more \(br\)-components than \(r\)-components. For our example, assume all edges between \(v\) and its children have to be cut. We see that \(R_{1}[-1]=1\) and \(R_{1}[x]=\infty\) for \(x\neq-1\), as the only possible splitting for the subtree of \(u_{1}\) cuts only the edge to \(v\) and has one more \(r\)-component than \(br\)-components. Further, we have \(R_{2}[1]=1\) (by only cutting \(\{v,u_{2}\}\)), \(R_{2}[-1]=2\) (by cutting both edges of \(u_{2}\)), and \(R_{2}[x]=\infty\) for \(x\notin\{-1,1\}\). Last, note that \(R_{3}=R_{2}\).
The function \(f\) returns the set of indices that should be updated when merging two possibilities. When a splitting of one child's subtree has \(x_{1}\) more \(br\)-components and a splitting of another child's subtree has \(x_{2}\) more \(br\)-components, then the combination of these splittings has \(x_{1}+x_{2}\) more \(br\)-components than \(r\)-components. Hence, the only index to update is \(f(x_{1},x_{2})=\{x_{1}+x_{2}\}\). Later, we will need to update more than a single index, so \(f\) is defined to return a set instead of a single index. Note that by the definition of \(f\) and \(\widehat{f}\), each value placed in \(R[x]\) by the routine corresponds to choosing exactly one splitting from each array \(R_{i}\) such that the total difference between \(br\)-components and \(r\)-components sums up to exactly \(x\).
In our example, assume any splitting is chosen for each of the three subtrees. Let \(x_{i}\) denote the difference of \(br\)- and \(r\)-components of the chosen splitting for the subtree rooted at \(u_{i}\) for \(1\leq i\leq 3\). Then, Join sets \(R[x]\) for \(x=x_{1}+x_{2}+x_{3}\). If there are multiple ways to achieve an index \(x\), the one with the minimum number of cuts is stored in \(R[x]\). In the example, we have \(4\) possibilities, as \(x_{1}=-1\) and \(x_{2},x_{3}\in\{-1,1\}\). Note that \(x_{1}=-1,x_{2}=-1,x_{3}=1\) and \(x_{1}=-1,x_{2}=1,x_{3}=-1\) both evaluate to \(x=-1\). Hence, only one of the two combinations is stored (the one with fewer cuts, here an arbitrary one as both variants imply \(4\) cuts). For the resulting array \(R\), we have \(R[-3]=5,R[-1]=4,R[1]=3\), and \(R[x]=\infty\) for \(x\notin\{-3,-1,1\}\). Observe that the numbers of cuts in \(R\) correspond to the sums of the numbers of cuts in the subtrees for the respective choice of \(x_{i}\).
Figure 9: Exemplary graph for a Join subroutine.
We now describe how the Join subroutine is computed.
Proof of Lemma 21.: The algorithm works in an iterative manner. Assume it has found the minimum value for all indices using the first \(i-1\) arrays and they are stored in \(R^{i-1}\). It then _joins_ the \(i\)-th array by trying every index \(x_{1}\) in \(R^{i-1}\) with every index \(x_{2}\) in \(R_{i}\). Each time, for all indices \(x\in f(x_{1},x_{2})\), it sets \(R^{i}[x]\) to \(R^{i-1}[x_{1}]+R_{i}[x_{2}]\) if it is smaller than the current element there. Thereby, it tries all possible ways of combining the interim solution with \(R_{i}\) and for each index tracks the minimum that can be achieved. Formally, we give the algorithm in Algorithm 1.
```
Input: \(R_{1},R_{2},\ldots,R_{\ell_{1}}\) for \(\ell_{1}\geqslant 2\) with \(R_{i}\in(\mathbb{N}\cup\{\infty\})^{\ell_{2}}\) for \(i\in[\ell_{1}]\), and a computable function \(f\colon[\ell_{2}]\times[\ell_{2}]\to 2^{[\ell_{2}]}\).
Output: \(R\in(\mathbb{N}\cup\{\infty\})^{\ell_{2}}\) such that, for all \(x\in[\ell_{2}]\), \(R[x]=\min_{M\in A_{x}}\sum_{i=1}^{\ell_{1}}R_{i}[M[i]]\) with \(A_{x}=\{M\in([\ell_{2}])^{\ell_{1}}\mid x\in\widehat{f}(M[1],M[2],\ldots,M[\ell_{1}])\}\), \(\widehat{f}(x_{1},x_{2},\ldots,x_{k})=\bigcup_{x\in\widehat{f}(x_{1},x_{2},\ldots,x_{k-1})}f(x,x_{k})\), and \(\widehat{f}(x_{1},x_{2})=f(x_{1},x_{2})\).

\(R\leftarrow R_{1}\)
for \(i\leftarrow 2\) to \(\ell_{1}\) do
    \(R^{\prime}\leftarrow R\)
    foreach \((x_{1},x_{2})\in\left([\ell_{2}]\right)^{2}\) do
        foreach \(x\in f(x_{1},x_{2})\) do
            \(R^{\prime}[x]\leftarrow\min\left(R^{\prime}[x],R[x_{1}]+R_{i}[x_{2}]\right)\)
    \(R\leftarrow R^{\prime}\)
```
**Algorithm 1** The Join subroutine.
The algorithm terminates in time in \(\mathrm{O}(\ell_{1}\cdot\ell_{2}^{2}\cdot T_{f})\) due to the nested loops. We prove by induction that \(R\) is a solution of Join over the arrays \(R_{1},\ldots,R_{i}\) after each iteration \(i\). The first one simply tries all allowed combinations of the arrays \(R_{1},R_{2}\) and tracks the minimum value for each index, matching our definition of Join. Now assume the statement holds for some \(i\). Observe that we only update a value \(R[x]\) if there is a respective \(M\in A_{x}\), so none of the values is too small. To show that no value is too large, take any \(x\in[\ell_{2}]\) and let \(a\) be the actual minimum value that can be obtained for \(R[x]\) in this iteration. Let \(j_{1},j_{2},\ldots,j_{i+1}\) with \(x\in\widehat{f}(j_{1},j_{2},\ldots,j_{i+1})\) be the indices that obtain \(a\). Then, there is \(y\in[\ell_{2}]\) such that after joining the first \(i\) arrays the value at index \(y\) is \(a-R_{i+1}[j_{i+1}]\) and \(y\in\widehat{f}(j_{1},j_{2},\ldots,j_{i})\). This implies \(R[y]\leqslant a-R_{i+1}[j_{i+1}]\) by our induction hypothesis. Further, as both \(x\in\widehat{f}(j_{1},j_{2},\ldots,j_{i+1})\) and \(y\in\widehat{f}(j_{1},j_{2},\ldots,j_{i})\), we have \(x\in f(y,j_{i+1})\). Thus, in this iteration, \(R[x]\) is set to at most \(R[y]+R_{i+1}[j_{i+1}]\leqslant a\). With this, all values are set correctly.
Observe that in the case of \(f(x_{1},x_{2})=\{x_{1}+x_{2}\}\), which is relevant to this section, the two innermost loops compute the \((\min,+)\)-convolution of the arrays \(R\) and \(R_{i}\). Simply trying all possible combinations as done in the algorithm has a quadratic running time. This cannot be improved without breaking the MinConv Conjecture, which states that there is no algorithm computing the \((\min,+)\)-convolution of two arrays of length \(n\) in time in \(\mathrm{O}(n^{2-\varepsilon})\) for any constant \(\varepsilon>0\)[21].
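For the special case \(f(x_{1},x_{2})=\{x_{1}+x_{2}\}\) used in this section, the following Python sketch (ours) mirrors Algorithm 1 as a \((\min,+)\)-convolution; arrays are represented as dictionaries mapping an index (here: the difference of \(br\)- and \(r\)-components) to the minimum number of cuts, with missing indices standing for \(\infty\). It reproduces the worked example of Figure 9.

```
import math

def join(arrays):
    """Join subroutine for f(x1, x2) = {x1 + x2}: a (min,+)-convolution of the
    input arrays, each given as {index: minimum number of cuts}."""
    result = arrays[0]
    for nxt in arrays[1:]:
        combined = {}
        for x1, cuts1 in result.items():
            for x2, cuts2 in nxt.items():
                x = x1 + x2
                combined[x] = min(combined.get(x, math.inf), cuts1 + cuts2)
        result = combined
    return result

# The example of Figure 9: R1 forces difference -1 with 1 cut,
# R2 and R3 each allow difference +1 with 1 cut or -1 with 2 cuts.
R1 = {-1: 1}
R2 = {1: 1, -1: 2}
R3 = dict(R2)
print(join([R1, R2, R3]))  # -> {1: 3, -1: 4, -3: 5}
```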
#### The Tracking Algorithm
With the Join subroutine at hand, we are able to build a dynamic program solving Fair Correlation Clustering on forests with two colors in a ratio of \(1:2\). We first describe how to apply the algorithm to trees and then generalize it to work on forests.
In the first phase, for each possible difference between the number of \(br\)-components and \(r\)-components, we compute the minimum number of cuts to obtain a splitting with that difference. In the second phase, we find the splitting for which the sum of edges cut in the first phase and the number of edges required to turn this splitting into a fair partition is minimal. This sum is the inter-cluster cost of that partition, so by Lemma 3 this finds a fair partition with the smallest Correlation Clustering cost.
#### Splitting the tree.
In the first phase, our aim is to compute an array \(D\) such that, for all integers \(-n\leqslant x\leqslant\frac{n}{3}\), \(D[x]\subseteq E\) is a minimum-sized set of edges such that \(x=br(T-D[x])-r(T-D[x])\), where \(br(T-D[x])\) and \(r(T-D[x])\) are the number of \(br\)- and \(r\)-components in \(T-D[x]\), respectively. If no such set exists, we set \(D[x]=\mathbb{N}\), an infinitely large entry. We fill the array in a dynamic programming manner by computing an array \(D_{v}^{h}\) for each vertex \(v\) and every possible _head_ \(h\in\{\emptyset,r,b,rr,br\}\). Here, \(D_{v}^{h}[x]\) is a minimum-sized set of edges such that, upon its removal from the subtree \(T_{v}\) rooted at \(v\), there are exactly \(x\) more \(br\)-components than \(r\)-components. The head \(h\) refers to the colors in the topmost component, which is of particular interest as it might later contain vertices from outside \(T_{v}\) as well. Head \(h=r\) refers to a component with a red vertex, \(h=br\) to one with a blue and a red vertex, and so on. This component is empty (\(h=\emptyset\)) if the edge above \(v\) is cut. The head is not counted as a \(br\)-component or \(r\)-component for the computation of \(x\). Figure 10 gives examples of how a head is composed from the splittings of the children.
In the following, we only show how to compute \(\Delta_{v}^{h}[x]=|D_{v}^{h}[x]|\), the size of the set of edges to obtain a respective splitting. The set \(D_{v}^{h}[x]\) is, however, obtained by a simple backtracking approach in the same asymptotic running time. If \(D_{v}^{h}[x]=\mathbb{N}\), we have \(\Delta_{v}^{h}[x]=\infty\). We initialize all values with \(\Delta_{v}^{h}[x]=\infty\), meaning we know of no set of edges which upon removal gives that head and that difference between \(br\)- and \(r\)-components. Then, for every red leaf \(v\) we set \(\Delta_{v}^{r}[0]=0\) and \(\Delta_{v}^{\emptyset}[-1]=1\). For every blue leaf \(v\) we set \(\Delta_{v}^{b}[0]=0\) and \(\Delta_{v}^{\emptyset}[0]=1\). This concludes the computations for the leaves, as the only possibilities are to cut the edge above the leaf or not. Now suppose we have finished the computation for all children \(u_{1},u_{2},\ldots,u_{k}\) of some vertex \(v\). Observe that at most two children of \(v\) are placed in a head with \(v\). For every head \(h\in\{\emptyset,r,b,rr,br\}\) that is formable at vertex \(v\), we try all possibilities to obtain that head.
If \(h\in\{r,b\}\) and \(c(v)\) corresponds to \(h\), this is done by choosing \(\emptyset\) heads for all children. There is no unique splitting of the subtrees, however, as for each subtree rooted at some child vertex \(u_{i}\) there is a whole array \(D_{u_{i}}^{\emptyset}\) of possible splittings with different numbers of \(br\)- and \(r\)-components. To find the best choices for all child vertices, we employ the Join subroutine that, when called with \(f(x_{1},x_{2})=\{x_{1}+x_{2}\}\) and a list of arrays, returns an array \(R\) such that, for all indices \(x\), \(R[x]\) is the minimum value obtained by summing up exactly one value from each of the input arrays such that the indices of the chosen values sum up to \(x\). We hence set \(\Delta_{v}^{h}=\textsc{Join}(\Delta_{u_{1}}^{\emptyset},\ldots,\Delta_{u_{k}}^{\emptyset})\). Here and in the following, we only call the Join subroutine with at least two arrays. If we were to input only a single array, we proceed as if the Join subroutine returned that array. We note that here our indexing ranges from \(-n\) to \(\frac{n}{3}\) while the Join subroutine assumes positive indices. We hence implicitly assume that an index \(x\) here maps to the index \(x+n+1\) in the subroutine.
Figure 10: Exemplary subtree with various possibilities to obtain a head. Figures 10a and 10b show splittings with an \(rr\)-head (dark green). The choice for the heads of the children (light green) is unambiguous as the only way to obtain an \(rr\)-head is to choose the \(r\)-head for the left child and an \(\emptyset\)-head for the right one. Both the left and the right variants have to be considered as they differ in the number of \(br\)-components minus the number of \(r\)-components. The splittings in Figures 10c–10e create an \(\emptyset\)-head, as they cut the edge above the root of the subtree, so no vertices of the subtree can be part of a component with vertices outside the subtree. Out of these 3 splittings, however, only Figures 10c and 10d will be further considered as Figure 10e obtains the same difference between \(br\)- and \(r\)-components as Figure 10c but cuts one more edge. We note that other splittings obtain an \(\emptyset\)-head as well that are not listed here.
If \(h=br\) or both \(h=rr\) and \(c(v)\) corresponds to \(r\), then the heads for all children should be \(\emptyset\) except for one child that we place in the same component as \(v\). It then has a head \(h^{\prime}\in\{r,b\}\), depending on \(h\) and \(c(v)\). We have \(h^{\prime}=r\) if \(h=rr\) and \(c(v)\) corresponds to \(r\) or if \(h=br\) and \(c(v)\) corresponds to \(b\). Otherwise, \(h^{\prime}=b\). For all \(i\in[k]\), we compute an array \(\Delta^{\prime}_{u_{i}}=\textsc{Join}(\Delta^{\emptyset}_{u_{1}},\ldots,\Delta^{\emptyset}_{u_{i-1}},\Delta^{h^{\prime}}_{u_{i}},\Delta^{\emptyset}_{u_{i+1}},\ldots,\Delta^{\emptyset}_{u_{k}})\), referring to \(u_{i}\) having the non-empty head. Lastly, for all \(-n\leqslant x\leqslant\frac{n}{3}\), we set \(\Delta^{h}_{v}[x]=\min_{i\in[k]}\Delta^{\prime}_{u_{i}}[x]\).
If \(h=\emptyset\), then we have to try out all different possibilities for the component \(v\) is in and, in each case, cut the edge above \(v\). First assume we want to place \(v\) in a \(brr\)-component. Then it has to be merged with two vertices, either by taking a head \(h^{\prime}\in\{br,rr\}\) at one child or by taking heads \(h_{1},h_{2}\in\{r,b\}\) at two children. The exact choices for \(h^{\prime},h_{1},h_{2}\) of course depend on \(c(v)\). For the first option, for each child \(u_{i}\) we compute an array \(\Delta_{h^{\prime},i}=\textsc{Join}(\Delta^{\emptyset}_{u_{1}},\ldots,\Delta^{\emptyset}_{u_{i-1}},\Delta^{h^{\prime}}_{u_{i}},\Delta^{\emptyset}_{u_{i+1}},\ldots,\Delta^{\emptyset}_{u_{k}})\). For the second option, we compute the arrays \(\Delta_{i,j}=\textsc{Join}(\Delta^{\emptyset}_{u_{1}},\ldots,\Delta^{\emptyset}_{u_{i-1}},\Delta^{h_{1}}_{u_{i}},\Delta^{\emptyset}_{u_{i+1}},\ldots,\Delta^{\emptyset}_{u_{j-1}},\Delta^{h_{2}}_{u_{j}},\Delta^{\emptyset}_{u_{j+1}},\ldots,\Delta^{\emptyset}_{u_{k}})\) for all pairs of children \(u_{i},u_{j}\) of \(v\) such that \(i<j\) and \(\{v,u_{i},u_{j}\}\) is a \(brr\)-component. We now have stored the minimum number of cuts for all ways to form a \(brr\)-component with \(v\) and for all possibilities for \(x\) in the arrays \(\Delta_{h^{\prime},i}\) and \(\Delta_{i,j}\) for all possibilities of \(h^{\prime}\), \(i\), and \(j\). However, \(v\) may also be in an \(r\)-, \(b\)-, \(rr\)-, or \(br\)-component. Hence, when computing \(\Delta^{\emptyset}_{v}[x]\) we take the minimum value at position \(x\) not only among the arrays \(\Delta_{h^{\prime},i}\) and \(\Delta_{i,j}\) but also of the arrays \(\Delta^{r}_{v},\Delta^{b}_{v},\Delta^{rr}_{v}\), and \(\Delta^{br}_{v}\). Note that here we have to shift all values in \(\Delta^{r}_{v}\) to the left by one since by isolating \(v\) we create another \(r\)-component. An entry we have written into \(\Delta^{r}_{v}[x]\) hence should actually be placed in \(\Delta^{r}_{v}[x-1]\). Similarly, we have to shift \(\Delta^{br}_{v}\) to the right, since here we create a new \(br\)-component at the top of the subtree. Lastly, as long as \(v\) is not the root of \(T\), we have to increase all values in \(\Delta^{\emptyset}_{v}\) by one, reflecting the extra cut we have to make above \(v\).
After all computations are completed, by the correctness of the Join subroutine and an inductive argument, \(\Delta^{h}_{v}\) is correctly computed for all vertices \(v\) and heads \(h\). Note that in the Join subroutine, as \(f(x_{1},x_{2})\) returns the correct index for merging two subtrees, \(\widehat{f}(x_{1},x_{2},\ldots,x_{k})\) gives the correct index for merging \(k\) subtrees. In particular, \(\Delta^{\emptyset}_{r}\) is the array containing, for each \(-n\leqslant x\leqslant\frac{n}{3}\), the minimum number of edges to cut such that there are exactly \(x\) more \(br\)-components than \(r\)-components, where \(r\) is the root of \(T\). By adjusting the Join subroutine to track the exact combination that leads to the minimum value at each position, we also obtain an array \(D\) that contains not only the numbers of edges but the sets of edges one has to cut, or the entry \(\mathbb{N}\) if no such set exists.
At each node, computing the arrays takes time in \(\mathrm{O}(n^{5})\), which is dominated by computing the \(\mathrm{O}(n^{2})\) arrays \(\Delta_{i,j}\) in time \(\mathrm{O}(n^{3})\) each by Lemma 21, since \(\ell_{1},\ell_{2}\in\mathrm{O}(n)\). This phase hence takes time in \(\mathrm{O}(n^{6})\).
#### Assembling a fair clustering.
Let \(D\) be the array computed in the first phase. Note that each set of edges \(D[x]\) directly gives a splitting, namely the partition induced by the connected components in \(T-D[x]\).
By Lemma 20, the cheapest way to turn the splitting given by \(D[x]\) into a clustering of sets of \(1\) blue and \(2\) red vertices is found in linear time and incurs \(\frac{\max(0,x)}{2}\) more cuts. Hence, we find the \(-n\leqslant x\leqslant\frac{n}{3}\) for which \(|D[x]|+\max(0,\frac{x}{2})\) is minimal. We return the corresponding clustering as it has the minimum inter-cluster cost.
This phase takes only constant time per splitting if we tracked the number of components of each type in the first phase and is therefore dominated by the first phase.
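A minimal sketch of this selection step in Python (ours), assuming the array \(D\) from the first phase is given as a dictionary from the difference \(x\) to the set of cut edges; the example values are hypothetical.

```
def best_splitting(D):
    """Pick the splitting minimising cuts-so-far plus the extra max(0, x/2)
    cuts from the lemma above.  `D` maps the difference x (number of br-
    minus r-components) to the edge set cut in phase one."""
    return min(D, key=lambda x: len(D[x]) + max(0, x) // 2)

# Hypothetical phase-one output: difference 2 with 3 cuts beats difference 0 with 5 cuts.
D = {2: {"e1", "e2", "e3"}, 0: {"e1", "e2", "e3", "e4", "e5"}}
print(best_splitting(D))  # -> 2  (3 + 1 = 4 cuts in total)
```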
#### Forests.
Our algorithm is easily generalized to also solve Fair Correlation Clustering on forests that are not trees, with two colors in a ratio of \(1:2\), by slightly adapting the first phase. We run the dynamic program as described above for each individual tree. This still takes overall time in \(\mathrm{O}(n^{6})\). For each tree \(T_{i}\) in the forest, let \(\Delta_{T_{i}}^{\emptyset}\) denote the array \(\Delta_{r}^{\emptyset}\), where \(r\) is the root of tree \(T_{i}\). To find a splitting of the whole forest and not just of the individual trees, we perform an additional run of the Join subroutine using these arrays \(\Delta_{T_{i}}^{\emptyset}\) and the function \(f(x_{1},x_{2})=\{x_{1}+x_{2}\}\). This gives us an array \(R\) such that \(R[x]\) is the minimum number of cuts required to obtain a splitting with exactly \(x\) more \(br\)-components than \(r\)-components for the whole forest rather than for the individual trees. Note that we choose the \(\emptyset\)-head at each tree as the trees are not connected to each other, so in order to find a splitting we do not yet have to consider how components of different trees are merged; this is done in the second phase. The first phase then outputs an array \(D\) that contains the sets of edges corresponding to \(R\), obtained by a backtracking approach. As the additional subroutine call takes time in \(\mathrm{O}(n^{3})\), the asymptotic running time of the algorithm does not change. This gives the following result.
Fair Correlation Clustering _on forests with two colors in a ratio of \(1:2\) can be solved in time in \(\mathrm{O}(n^{6})\)._
### Small Clusters
To obtain an algorithm that handles more colors and different color ratios, we generalize our approach for the \(1:2\) color ratio case from the previous section. We obtain the following.
Let \(F\) be a forest of \(n\) vertices, each colored in one of \(k\geqslant 2\) colors. Let the colors be distributed in a ratio of \(c_{1}:c_{2}:\ldots:c_{k}\) with \(c_{i}\in\mathbb{N}_{>0}\) for all \(i\in[k]\) and \(\gcd(c_{1},c_{2},\ldots,c_{k})=1\). Then Fair Correlation Clustering on \(F\) can be solved in time in \(\mathrm{O}(n^{2\mathrm{setvars}+\mathrm{setmax}+2}\cdot\mathrm{setvars}^{\mathrm{setmax}})\), where \(\mathrm{setvars}=\prod_{i=1}^{k}(c_{i}+1)\) and \(\mathrm{setmax}=\sum_{i=1}^{k}c_{i}\).
Once more, the algorithm runs in two phases. First, it creates a list of possible splittings, i.e., partitions in which, for every color, every component has at most as many vertices of that color as a minimum-sized fair component has. In the second phase, it checks for these splittings whether they can be merged into a fair clustering. Among these, it returns the one of minimum cost. We first give the algorithm solving the problem on trees and then generalize it to also capture forests.
#### Splitting the forest.
For the first phase in the \(1:2\) approach, we employed a dynamic program that kept track of the minimum number of cuts needed to obtain a splitting for each possible cost incurred by the reassembling in the second phase. Unfortunately, if we are given a graph with \(k\geqslant 2\) colors in a ratio of \(c_{1}:c_{2}:\ldots:c_{k}\), then the number of cuts required in the second phase is not always as easily bounded by the difference of the numbers of two component types like the \(r\)- and \(br\)-components in the \(1:2\) case. However, we find that it suffices to track the minimum number of cuts required to obtain any possible coloring of a splitting.
We first bound the number of possible colorings of a splitting. As during the dynamic program we consider splittings of a subgraph of \(G\) most of the time, we also have to count all possible colorings of splittings of less than \(n\) vertices.
**Lemma 24**.: _Let \(U\) be a set of \(n\) elements, colored in \(k\in\mathbb{N}_{>1}\) colors, and let \(d_{1},d_{2},\ldots,d_{k}\in\mathbb{N}\). Let \(\mathcal{S}\) be the set of all possible partitions of subsets of \(U\) such that for every color \(i\) there are at most \(d_{i}\) vertices of that color in each cluster. Let \(\mathcal{C}\) be the set of all colorings of partitions in \(\mathcal{S}\). Then, \(|\mathcal{C}|\leqslant(n+1)^{\mathrm{setvars}-1}\), where \(\mathrm{setvars}=\prod_{i=1}^{k}(d_{i}+1)\)._
Proof.: The number of different set colorings is at most setvars, as each component contains between \(0\) and \(d_{i}\) vertices of color \(i\). Thus, a coloring of a partition \(\mathcal{P}\) using only these sets is characterized by an array of size setvars with values in \([n]\cup\{0\}\), as no set coloring occurs more than \(n\) times. There are \((n+1)^{\mathrm{setvars}}\) ways to fill such an array. However, as the set colorings together have to form a partition, the last entry is determined by the first \(\mathrm{setvars}-1\) entries, giving only \((n+1)^{\mathrm{setvars}-1}\) possibilities.
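As a small illustration of this bound (our own snippet; the values of \(n\) and the color distribution are arbitrary example inputs), the following computes setvars and the resulting upper bound \((n+1)^{\mathrm{setvars}-1}\).

```
from math import prod

def coloring_bound(n, d):
    """Upper bound (n+1)**(setvars-1) from the lemma above on the number of
    partition colorings, where setvars = prod(d_i + 1)."""
    setvars = prod(di + 1 for di in d)
    return (n + 1) ** (setvars - 1)

print(coloring_bound(9, [1, 2]))  # color ratio 1:2 -> setvars = 6, bound 10**5
```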
With this, we employ a dynamic program similar to the one presented in Subsection 6.2 but track the minimum cut cost for all colorings of splittings. It is given by the following lemma.
**Lemma 25**.: _Let \(F=(V,E)\) be a forest with vertices in \(k\) colors. Further, let \(d_{1},d_{2},\ldots,d_{k}\in\mathbb{N}\) and \(\mathcal{S}\) be the set of all possible partitions of \(V\) such that there are at most \(d_{i}\) vertices of color \(i\) in each cluster for \(i\in[k]\). Let \(\mathcal{C}\) be the set of all colorings of partitions in \(\mathcal{S}\). Then, in time in \(\mathrm{O}(n^{2\mathrm{setvars}+\mathrm{setmax}+2}\cdot\mathrm{setvars}^{ \mathrm{setmax}})\) with \(\mathrm{setvars}=\prod_{i=1}^{k}(d_{i}+1)\) and \(\mathrm{setmax}=\sum_{i=1}^{k}d_{i}\), for all \(C\in\mathcal{C}\), we find a minimum-sized set \(D_{C}\subseteq E\) such that the connected components in \(F-D_{C}\) form a partition of the vertices with coloring \(C\) or certify that there is no such set._
Proof.: We first describe how to solve the problem on a tree \(T\) and then generalize the approach to forests. We call a partition of the vertices such that for every color \(i\) there are at most \(d_{i}\) vertices of that color in each cluster a _splitting_.
We employ a dynamic program that computes the set \(D_{C}\) for the colorings of all possible splittings and all subtrees rooted at each vertex in \(T\). We do so iteratively, by starting to compute all possible splittings at the leaves and augmenting them towards the root. Thereby, the connected component that is connected to the parent of the current subtree's root is of particular importance as it is the only connected component that can be augmented by vertices outside the subtree. We call this component the _head_. Note that the head is empty if the edge between the root and its parent is cut. We do not count the head in the coloring of the splitting and only give it explicitly. Formally, for every \(v\in V\), every possible coloring of a splitting \(C\), and every possible coloring \(h\) of the head we compute \(D_{v}^{h}[C]\subseteq E\), the minimum-sized set of edges such that the connected components of \(T_{v}-D_{v}^{h}[C]\) form a splitting with coloring \(C\) and head \(h\). We set \(D_{v}^{h}[C]=\mathbb{N}\), an infinitely large set, if no such set exists.
Let all \(D_{v}^{h}[C]\) be initialized with \(\mathbb{N}\). Then, for every leaf \(v\) with parent \(w\), we set \(D_{v}^{h_{c(v)}}[C_{\emptyset}]=\emptyset\), where \(h_{c(v)}\) is the coloring of the component \(\{v\}\) and \(C_{\emptyset}\) the coloring of the partition over the empty set. Also, we set \(D_{v}^{\emptyset}[C_{c(v)}]=\{\{v,w\}\}\), where the vertex \(v\) is not placed in the head as the edge to its parent is cut. As cutting or not cutting the edge above the leaf are the only options, this part of the array is now completed.
Next, suppose we have finished the computation for all children of some vertex \(v\). For every possible coloring \(h\) of the head that is formable at vertex \(v\), we try all possibilities to obtain that coloring.
To this end, first assume \(h\) to be non-empty. Therefore, \(v\) has to be placed in the head. Let \(h_{-c(v)}\) denote the coloring obtained by decreasing \(h\) by one at color \(c(v)\). To obtain head \(h\), we hence have to choose colorings of splittings of the subtrees rooted at the children \(u_{1},u_{2},\ldots,u_{\ell}\) of \(v\) such that their respective heads \(h_{u_{1}},h_{u_{2}},\ldots,h_{u_{\ell}}\) combine to \(h_{-c(v)}\).
A _combination_ of colorings \(C_{1},C_{2},\ldots,C_{\ell}\) refers to the coloring of the union of partitions \(M_{1},M_{2},\ldots,M_{\ell}\) that have the respective colorings and is defined as the element-wise sum over the arrays \(C_{1},C_{2},\ldots,C_{\ell}\). Often, there are multiple ways to choose heads for the child vertices that fulfill this requirement. As every head is of size at most setmax and \(h\) contains \(v\), the coloring \(h_{-c(v)}\) is composed of fewer than setmax non-empty heads. As there are at most setvars possible heads and we have to choose less than setmax children, there are at most \({n\choose\mathrm{setmax}-1}\cdot\mathrm{setvars}^{\mathrm{setmax}-1}<n^{\mathrm{setmax}-1}\cdot\mathrm{setvars}^{\mathrm{setmax}-1}\) possible ways to form \(h_{-c(v)}\) with the children of \(v\). Let each way be described by a function \(H\) assigning each child of \(v\) a certain, possibly empty, head. Then, even for a fixed \(H\), there are multiple splittings possible. This stems from the fact that even if the head \(H(u)\) for a child \(u\) is fixed, there might be multiple splittings of the subtree of \(u\) with different colorings resulting in that head. For each possible \(H\), we hence employ the Join subroutine with the arrays \(D_{u}^{H(u)}\) for all children \(u\), using the cardinality of the sets as input for the subroutine. For the sake of readability, we index the arrays here by some vector \(C\) instead of a single numerical index as used in the algorithmic description of the Join subroutine. We implicitly assume that each possible coloring is represented by a positive integer. By letting these indices enumerate the vectors in a structured way, converting between the two formats only costs an additional time factor in \(\mathrm{O}(n)\).
For \(f(x_{1},x_{2})\) we give the function returning a set containing only the index of the coloring obtained by combining the colorings indexed by \(x_{1}\) and \(x_{2}\), which is computable in time in \(\mathrm{O}(n)\). Combining the colorings means, for each set coloring, summing its occurrences in both partition colorings. Thereby, \(\widehat{f}(x_{1},x_{2},\ldots,x_{k})\) as defined in the Join subroutine returns the index of the combination of the colorings indexed by \(x_{1},x_{2},\ldots,x_{k}\). Note that there are at most \(n\) arrays and each is of length less than \((n+1)^{\mathrm{setvars}-1}\), as by Lemma 24 there are at most that many different colorings. After executing the Join subroutine, by Lemma 21, we obtain an array \(D_{H}\) that contains the minimum cut cost required for all possible colorings that can be achieved by splitting according to \(H\). By modifying the Join subroutine slightly to use a simple backtracking approach, we also obtain the set \(D\subseteq E\) that achieves this cut cost. We conclude our computation of \(D_{v}^{h}\) by taking, element-wise, the minimum-sized set over all computed arrays \(D_{H}\) for the possible assignments \(H\).
If \(h\) is the empty head, i.e., the edge above \(v\) is cut, then \(v\) is placed in a component that is either of size setmax or has a coloring corresponding to some head \(h^{\prime}\). In the first case, we compute an array \(D_{\mathrm{full}}\) in the same manner as described above by trying all suitable assignments \(H\) and employing the Join subroutine. In the second case, we simply take the already filled array \(D_{v}^{h^{\prime}}\). Note that in both cases we have to increment all values in the array by one to reflect cutting the edge above \(v\), except if \(v\) is the root vertex. Also, we have to move the values in the arrays around, in order to reflect that the component containing \(v\) is no longer a head but with the edge above \(v\) cut should also be counted in the coloring of the splitting. Hence, the entry \(D_{\mathrm{full}}[C]\) is actually stored at \(D_{\mathrm{full}}[C_{-\mathrm{full}}]\) with \(C_{-\mathrm{full}}\) being the coloring \(C\) minus the coloring of a minimum-sized fair cluster. If no such entry \(D_{\mathrm{full}}[C_{-\mathrm{full}}]\) exists, we assume it to be \(\infty\). The same goes for accessing the arrays \(D_{v}^{h^{\prime}}\) where we have to subtract the coloring \(h^{\prime}\) from the index. Taking the element-wise minimum-sized element over the such modified arrays \(D_{\mathrm{full}}\) and \(D_{v}^{h^{\prime}}\) for all possibilities for \(h^{\prime}\) yields \(D_{v}^{\emptyset}\).
By the correctness of the Join subroutine and as we try out all possibilities to build the specified heads and colorings at every vertex, we thus know that after completing the computation at the root \(r\) of \(T\), the array \(D_{r}^{\emptyset}\) contains for every possible coloring of a splitting of the tree the minimum cut cost to achieve that coloring.
For each of the \(n\) vertices and the setvars possible heads, we call the Join subroutine
at most \(n^{\mathrm{setmax}-1}\cdot\mathrm{setvars}^{\mathrm{setmax}-1}\) many times. Each time, we call it with at most \(n\) arrays and, as by Lemma 24 there are \(\mathrm{O}(n^{\mathrm{setvars}})\) possible colorings, all these arrays have that many elements. Hence, each subroutine call takes time in \(\mathrm{O}(n\cdot(n^{\mathrm{setvars}})^{2})=\mathrm{O}(n^{2\mathrm{setvars}+1})\), so the algorithm takes time in \(\mathrm{O}(n^{2\mathrm{setvars}+\mathrm{setmax}+2}\cdot\mathrm{setvars}^{\mathrm{setmax}})\), including an additional factor in \(\mathrm{O}(n)\) to account for converting the indices for the Join subroutine.
When the input graph is not a tree but a forest \(F\), we apply the dynamic program on every tree in the forest. Then, we additionally run the Join subroutine with the arrays for the \(\emptyset\)-head at the roots of all trees in the forest. The resulting array contains the minimum-cost solutions of all possible combinations of colorings of splittings of the individual trees and is returned as output. The one additional subroutine call does not change the asymptotic running time.
Because of Lemmas 4 and 10, it suffices to consider as possible solutions partitions that have at most \(c_{i}\) vertices of color \(i\) in each cluster, for all \(i\in[k]\). We hence apply Lemma 25 to the forest \(F\) and set \(d_{i}=c_{i}\) for all \(i\in[k]\). This way, for every possible coloring of a splitting we find a minimum-sized set of edges to obtain a splitting with that coloring.
#### Assembling a fair clustering.
Let \(D\) be the array produced in the first phase, i.e., for every coloring \(C\) of a splitting, \(D[C]\) is a minimum-sized set of edges such that the connected components in \(F-D[C]\) induce a partition with coloring \(C\). In the second phase, we have to find the splitting that gives the minimum Correlation Clustering cost. We do so by deciding for each splitting whether it is _assemblable_, i.e., whether its clusters can be merged such that it becomes a fair solution with all clusters being no larger than setmax. Among these, we return the one with the minimum inter-cluster cost computed in the first phase.
This suffices because of the following reasons. First, note that deciding assemblability only depends on the coloring of the splitting so it does not hurt that in the first phase we tracked only all possible colorings of splittings and not all possible splittings themselves. Second, we do not have to consider further edge cuts in this phase: Assume we have a splitting \(S\) with coloring \(C_{S}\) and we would obtain a better cost by further cutting \(a\) edges in \(S\), obtaining another splitting \(S^{\prime}\) of coloring \(C_{S^{\prime}}\). However, as we filled the array \(D\) correctly, there is an entry \(D[C_{S^{\prime}}]\) and \(|D[C_{S^{\prime}}]|\leqslant|D[C_{S}]|+a\). As we will consider this value in finding the minimum anyway, there is no need to think about cutting the splittings any further. Third, the minimum inter-cluster cost yields the minimum Correlation Clustering cost by Lemma 3. When merging clusters, the inter-cluster cost computed in the first phase may decrease but not increase. If it decreases, we overestimate the cost. However, this case implies that there is an edge between the two clusters and as they are still of size at most setmax when merged, in the first phase we will also have found another splitting considering this case.
We employ a dynamic program to decide the assemblability for all possible \(\mathrm{O}(n^{\mathrm{setvars}})\) colorings of splittings. Define the _size_ of a partition coloring to be the number of set colorings in that partition coloring (not necessarily the number of different set colorings). We decide assemblability for all possible colorings of splittings from smallest to largest. Note that each such coloring is of size at least \(\frac{n}{\mathrm{setmax}}\). If it is of size exactly \(\frac{n}{\mathrm{setmax}}\), then all contained set colorings are of size setmax, so this partition coloring is assemblable if and only if all set colorings are fair. Now assume we have found all assemblable colorings of splittings of size exactly \(j\geqslant\frac{n}{\mathrm{setmax}}\). Assume a partition coloring \(C\) of size \(j+1\) is assemblable. Then, at least two set colorings \(C_{1},C_{2}\) from \(C\) are merged together. Hence, let \(C^{\prime}\) be the partition
coloring obtained by removing the set colorings \(C_{1},C_{2}\) from \(C\) and adding the set coloring of the combined coloring of \(C_{1}\) and \(C_{2}\). Now, \(C^{\prime}\) is of size \(j\) and is assemblable. Thus, every assemblable partition coloring of size \(j+1\) arises from an assemblable partition coloring of size \(j\). The other way round, if we split a set coloring of an assemblable partition coloring of size \(j\), we obtain an assemblable partition coloring of size \(j+1\). Hence, we find all assemblable colorings of splittings of size \(j+1\) by taking each assemblable partition coloring of size \(j\) (less than \(n^{\mathrm{setvars}}\) many) and trying each possible way to split one of its set colorings (less than \(j\cdot 2^{\mathrm{setmax}}\) ways, as there are \(j\) set colorings, each of size at most setmax). Thus, to compute all assemblable colorings of splittings of size \(j+1\), we need time in \(\mathrm{O}(n^{\mathrm{setvars}}\cdot j\cdot 2^{\mathrm{setmax}})\), which implies a total time for the \(n-\frac{n}{\mathrm{setmax}}\) iterations in the second phase in \(\mathrm{O}(n^{\mathrm{setvars}+2}\cdot 2^{\mathrm{setmax}})\). This is dominated by the running time of the first phase. The complete algorithm hence runs in time in \(\mathrm{O}(n^{2\mathrm{setvars}+\mathrm{setmax}+2}\cdot\mathrm{setvars}^{\mathrm{setmax}})\), which implies Theorem 4.2.
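The notion of assemblability can be illustrated with a naive brute-force check in Python (ours); it is not the bottom-up dynamic program described above and runs in exponential time, but it makes precise what it means for the set colorings of a splitting to merge into fair clusters.

```
from itertools import combinations

def assemblable(colorings, target):
    """Can the set colorings (tuples of per-color counts) be merged into
    clusters whose coloring equals `target`?  Exponential-time brute force,
    only meant to illustrate the notion of assemblability."""
    if not colorings:
        return True
    first, rest = colorings[0], colorings[1:]
    for size in range(len(rest) + 1):
        for group in combinations(range(len(rest)), size):
            total = tuple(first[i] + sum(rest[j][i] for j in group)
                          for i in range(len(target)))
            if total == target:
                remaining = [rest[j] for j in range(len(rest)) if j not in group]
                if assemblable(remaining, target):
                    return True
    return False

# Color ratio 1:2 (target: one blue, two red): (1,0)+(0,2) and (1,1)+(0,1) work.
print(assemblable([(1, 0), (0, 2), (1, 1), (0, 1)], (1, 2)))  # -> True
print(assemblable([(1, 1), (1, 1)], (1, 2)))                  # -> False
```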
This gives an algorithm that solves Fair Correlation Clustering on arbitrary forests. The running time however may be exponential in the number of vertices depending on the color ratio in the forest.
### Few Clusters
The algorithm presented in the previous section runs in polynomial time if the colors in the graph are distributed in a way such that each cluster in a minimum-cost solution is of constant size. The worst running time is obtained when there are very large but few clusters. For this case, we offer another algorithm, which runs in polynomial time if the number of clusters is constant. However, it is limited to instances where the forest is colored in two colors in a ratio of \(1:c\) for some \(c\in\mathbb{N}\).
The algorithm uses a subroutine that computes the minimum number of cuts that are required to slice off clusters of specific sizes from the tree. It is given by Lemma 4.2.
Let \(T=(V,E)\) be a tree rooted at \(r\in V\) and \(k\in\mathbb{N}\). Then, we can compute an array \(R\) such that, for each \(a_{0}\in[n]\) and \(a=a_{1},a_{2},\ldots,a_{k}\in\left([n-1]\cup\{0\}\right)^{k}\) with \(a_{i}\geqslant a_{i+1}\) for \(i\in[k-1]\) and \(\sum_{i=0}^{k}a_{i}=n\), we have that \(R[a_{0},a]\) is the partition \(\mathcal{P}=\{S_{0},S_{1},\ldots,S_{k}\}\) of \(V\) with minimum inter-cluster cost that satisfies \(r\in S_{0}\) and \(|S_{i}|=a_{i}\) for \(i\in[k]\). The computation time is in \(\mathrm{O}((k+3)!\cdot n^{2k+3})\).
Proof.: We give a construction such that \(R[a_{0},a]\) stores not the partition itself but the incurred inter-cluster cost. By a simple backtracking approach, the partitions are obtained as well.
We employ a dynamic program that involves using the Join subroutine. For the sake of readability, we index the arrays here by some vector \(a\in[n]^{k}\) and \(a_{0}\in[n]\) instead of a single numerical index as used in the algorithmic description of the Join subroutine. We implicitly assume that each possible \(a_{0},a\) is represented by some index in \([n^{k+1}]\). By letting these indices enumerate the vectors in a structured way, converting between the two formats only costs an additional time factor in \(\mathrm{O}(k)\).
Starting at the leaves and continuing at the vertices for which all children have finished their computation, we compute an array \(R_{v}\) with the properties described for \(R\) but for the subtree \(T_{v}\) for each vertex \(v\in V\). In particular, for every vertex \(v\) we do the following. Let \(R_{v}^{0}\) be an array with \(\infty\)-values at all indices except for \(R_{v}^{0}[1,(0,0,\ldots,0)]=0\), as this is the only possible entry for the tree \(T[\{v\}]\).
If \(v\) has no children, then \(R_{v}=R_{v}^{0}\). Otherwise, let the children of \(v\) be \(u_{1},u_{2},\ldots,u_{\ell}\). Then we call the Join subroutine with the arrays \(R_{v}^{0},R_{u_{1}},R_{u_{2}},\ldots,R_{u_{\ell}}\). We have to define \(f\) such that it gives all possibilities to combine \(v\) with the partitions of the children's subtrees. For all
possible values of \(a_{0},a\) and \(a^{\prime}_{0},a^{\prime}\) recall that \(f((a_{0},a),(a^{\prime}_{0},a^{\prime}))\) should return a set of indices of the form \((a^{\prime\prime}_{0},a^{\prime\prime})\). Each such index describes a combination of all possibilities for \(v\) and the already considered children \((a_{0},a)\) and the possibilities for the next child \((a^{\prime}_{0},a^{\prime})\). First, we consider the possibility to cut the edge between \(v\) and the child \(u\) that is represented by \((a^{\prime}_{0},a^{\prime})\). Then, we add all possible ways of merging the two sets with their \(k+1\) clusters each. As we cut the edge \(\{u,v\}\), there are \(k\) possible ways to place the cluster containing \(u\) (all but the cluster containing \(v\)) and then there are \(k!\) ways to assign the remaining clusters. All these are put into the set \(f((a_{0},a),(a^{\prime}_{0},a^{\prime}))\). Second, we assume the edge \(\{u,v\}\) is not cut. Then, the clusters containing \(v\) and \(u\) have to be merged, so there are only \(k!\) possible ways to assign the other clusters. In particular, for all indices \((a^{\prime\prime}_{0},a^{\prime\prime})\) put into \(f((a_{0},a),(a^{\prime}_{0},a^{\prime}))\) this way, we have \(a^{\prime\prime}_{0}=a_{0}+a^{\prime}_{0}\). Note that \(f\) can be computed in \(\mathrm{O}(k\cdot k!)\). Observe that \(\widehat{f}(x_{1},x_{2},\ldots,x_{\ell})\) as defined in the Join subroutine lists all possibilities to cut the combined tree as it iteratively combines all possibilities for the first child and the vertex \(v\) and for the resulting tree lists all possible combinations with the next child and so on. The Join subroutine takes time in \(\mathrm{O}((k+1)\cdot\left(n^{k+1}\right)^{2}\cdot(k\cdot k!)\cdot k)\), which is in \(\mathrm{O}((k+3)!\cdot n^{2k+2})\). All \(\mathrm{O}(n)\) calls of the subroutine hence take time in \(\mathrm{O}((k+3)!\cdot n^{2k+3})\).
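The Join-based bookkeeping above is intricate; to convey the flavor of the underlying tree dynamic program, the following self-contained Python sketch handles only the special case \(k=1\), i.e., computing, for every target size, the minimum number of edges separating a set \(S_{0}\) containing the root from the rest of the tree. All names and the 0-indexed encoding are our own simplifications and are not meant to match the \(\mathrm{O}((k+3)!\cdot n^{2k+3})\) procedure of the proof.

```python
def min_cuts_two_clusters(n, edges, root=0):
    """For a tree on vertices 0..n-1, return best[a] = minimum number of edges
    between S0 and S1 over all partitions {S0, S1} of the vertex set with
    root in S0 and |S0| = a (clusters need not be connected)."""
    INF = float("inf")
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    parent, children, order = [-1] * n, [[] for _ in range(n)], [root]
    parent[root] = root
    for u in order:                       # breadth-first traversal
        for w in adj[u]:
            if parent[w] == -1:
                parent[w] = u
                children[u].append(w)
                order.append(w)

    # dp[v][side][s]: min cuts inside T_v if v lies on `side` (1 = S0, 0 = S1)
    # and exactly s vertices of T_v lie in S0.
    dp = [None] * n
    for v in reversed(order):             # children before parents
        cur = [[INF, INF], [INF, INF]]
        cur[0][0] = 0                     # v in S1 contributes no S0 vertex
        cur[1][1] = 0                     # v in S0 contributes one S0 vertex
        size = 1
        for c in children[v]:             # knapsack over the children subtrees
            csize = len(dp[c][0]) - 1
            new = [[INF] * (size + csize + 1) for _ in range(2)]
            for side in range(2):
                for s in range(size + 1):
                    if cur[side][s] == INF:
                        continue
                    for cside in range(2):
                        for t in range(csize + 1):
                            if dp[c][cside][t] == INF:
                                continue
                            cost = cur[side][s] + dp[c][cside][t] + (side != cside)
                            if cost < new[side][s + t]:
                                new[side][s + t] = cost
            cur, size = new, size + csize
        dp[v] = cur
    return dp[root][1]

# Path 0-1-2-3: keeping three vertices with the root requires exactly one cut.
print(min_cuts_two_clusters(4, [(0, 1), (1, 2), (2, 3)])[3])  # 1
```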
With this, we are able to give an algorithm for graphs with two colors in a ratio of \(1:c\), which runs in polynomial time if there is only a constant number of clusters, i.e., if \(c\in\Theta(n)\).
Let \(F\) be an \(n\)-vertex forest with two colors in a ratio of \(1:c\) with \(c\in\mathbb{N}_{>0}\) and let \(p=\frac{n}{c+1}\). Then, Fair Correlation Clustering on \(F\) can be solved in \(\mathrm{O}(n^{p^{3}+p^{2}+p})\).
Proof.: Note that, if there are \(c\) red vertices per \(1\) blue vertex, \(p=\frac{n}{c+1}\) is the number of blue vertices. By Lemma 4, any minimum-cost clustering consists of \(p\) clusters, each containing exactly one blue vertex, and from Lemma 3 we know that it suffices to minimize the number of edges cut by any such clustering. All blue vertices are to be placed in separate clusters. They are separated by cutting at most \(p-1\) edges, so we try all of the \(\mathrm{O}((p-1)\cdot\binom{n-1}{p-1})\) subsets of edges of size at most \(p-1\). Having cut these edges, we have \(\ell\) trees \(T_{1},T_{2},\ldots,T_{\ell}\), with \(p\) of them containing exactly one blue vertex and the others no blue vertices. We root the trees at the blue vertex if they have one or at an arbitrary vertex otherwise. For each tree \(T_{i}\), let \(r_{i}\) be the number of red vertices. If we have exactly \(p\) trees and \(r_{i}=c\) for all \(i\in[p]\), we have found a minimum-cost clustering, where the \(i\)-th cluster is simply the set of vertices of \(T_{i}\) for all \(i\in[p]\). Otherwise, we must cut off parts of the trees and assign them to other clusters in order to make the partition fair. To this end, for each tree \(T_{i}\) we compute an array \(R_{i}\) that states the cost of cutting up to \(p-1\) parts of certain sizes off. More precisely, \(R_{i}[(a_{1},a_{2},\ldots,a_{p-1})]\) is the number of cuts required to cut off \(p-1\) clusters of size \(a_{1},a_{2},\ldots,a_{p-1}\), respectively, and \(\infty\) if there is no such way, e.g., if \(\sum_{j=1}^{p-1}a_{j}>r_{i}\). It suffices to compute \(R_{i}[(a_{1},a_{2},\ldots,a_{p-1})]\) with \(0\leqslant a_{i}\leqslant a_{i+1}\leqslant n\) for \(i\in[p-2]\).
We compute these arrays employing Lemma 26. Note that here we omitted the \(a_{0}\) used in the lemma, which here refers to the number of vertices _not_ cut from the tree. However, \(a_{0}\) is still unambiguously defined over \(a\) as all the values sum up to the number of vertices in this tree. Further, by connecting all trees without blue vertices to some newly added auxiliary vertex \(z\) and using this tree rooted at \(z\) as input to Lemma 26, we reduce the number of subroutine calls to \(p+1\). Then, the only entries from the array obtained for the all-red tree we consider are the ones with \(a_{0}=1\) as we do not want to merge \(z\) in a cluster but every vertex except \(z\) from this tree has to be merged into another cluster. We call the array obtained from this tree \(R_{0}\) and the arrays obtained for the other trees \(R_{1},R_{2},\ldots,R_{p}\), respectively.
Note that every fair clustering is characterized by choosing one entry from each array \(R_{i}\) and assigning the cut-off parts to other clusters. As each array has less than \(\frac{n^{p}}{p!}\) entries and there are at most \((p!)^{p}\) ways to assign the cut-off parts to clusters, there are at most \(n^{p^{2}}\) possibilities in total. For each of these, we compute in linear time whether they result in a fair clustering. Among these fair clusterings, we return the one with the minimum inter-cluster cost, computed by taking the sum over the chosen entries from the arrays \(R_{i}\). By Lemma 3, this clustering has the minimum Correlation Clustering cost. We obtain a total running time of
\[\operatorname{O}((p-1)\cdot\binom{n-1}{p-1}\cdot\left((p+1)\cdot\left(n^{p+3} +n^{p^{2}+p-2}\right)+n^{p^{2}+1}\right))\subseteq\operatorname{O}(n^{p^{3}+p^ {2}+p}).\qed\]
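The outer loop of this algorithm, enumerating all candidate sets of at most \(p-1\) cut edges, is straightforward; a hypothetical helper (ours, not from the text) could look as follows.

```python
from itertools import combinations

def candidate_edge_cuts(edges, p):
    """Yield every subset of at most p - 1 edges; each subset is a candidate
    set of cuts separating the p blue vertices in the outer loop above."""
    for size in range(p):                 # sizes 0, 1, ..., p - 1
        yield from combinations(edges, size)
```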
Combining the results of Theorems 23 and 27, we see that for the case of a forest with two colors in a ratio of \(1:c\) for some \(c\in\mathbb{N}_{>0}\), there are polynomial-time algorithms when the clusters are either of constant size or have sizes in \(\Theta(n)\). As Theorem 11 states that Fair Correlation Clustering on forests is NP-hard, we hence know that this hardness must emerge somewhere between the two extremes.
## 7 Relaxed Fairness
It might look like the hardness results for Fair Correlation Clustering are due to the very strict definition of fairness, which enforces clusters of a specific size on forests. However, in this section, we prove that even when relaxing the fairness requirements our results essentially still hold.
### Definitions
We use the relaxed fairness constraint as proposed by Bera et al. [11] and employed for Fair Correlation Clustering by Ahmadi et al. [1]. For the following definitions, given a set \(U\) colored by a function \(c:U\to[k]\), by \(U_{i}=\{u\in U\mid c(u)=i\}\) we denote the set of vertices of color \(i\) for all \(i\in[k]\).
[Relaxed Fair Set] Let \(U\) be a finite set of elements colored by a function \(c:U\to[k]\) for some \(k\in\mathbb{N}_{>0}\) and let \(p_{i},q_{i}\in\mathbb{Q}\) with \(0<p_{i}\leqslant\frac{|U_{i}|}{|U|}\leqslant q_{i}<1\) for all \(i\in[k]\). Then, some \(S\subseteq U\) is relaxed fair with regard to the \(q_{i}\) and \(p_{i}\) if and only if for all colors \(i\in[k]\) we have \(p_{i}\leqslant\frac{|S\cap U_{i}|}{|S|}\leqslant q_{i}\).
Note that we require \(p_{i}\) and \(q_{i}\) to be such that an exact fair solution is also relaxed fair. Further, we exclude setting \(p_{i}\) or \(q_{i}\) to \(0\) as this would allow clusters that do not include every color, which we do not consider fair.
[Relaxed Fair Partition] Let \(U\) be a finite set of elements colored by a function \(c:U\to[k]\) for some \(k\in\mathbb{N}_{>0}\) and let \(p_{i},q_{i}\in\mathbb{Q}\) with \(0<p_{i}\leqslant\frac{|U_{i}|}{|U|}\leqslant q_{i}<1\) for all \(i\in[k]\). Then, a partition \(S_{1}\cup S_{2}\cup\ldots\cup S_{\ell}=U\) is relaxed fair with regard to the \(q_{i}\) and \(p_{i}\) if and only if all sets \(S_{1},S_{2},\ldots,S_{\ell}\) are relaxed fair with regard to the \(q_{i}\) and \(p_{i}\).
While we use the above definition for our hardness results, we restrict the possibilities for the \(p_{i}\) and \(q_{i}\) for our algorithms.
[\(\alpha\)-relaxed Fair Set] Let \(U\) be a finite set of elements colored by a function \(c:U\to[k]\) for some \(k\in\mathbb{N}_{>0}\) and let \(0<\alpha<1\). Then, some \(S\subseteq U\) is \(\alpha\)-relaxed fair if and only if it is relaxed fair with regard to \(p_{i}=\frac{\alpha|U_{i}|}{|U|}\) and \(q_{i}=\frac{|U_{i}|}{\alpha|U|}\) for all \(i\in[k]\).
[\(\alpha\)-relaxed Fair Partition] Let \(U\) be a finite set of elements colored by a function \(c:U\to[k]\) for some \(k\in\mathbb{N}_{>0}\) and let \(0<\alpha<1\). Then, a partition \(S_{1}\cup S_{2}\cup\ldots\cup S_{\ell}=U\) is \(\alpha\)-relaxed fair if and only if all sets \(S_{1},S_{2},\ldots,S_{\ell}\) are \(\alpha\)-relaxed fair.
\(\alpha\)-relaxed Fair Correlation Clustering

**Input:** Graph \(G=(V,E)\), coloring \(c\colon V\to[k]\), \(0<\alpha<1\).

**Task:** Find an \(\alpha\)-relaxed fair partition \(\mathcal{P}\) of \(V\) that minimizes \(\operatorname{cost}(\mathcal{P})\).
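For concreteness, the following Python sketch checks the \(\alpha\)-relaxed fairness of a single cluster exactly as in the definitions above; the function name, the use of exact fractions, and the toy example (which is not necessarily the configuration drawn in Figure 11) are our own.

```python
from collections import Counter
from fractions import Fraction

def is_alpha_relaxed_fair(cluster_colors, all_colors, alpha):
    """Check alpha-relaxed fairness of a cluster S inside the ground set U:
    for every colour i, alpha*|U_i|/|U| <= |S ∩ U_i|/|S| <= |U_i|/(alpha*|U|)."""
    alpha = Fraction(alpha)
    total = Counter(all_colors)                 # colour counts of U
    n = sum(total.values())
    cluster = Counter(cluster_colors)           # colour counts of S
    s = sum(cluster.values())
    if s == 0:
        return False
    for colour, u_i in total.items():
        share = Fraction(cluster.get(colour, 0), s)
        if not (alpha * Fraction(u_i, n) <= share <= Fraction(u_i, n) / alpha):
            return False
    return True

# Ground set with colour ratio 1:1 (four blue, four red); a cluster of one
# blue and two red vertices is 2/3-relaxed fair but not exactly fair.
ground = ["blue"] * 4 + ["red"] * 4
print(is_alpha_relaxed_fair(["blue", "red", "red"], ground, Fraction(2, 3)))  # True
```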
### Hardness for Relaxed Fairness
The hardness result for exact fairness on paths, see Theorem 4.2, directly carries over to the relaxed fairness setting. This is because it only considers instances in which there are exactly two vertices of each color. As any relaxed fair clustering still requires at least one vertex of every color in each cluster, every relaxed fair clustering in these instances either consists of a single cluster or of two clusters, each with one vertex of every color. Hence, relaxing fairness makes no difference for these instances.
Relaxed Fair Correlation Clustering on paths is NP-hard, even when limited to instances with exactly 2 vertices of each color.
Our other hardness proofs for relaxed fairness are based on the notion that we can use similar constructions as for exact fairness and additionally prove that in these instances the minimum-cost solution has to be exactly fair and not just relaxed fair. To this end, we require a lemma giving a lower bound on the intra-cluster cost of clusterings.
Let \(G=(V,E)\) be an \(n\)-vertex \(m\)-edge graph and \(\mathcal{P}\) a partition of \(V\) with an inter-cluster cost of \(\chi\). Then, the intra-cluster cost of \(\mathcal{P}\) is at least \(\frac{n^{2}}{2|\mathcal{P}|}-\frac{n}{2}-m+\chi\). If \(|S|=\frac{n}{|\mathcal{P}|}\) for all clusters \(S\in\mathcal{P}\), then the intra-cluster cost of \(\mathcal{P}\) is exactly \(\psi=\frac{n^{2}}{2|\mathcal{P}|}-\frac{n}{2}-m+\chi\).
Proof.: We first prove the lower bound. We employ the Cauchy-Schwarz inequality, stating that for every \(\ell\in\mathbb{N}\), \(x_{1},x_{2},\ldots,x_{\ell}\), and \(y_{1},y_{2},\ldots,y_{\ell}\), we have \(\left(\sum_{i=1}^{\ell}x_{i}y_{i}\right)^{2}\leqslant\left(\sum_{i=1}^{\ell}x _{i}^{2}\right)\cdot\left(\sum_{i=1}^{\ell}y_{i}^{2}\right)\). In particular, it holds that \(\left(\sum_{i=1}^{\ell}x_{i}\right)^{2}\leqslant\ell\cdot\sum_{i=1}^{\ell}x_{i} ^{2}\). Observe that we can write the intra-cluster cost \(\psi\) of \(\mathcal{P}\) as
\[\psi =\left(\sum_{S\in\mathcal{P}}\frac{|S|\cdot(|S|-1)}{2}\right)-(m- \chi)=\frac{1}{2}\left(\sum_{S\in\mathcal{P}}|S|^{2}\right)-\left(\sum_{S\in \mathcal{P}}\frac{|S|}{2}\right)-m+\chi\] \[=\frac{1}{2}\left(\sum_{S\in\mathcal{P}}|S|^{2}\right)-\frac{n}{2 }-m+\chi.\]
By Cauchy-Schwarz, we have \(\sum_{S\in\mathcal{P}}|S|^{2}\geqslant\frac{1}{|\mathcal{P}|}\cdot\left(\sum_{S \in\mathcal{P}}|S|\right)^{2}=\frac{n^{2}}{|\mathcal{P}|}\). This bounds the intra-cluster cost from below by \(\psi\geqslant\frac{n^{2}}{2|\mathcal{P}|}-\frac{n}{2}-m+\chi\).
For the second statement, assume all clusters of \(\mathcal{P}\) to be of size \(\frac{n}{|\mathcal{P}|}\). Then, there are \(\frac{1}{2}\cdot\frac{n}{|\mathcal{P}|}\cdot\left(\frac{n}{|\mathcal{P}|}-1\right)\) pairs of vertices in each cluster. Thereby, we have
\[\psi=|\mathcal{P}|\cdot\frac{1}{2}\cdot\frac{n}{|\mathcal{P}|}\cdot\left(\frac{ n}{|\mathcal{P}|}-1\right)-(m-\chi)=\frac{n^{2}}{2|\mathcal{P}|}-\frac{n}{2}-m+\chi.\qed\]
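The identity in the second part of the lemma is easy to verify numerically; the following small Python check (our own helper names and example) computes intra- and inter-cluster costs as defined in the paper and confirms equality for equal-size clusters on a path.

```python
def intra_inter_costs(n, edges, clusters):
    """Intra-cluster cost (non-adjacent pairs inside clusters) and
    inter-cluster cost (edges between clusters) of a clustering."""
    label = {v: i for i, S in enumerate(clusters) for v in S}
    inter = sum(1 for u, v in edges if label[u] != label[v])
    intra = sum(len(S) * (len(S) - 1) // 2 for S in clusters) - (len(edges) - inter)
    return intra, inter

# Path 0-1-2-3-4-5 split into two equal clusters: the bound holds with equality.
n, edges = 6, [(i, i + 1) for i in range(5)]
clusters = [{0, 1, 2}, {3, 4, 5}]
intra, inter = intra_inter_costs(n, edges, clusters)
assert intra == n * n / (2 * len(clusters)) - n / 2 - len(edges) + inter  # 2 == 2
```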
We further show that no clustering with clusters of unequal size achieves the lower bound given by Lemma 33.
Let \(G=(V,E)\) be an \(n\)-vertex \(m\)-edge graph and \(\mathcal{P}\) a partition of \(V\) with an inter-cluster cost of \(\chi\) such that there is a cluster \(S\in\mathcal{P}\) with \(|S|=\frac{n}{|\mathcal{P}|}+a\) for some \(a\geqslant 0\). Then, the intra-cluster cost of \(\mathcal{P}\) is \(\psi\geqslant\frac{a^{2}|\mathcal{P}|}{2|\mathcal{P}|-2}+\frac{n^{2}}{2| \mathcal{P}|}-\frac{n}{2}-m+\chi\).
Proof.: If \(a=0\), the statement is implied by Lemma 33. So, assume \(a>0\). We write the intra-cluster cost as
\[\psi=\frac{1}{2}\cdot\left(\frac{n}{|\mathcal{P}|}+a\right)\cdot\left(\frac{n} {|\mathcal{P}|}+a-1\right)+\psi_{\mathrm{rest}}\]
with \(\psi_{\mathrm{rest}}\) being the intra-cluster cost incurred by \(\mathcal{P}\setminus\{S\}\). By applying Lemma 33 on \(\mathcal{P}\setminus\{S\}\), we have
\[\psi \geqslant\frac{1}{2}\cdot\left(\frac{n}{|\mathcal{P}|}+a\right) \cdot\left(\frac{n}{|\mathcal{P}|}+a-1\right)+\frac{\left(n-(\frac{n}{| \mathcal{P}|}+a)\right)^{2}}{2(|\mathcal{P}|-1)}-\frac{n-(\frac{n}{|\mathcal{ P}|}+a)}{2}-m+\chi\] \[=\frac{n^{2}}{2|\mathcal{P}|^{2}}+\frac{an}{|\mathcal{P}|}+\frac {a^{2}}{2}-\frac{n}{2|\mathcal{P}|}-\frac{a}{2}+\frac{n^{2}-2n^{2}/|\mathcal{ P}|-2an+n^{2}/|\mathcal{P}|^{2}+2a\frac{n}{|\mathcal{P}|}+a^{2}}{2|\mathcal{P}|-2}\] \[\quad-\frac{n}{2}+\frac{n}{2|\mathcal{P}|}+\frac{a}{2}-m+\chi.\]
Bringing the first summands to a common denominator of \(2|\mathcal{P}|-2\) yields
\[\psi \geqslant\left(\frac{n^{2}(|\mathcal{P}|-1)}{|\mathcal{P}|^{2}}+ \frac{an(2|\mathcal{P}|-2)}{|\mathcal{P}|}+a^{2}(|\mathcal{P}|-1)+n^{2}-\frac {2n^{2}}{|\mathcal{P}|}-2an+\frac{n^{2}}{|\mathcal{P}|^{2}}+\frac{2an}{| \mathcal{P}|}+a^{2}\right)\] \[\quad\left/(2|\mathcal{P}|-2)-\frac{n}{2}-m+\chi\right.\] \[=\left(\frac{n^{2}|\mathcal{P}|}{|\mathcal{P}|^{2}}+\frac{2an| \mathcal{P}|}{|\mathcal{P}|}+a^{2}|\mathcal{P}|+n^{2}-\frac{2n^{2}}{|\mathcal{ P}|}-2an\right)/(2|\mathcal{P}|-2)-\frac{n}{2}-m+\chi\] \[=\left(-\frac{n^{2}}{|\mathcal{P}|}+a^{2}|\mathcal{P}|+n^{2} \right)/(2|\mathcal{P}|-2)-\frac{n}{2}-m+\chi.\]
We then add \(0=-\frac{n^{2}}{2|\mathcal{P}|}\cdot\frac{2|\mathcal{P}|-2}{2|\mathcal{P}|-2}+ \frac{n^{2}}{2|\mathcal{P}|}\) and obtain
\[\psi \geqslant\left(-\frac{n^{2}}{|\mathcal{P}|}+a^{2}|\mathcal{P}|+n^ {2}-\frac{n^{2}(|\mathcal{P}|-1)}{|\mathcal{P}|}\right)/(2|\mathcal{P}|-2)+ \frac{n^{2}}{2|\mathcal{P}|}-\frac{n}{2}-m+\chi\] \[=\frac{a^{2}|\mathcal{P}|}{2|\mathcal{P}|-2}+\frac{n^{2}}{2| \mathcal{P}|}-\frac{n}{2}-m+\chi.\qed\]
Observe that, as \(|\mathcal{P}|>1\) and \(a\neq 0\), such a clustering never achieves the lower bound given by Lemma 33. In particular, this means that, for a fixed inter-cluster cost, all clusters in a minimum-cost fair clustering of a forest are of equal size. This way, we are able to transfer some hardness results obtained for exact fairness to relaxed fairness.
**Theorem 35**.: _For every choice of \(0<p_{1}\leqslant\frac{1}{c+1}\leqslant q_{1}<1\) and \(0<p_{2}\leqslant\frac{c}{c+1}\leqslant q_{2}<1\), Relaxed Fair Correlation Clustering on forests with two colors in a ratio of \(1:c\) is NP-hard. It remains NP-hard when arbitrarily restricting the shape of the trees in the forest as long as for every \(a\in\mathbb{N}\) it is possible to form a tree with \(a\) vertices._
Proof.: We reduce from 3-Partition. Recall that there are \(3p\) values \(a_{1},a_{2},\ldots,a_{3p}\) and the task is to partition them into triplets that each sum to \(B\). We construct a forest \(F\) as follows. For every \(a_{i}\) we construct an arbitrary tree of \(a_{i}\) red vertices. Further, we let there be \(p\) isolated blue vertices. Note that the ratio between blue and red vertices is \(1:B\). We now show that there is a relaxed fair clustering \(\mathcal{P}\) such that
\[\operatorname{cost}(\mathcal{P})\leqslant p\cdot\frac{B(B+1)}{2}-p(B-3)\]
if and only if the given instance is a yes-instance for 3-Partition.
If we have a yes-instance of 3-Partition, then there is a partition of the set of trees into \(p\) clusters of size \(B\). By assigning the blue vertices arbitrarily to one unique cluster each, we hence obtain an exactly fair partition, which is thus also relaxed fair. As there are no edges between the clusters and each cluster consists of \(B+1\) vertices and \(B-3\) edges, this partition has a cost of \(p\cdot\frac{B(B+1)}{2}-p(B-3)\).
For the other direction, assume there is a relaxed fair clustering \(\mathcal{P}\) such that \(\operatorname{cost}(\mathcal{P})\leqslant p\cdot\frac{B(B+1)}{2}-p(B-3)\). We prove that this clustering has to be not just relaxed fair but exactly fair. Note that \(|V|=p(B+1)\) and \(|E|=p(B-3)\). As the inter-cluster cost \(\chi\) is non-negative, by Lemma 33 the intra-cluster cost has a lower bound of
\[\psi\geqslant\frac{(p(B+1))^{2}}{2|\mathcal{P}|}-\frac{p(B+1)}{2}-p(B-3).\]
As there are exactly \(p\) blue vertices and the relaxed fairness constraint requires putting at least one blue vertex in each cluster, we have \(|\mathcal{P}|\leqslant p\). Hence,
\[\psi\geqslant\frac{p(B+1)^{2}}{2}-\frac{p(B+1)}{2}-p(B-3)=p\cdot\frac{B(B+1)}{ 2}-p(B-3)\geqslant\operatorname{cost}(\mathcal{P}).\]
This implies that the inter-cluster cost of \(\mathcal{P}\) is \(0\) and \(|\mathcal{P}|=p\). Lemma 34 then gives that all clusters in \(\mathcal{P}\) consist of exactly \(B+1\) vertices. As each of the \(p\) clusters has at least \(1\) blue vertex and there are \(p\) blue vertices in total, we know that each cluster consists of \(1\) blue and \(B\) red vertices. Since all trees are of size greater than \(\frac{B}{4}\) and less than \(\frac{B}{2}\), this implies each cluster consists of exactly one blue vertex and exactly three uncut trees with a total of \(B\) vertices. This way, such a clustering gives a solution to 3-Partition, so our instance is a yes-instance.
As the construction of the graph only takes polynomial time in the instance size, this implies our hardness result.
Indeed, we note that we obtain our hardness result for any fairness constraint that allows the exactly fair solution and enforces at least \(1\) vertex of each color in every cluster. The same holds when transferring our hardness proof for trees of diameter \(4\).
**Theorem 36**.: _For every choice of \(0<p_{1}\leqslant\frac{1}{c+1}\leqslant q_{1}<1\) and \(0<p_{2}\leqslant\frac{c}{c+1}\leqslant q_{2}<1\), Relaxed Fair Correlation Clustering on trees with diameter 4 and two colors in a ratio of \(1:c\) is NP-hard._
Proof.: We reduce from 3-Partition. We assume \(B^{2}>16p\). We can do so as we obtain an equivalent instance of 3-Partition when multiplying all \(a_{i}\) and \(B\) by the same factor, here some value in \(\mathrm{O}(p)\). For every \(a_{i}\) we construct a star of \(a_{i}\) red vertices. Further, we let there be a star of \(p\) blue vertices. We obtain a tree of diameter 4 by connecting the center \(v\) of the blue star to all the centers of the red stars. Note that the ratio between blue and red vertices is \(1:B\). We now show that there is a relaxed fair clustering \(\mathcal{P}\) such that
\[\mathrm{cost}(\mathcal{P})\leqslant\frac{pB^{2}-pB}{2}+7p-7\]
if and only if the given instance is a yes-instance for 3-Partition.
If we have a yes-instance of 3-Partition, then there is a partition of the set of stars into \(p\) clusters of size \(B\), each consisting of three stars. By assigning the blue vertices arbitrarily to one unique cluster each, we hence obtain an exact fair partition, which is thus also relaxed fair. We first compute the inter-cluster cost. We call an edge _blue_ or _red_ if it connects two blue or red vertices, respectively. We call an edge _blue-red_ if it connects a blue and a red vertex. All \(p-1\) blue edges are cut. Further, all edges between \(v\) (the center of the blue star) and red vertices are cut except for the three stars to which \(v\) is assigned. This causes \(3p-3\) more cuts, so the inter-cluster cost is \(\chi=4p-4\). Each cluster consists of \(B+1\) vertices and \(B-3\) edges, except for the one containing \(v\) which has \(B\) edges. The intra-cluster cost is
\[\psi=p\left(\frac{B(B+1)}{2}-B+3\right)-3=\frac{pB^{2}-pB}{2}+3p-3.\]
Combining the intra- and inter-cluster costs yields the desired cost of
\[\mathrm{cost}(\mathcal{P})=\chi+\psi=\frac{pB^{2}-pB}{2}+7p-7.\]
For the other direction, assume there is a relaxed fair clustering \(\mathcal{P}\) such that \(\mathrm{cost}(\mathcal{P})\leqslant\frac{pB^{2}-pB}{2}+7p-7\). We prove that this clustering is not just relaxed fair but exactly fair.
To this end, we first show \(|\mathcal{P}|=p\). Because each cluster requires one of the \(p\) blue vertices, we have \(|\mathcal{P}|\leqslant p\). Now, let \(\chi\) denote the inter-cluster cost of \(\mathcal{P}\). Note that \(|V|=p(B+1)\) and \(|E|=p(B-3)+3p+p-1=p(B+1)-1\). Then, by Lemma 33, we have
\[\psi \geqslant\frac{\left(p(B+1)\right)^{2}}{2|\mathcal{P}|}-\frac{p(B +1)}{2}-\left(p(B+1)-1\right)+\chi\] \[=\frac{p^{2}B^{2}+2p^{2}B+p^{2}}{2|\mathcal{P}|}-\frac{3p(B+1)}{2 }+1+\chi. \tag{2}\]
Note that the lower bound is decreasing in \(|\mathcal{P}|\). If we had \(|\mathcal{P}|\leqslant p-1\), then
\[\psi\geqslant\frac{p^{2}B^{2}+2p^{2}B+p^{2}}{2(p-1)}-\frac{3p(B+1)}{2}+1+\chi.\]
As the inter-cluster cost \(\chi\) is non-negative, we would thereby get
\[\mathrm{cost}(\mathcal{P}) \geqslant\frac{p^{2}B^{2}+2p^{2}B+p^{2}}{2(p-1)}-\frac{3p(B+1)}{2 }+1+\chi\] \[\geqslant\frac{p^{2}B^{2}+2p^{2}B+p^{2}}{2(p-1)}-\frac{3p^{2}B-3 pB+3p^{2}-3p}{2(p-1)}+\frac{2p-2}{2(p-1)}\] \[\geqslant\frac{p^{2}B^{2}-p^{2}B-2p^{2}+3 pB+5p-2}{2(p-1)}.\]
However, we know
\[\mathrm{cost}(\mathcal{P}) \leqslant\frac{pB^{2}-pB}{2}+7p-7\] \[=\frac{p^{2}B^{2}-pB^{2}-p^{2}B+pB+14p^{2}-14p-14p+14}{2(p-1)}\] \[=\frac{p^{2}B^{2}-pB^{2}-p^{2}B+pB+14p^{2}-28p+14}{2(p-1)}.\]
Hence, \(|\mathcal{P}|\leqslant p-1\) holds only if \(-2p^{2}+3pB+5p-2\leqslant-pB^{2}+pB+14p^{2}-28p+14\) which is equivalent to \(pB^{2}-16p^{2}+2pB+33p-16\leqslant 0\). As we assume \(B^{2}>16p\), this is always false, so \(|\mathcal{P}|=p\). Plugging this into Equation 2 yields
\[\psi\geqslant\frac{pB^{2}+2pB+p}{2}-\frac{3p(B+1)}{2}+1+\chi=\frac{pB^{2}-pB}{ 2}-p+1+\chi.\]
As \(\mathrm{cost}(\mathcal{P})=\chi+\psi\), we have
\[\frac{pB^{2}-pB}{2}-p+1+2\chi\leqslant\mathrm{cost}(\mathcal{P})\leqslant \frac{pB^{2}-pB}{2}+7p-7, \tag{3}\]
which yields \(\chi\leqslant 4p-4\).
As no two blue vertices are placed in the same cluster, the cuts between blue vertices incur an inter-cluster cost of exactly \(p-1\). To estimate the number of cut blue-red edges, let \(a\) denote the number of red center vertices placed in the cluster of the blue center vertex \(v\). Then, \(3p-a\) of the \(3p\) blue-red edges are cut. Let \(\chi_{r}\) denote the number of cut red edges. Note that \(\chi=p-1+3p-a+\chi_{r}=4p-a-1+\chi_{r}\).
We prove that \(a=3\). As \(\chi\leqslant 4p-4\) we have \(\chi_{r}-a\leqslant-3\), whence \(a\geqslant 3\). Next, we bound \(\chi_{r}\) by \(a\). Let \(\delta\in\mathbb{Z}\) be such that \(B+\delta\) is the number of red vertices in the cluster containing the blue center vertex \(v\). Then,
\[\chi_{r}\geqslant\frac{aB}{4}-(B+\delta-a)=\frac{(a-4)B}{4}-\delta+a\]
as each red center vertex is connected to at least \(\frac{B}{4}\) red leaves but in the cluster of \(v\) there is only space for \(B+\delta-a\) of them. First, assume \(\delta\leqslant 0\). This implies \(\chi_{r}-a\geqslant\frac{(a-4)B}{4}\). As we required \(\chi_{r}-a\leqslant-3\), this gives \(a<4\), as desired.
The case \(\delta\geqslant 1\) is a bit more involved. From Lemma 34, \(p=|\mathcal{P}|\), and \(m=n-1=p(B+1)-1\), we get
\[\psi\geqslant\frac{\delta^{2}|\mathcal{P}|}{2|\mathcal{P}|-2}+\frac{\left(p(B+ 1)\right)^{2}}{2|\mathcal{P}|}-\frac{p(B+1)}{2}-m+\chi=\frac{\delta^{2}p}{2p-2 }+\frac{pB^{2}+2pB+p}{2}-\frac{3p(B+1)}{2}+\chi+1.\]
This yields
\[\frac{\delta^{2}p}{2p-2}+\frac{pB^{2}-pB}{2}-p+2\chi+1\leqslant\mathrm{cost}( \mathcal{P})\leqslant\frac{pB^{2}-pB}{2}+7p-7.\]
We derive from this inequality that \(\chi\leqslant 4p-4-\frac{\delta^{2}p}{4p-4}\) and \(\chi_{r}-a\leqslant-3-\frac{\delta^{2}p}{4p-4}\) implying
\[\frac{(a-4)B}{4}-\delta\leqslant-3-\frac{\delta^{2}p}{4p-4}\]
The right-hand side is decreasing in \(\delta\), and by plugging in the minimum value for the case \(\delta\geqslant 1\), we finally get \(\frac{(a-4)B}{4}\leqslant-2-\frac{p}{4p-4}\). This shows that \(a<4\) must hold here as well.
Thus, we have proven \(a=3\), which also gives \(\chi_{r}=0\) and \(\chi=4p-4\). So, not only do we have that \(\text{cost}(\mathcal{P})\leqslant\frac{pB^{2}-pB}{2}+7p-7\) but \(\text{cost}(\mathcal{P})=\frac{pB^{2}-pB}{2}+7p-7\). In Equation 3 we see that for \(\chi=4p-4\) this hits exactly the lower bound established by Lemma 33. Hence, by Lemma 34, this implies that all clusters consist of exactly \(1\) blue and \(B\) red vertices and the clustering is exactly fair.
As \(\chi_{r}=0\), all red stars are complete. Given that every red star is of size at least \(\frac{B}{4}\) and at most \(\frac{B}{2}\), this means each cluster consists of exactly three complete red stars with a total number of \(B\) red vertices each and hence yields a solution to the 3-Partition instance. As the construction of the graph only takes polynomial time in the instance size and the constructed tree is of diameter \(4\), this implies our hardness result.
In the hardness proofs in this section, we argued that for the constructed instances clusterings that are relaxed fair, but not exactly fair would have a higher cost than exactly fair ones. However, this is not generally true. It does not even hold when limited to paths and two colors in a \(1:1\) ratio, as illustrated in Figure 11.
Because of this, we have little hope to provide a general scheme that transforms all our hardness proofs from Section 5 to the relaxed fairness setting at once. Thus, we have to individually prove the hardness results in this setting as done for Theorems 35 and 36. We are optimistic that the other hardness results still hold in this setting, especially as the construction for Theorem 13 is similar to the ones employed in this section. We leave the task of transferring these results to future work.
### Algorithms for Relaxed Fairness
We are also able to transfer the algorithmic result of Theorem 23 to a specific \(\alpha\)-relaxed fairness setting. We exploit that the algorithm does not really depend on exact fairness but on the fact that there is an upper bound on the cluster size, which allows us to compute respective splittings. In the following, we show that such upper bounds also exist for \(\alpha\)-relaxed fairness with two colors in a ratio of \(1:1\) and adapt the algorithm accordingly. To compute the upper bound, we first prove Lemma 37, which analogously to Lemma 4 bounds the size of clusters but in uncolored forests. Using this lemma, with Lemma 38, we then prove an upper bound on the cluster size in minimum-cost \(\alpha\)-relaxed fair clusterings for forests with two colors in ratio \(1:1\).
Let \(F=(V,E)\) be an \(n\)-vertex \(m\)-edge forest and let \(\mathcal{P}_{1}=\{V\}\). Further, let \(S\subset V\) with \(4<|S|\leqslant n-3\) and let \(\mathcal{P}_{2}=\{S,V\setminus S\}\). Then, \(\text{cost}(\mathcal{P}_{1})>\text{cost}(\mathcal{P}_{2})\).
Figure 11: Exemplary path with a color ratio of \(1:1\) where there is a \(\frac{2}{3}\)-relaxed fair clustering of cost \(3\) (marked by the orange lines) and the cheapest exactly fair clustering costs \(4\).

Proof.: We have \(\text{cost}(\mathcal{P}_{1})=\frac{n(n-1)}{2}-m\) as there are \(\frac{n(n-1)}{2}\) pairs of vertices and \(m\) edges, none of which is cut by \(\mathcal{P}_{1}\). In the worst case, \(\mathcal{P}_{2}\) cuts all of the at most \(n-1\) edges in the forest. It has one cluster of size \(|S|\) and one of size \(n-|S|\), so
\[\operatorname{cost}(\mathcal{P}_{2}) \leqslant n-1+\frac{(n-|S|)(n-|S|-1)}{2}+\frac{|S|(|S|-1)}{2}-(m-(n-1))\] \[=\frac{n(n-1)}{2}+\frac{-2n|S|+|S|^{2}+|S|}{2}+\frac{|S|^{2}-|S|}{2}-m+2n-2\] \[=\frac{n(n-1)}{2}-n|S|+|S|^{2}-m+2n-2.\]
Then, we have
\[\operatorname{cost}(\mathcal{P}_{1})-\operatorname{cost}(\mathcal{P}_{2}) \geqslant n|S|-|S|^{2}-2n+2\geqslant(|S|-2)n-|S|^{2}+2.\]
Note that the bound is increasing in \(n\). As we have, \(n\geqslant|S|+3\) and \(|S|>4\), this gives
\[\operatorname{cost}(\mathcal{P}_{1})-\operatorname{cost}(\mathcal{P}_{2}) \geqslant(|S|-2)(|S|+3)-|S|^{2}+2=|S|-4>0.\qed\]
With the knowledge of when it is cheaper to split a cluster, we now prove that also for \(\alpha\)-relaxed Fair Correlation Clustering there is an upper bound on the cluster size in minimum-cost solutions in forests. The idea is to assume a cluster of a certain size and then argue that we can split it in a way that reduces the cost and keeps \(\alpha\)-relaxed fairness.
Let \(F\) be a forest with two colors in a ratio of \(1:1\). Let \(0<\alpha<1\) and let \(\hat{\alpha}\in\mathbb{N}\) be minimal such that \(\frac{2\hat{\alpha}}{\alpha}\in\mathbb{N}\) and \(\frac{2\hat{\alpha}}{\alpha}>4\). Then, if \(\mathcal{P}\) is a minimum-cost \(\alpha\)-relaxed fair clustering on \(F\), we have \(|S|<4\frac{\hat{\alpha}}{\alpha^{2}}\) for all \(S\in\mathcal{P}\).
Proof.: Assume otherwise, i.e., there is a cluster \(S\) with \(|S|\geqslant 4\frac{\hat{\alpha}}{\alpha^{2}}\). Let \(b\) and \(r\) denote the number of blue and red vertices in \(S\), respectively, and assume w.l.o.g. that \(b\leqslant r\). Because \(|S|\geqslant 4\frac{\hat{\alpha}}{\alpha^{2}}\) we have \(\frac{\alpha}{2}\geqslant\frac{2\hat{\alpha}}{\alpha|S|}\). Due to the \(\alpha\)-relaxed fairness constraint, this yields \(\frac{b}{|S|}\geqslant\frac{2\hat{\alpha}}{\alpha|S|}\) and thereby \(r\geqslant b\geqslant\frac{2\hat{\alpha}}{\alpha}\).
Then, consider the clustering obtained by splitting off \(\hat{\alpha}\) blue and \(\frac{2\hat{\alpha}}{\alpha}-\hat{\alpha}\) red vertices from \(S\) into a new cluster \(S_{1}\) and let \(S_{2}=S\setminus S_{1}\). Note that we choose \(\hat{\alpha}\) in a way that this is possible, i.e., that both sizes are natural numbers. As the cost induced by all edges with at most one endpoint in \(S\) remains the same and the cost induced by the edges with both endpoints in \(S\) decreases, as shown in Lemma 3.2, the new clustering is cheaper than \(\mathcal{P}\). As we now prove that the new clustering is also \(\alpha\)-relaxed fair, this contradicts the optimality of \(\mathcal{P}\).
We first prove the \(\alpha\)-relaxed fairness of \(S_{1}\). Regarding the blue vertices, we have a portion of \(\frac{\hat{\alpha}}{\hat{\alpha}+\frac{2\hat{\alpha}}{\alpha}-\hat{\alpha}}=\frac{\alpha}{2}\) in \(S_{1}\), which fits the \(\alpha\)-relaxed fairness constraint. Regarding the red vertices, we have \(\frac{\frac{2\hat{\alpha}}{\alpha}-\hat{\alpha}}{\hat{\alpha}+\frac{2\hat{\alpha}}{\alpha}-\hat{\alpha}}=1-\frac{\alpha}{2}\), which fits the \(\alpha\)-relaxed fairness constraint as \(0<\alpha<1\), so \(1-\frac{\alpha}{2}\geqslant\frac{\alpha}{2}\) and \(1-\frac{\alpha}{2}=\frac{2\alpha-\alpha^{2}}{2\alpha}\leqslant\frac{1}{2\alpha}\).
Now we prove the \(\alpha\)-relaxed fairness of \(S_{2}\). The portion of blue vertices in \(S_{2}\) is \(\frac{b-\hat{\alpha}}{r+b-\frac{2\hat{\alpha}}{\alpha}}\), so we have to show that this value lies between \(\frac{\alpha}{2}\) and \(\frac{1}{2\alpha}\). We start with showing the value is at least \(\frac{\alpha}{2}\) by proving \(\frac{\alpha}{2}\cdot\left(r+b-\frac{2\hat{\alpha}}{\alpha}\right)\leqslant b-\hat{\alpha}\). As \(S\) is \(\alpha\)-relaxed fair, we have \(r\leqslant\frac{2b}{\alpha}-b\) because otherwise \(\frac{b}{b+r}<\frac{b}{b+\frac{2b}{\alpha}-b}=\frac{\alpha}{2}\). Hence, we have
\[\frac{\alpha}{2}\cdot\left(r+b-\frac{2\hat{\alpha}}{\alpha}\right)\leqslant \frac{\alpha}{2}\cdot\left(\frac{2b}{\alpha}-b+b-\frac{2\hat{\alpha}}{\alpha} \right)=b-\hat{\alpha}.\]
Similarly, we show the ratio is at most \(\frac{1}{2\alpha}\) by proving the equivalent statement of \(2\alpha(b-\hat{\alpha})\leqslant r+b-\frac{2\hat{\alpha}}{\alpha}\). As we assume \(r\geqslant b\), we have
\[r+b-\frac{2\hat{\alpha}}{\alpha}\geqslant 2b-\frac{2\hat{\alpha}}{\alpha} \geqslant 2\left(b-\frac{\hat{\alpha}}{\alpha}-\left((1-\alpha)b+(\alpha^{2}-1) \frac{\hat{\alpha}}{\alpha}\right)\right)=2\alpha\left(b-\hat{\alpha}\right).\]
The second step holds because we assumed \(b\geqslant\frac{2\hat{\alpha}}{\alpha}\geqslant\frac{\alpha\hat{\alpha}+\hat{\alpha}}{\alpha}=\frac{\frac{\hat{\alpha}}{\alpha}-\alpha\hat{\alpha}}{1-\alpha}\), so we have \((1-\alpha)b+(\alpha^{2}-1)\frac{\hat{\alpha}}{\alpha}\geqslant 0\). Now, we regard the portion of red vertices in \(S_{2}\), which is \(\frac{r-\left(\frac{2\hat{\alpha}}{\alpha}-\hat{\alpha}\right)}{r+b-\frac{2\hat{\alpha}}{\alpha}}\). We know that \(r\geqslant\frac{2\hat{\alpha}}{\alpha}\), that is, \((1-\alpha)r\geqslant\frac{2\hat{\alpha}}{\alpha}-2\hat{\alpha}\) or, in other words, \(r-\left(\frac{2\hat{\alpha}}{\alpha}-\hat{\alpha}\right)\geqslant\alpha r-\hat{\alpha}\). As \(r\geqslant b\), this implies
\[r-\left(\frac{2\hat{\alpha}}{\alpha}-\hat{\alpha}\right)\geqslant\frac{\alpha}{2}\cdot\left(r+b-\frac{2\hat{\alpha}}{\alpha}\right)\]
and therefore \(\frac{r-\left(\frac{2\hat{\alpha}}{\alpha}-\hat{\alpha}\right)}{r+b-\frac{2 \hat{\alpha}}{\alpha}}\geqslant\frac{\alpha}{2}\).
It remains to prove that this ratio is also at most \(\frac{1}{2\alpha}\). We have \(r\geqslant\frac{2\hat{\alpha}}{\alpha}-\hat{\alpha}\), which is equivalent to
\[\left(2\alpha-1-\frac{\alpha}{2-\alpha}\right)r\leqslant 4\hat{\alpha}-2 \alpha\hat{\alpha}-\frac{2\hat{\alpha}}{\alpha}.\]
Note that \(2\alpha-1-\frac{\alpha}{2-\alpha}=-\frac{2\alpha^{2}-4\alpha+2}{2-\alpha}=-\frac{2(\alpha-1)^{2}}{2-\alpha}<0\) and that \(r\leqslant\frac{2b}{\alpha}-b\) gives \(b\geqslant\frac{r}{\frac{2}{\alpha}-1}=\frac{\alpha r}{2-\alpha}\). With this, the above inequality implies
\[(2\alpha-1)r-b\leqslant 4\hat{\alpha}-2\alpha\hat{\alpha}-\frac{2\hat{\alpha} }{\alpha}\]
From this, we finally arrive at \(2\alpha\cdot\left(r-\left(\frac{2\hat{\alpha}}{\alpha}-\hat{\alpha}\right) \right)\leqslant r+b-\frac{2\hat{\alpha}}{\alpha}\), that is, \(\frac{r-\left(\frac{2\hat{\alpha}}{\alpha}-\hat{\alpha}\right)}{r+b-\frac{2 \hat{\alpha}}{\alpha}}\leqslant\frac{1}{2\alpha}\).
This proves that both \(S_{1}\) and \(S_{2}\) are \(\alpha\)-relaxed fair. As splitting \(S\) into \(S_{1}\) and \(S_{2}\) remains \(\alpha\)-relaxed fair and is cheaper, this contradicts \(S\) being in a minimum-cost \(\alpha\)-relaxed fair clustering.
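The quantities \(\hat{\alpha}\) and the bound \(4\frac{\hat{\alpha}}{\alpha^{2}}\) from the lemma can be computed mechanically for rational \(\alpha\); a small sketch (function name ours, and it assumes \(\alpha\) is given as an exact rational):

```python
from fractions import Fraction

def cluster_size_bound(alpha):
    """Return (alpha_hat, 4*alpha_hat/alpha**2) with alpha_hat minimal such
    that 2*alpha_hat/alpha is a natural number greater than 4."""
    alpha = Fraction(alpha)
    assert 0 < alpha < 1
    alpha_hat = 1
    while True:
        q = 2 * alpha_hat / alpha
        if q.denominator == 1 and q > 4:
            return alpha_hat, 4 * alpha_hat / alpha ** 2
        alpha_hat += 1

print(cluster_size_bound(Fraction(1, 2)))   # (2, Fraction(32, 1))
```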
We are now able to adapt the algorithm presented in Subsection 6.3 to solve Relaxed Fair Correlation Clustering on forests with two colors in a ratio of \(1:1\). While the original algorithm exploited that any optimum solution has fair clusters of minimum size, with Lemma 38 we are able to bound the clusters also in the \(\alpha\)-relaxed setting.
Like the original algorithm, we first create a list of possible splittings. However, these splittings can contain not only components with one or two vertices, as we know would suffice for exact fairness with two colors in a \(1:1\) ratio, but each component may contain up to \(4\frac{\hat{\alpha}}{\alpha^{2}}\) vertices, with \(\hat{\alpha}\) being the smallest natural number such that \(\frac{2\hat{\alpha}}{\alpha}\in\mathbb{N}\) and \(\frac{2\hat{\alpha}}{\alpha}>4\) as defined in Lemma 38. In the following, we set \(d=4\frac{\hat{\alpha}}{\alpha^{2}}\) to refer to this maximum size of a cluster. In the second phase, the algorithm checks which of these splittings can be merged into an \(\alpha\)-relaxed fair clustering and among these returns the one of minimum cost.
### Splitting the forest.
To get the optimal way to obtain a splitting of each possible coloring, we simply apply Lemma 25 and set \(d_{1}=d_{2}=d\), as we know the optimum solution consists of clusters with no more than \(d\) vertices of either color. This phase takes time in \(\mathrm{O}(n^{2(d+1)^{2}+2d+2}\cdot\left((d+1)^{2}\right)^{2d})=\mathrm{O}(n^{2d^{2}+6d+4}\cdot(d+1)^{4d})\).
### Assembling a fair clustering.
In the second phase, we have to find a splitting in \(D_{r}^{\emptyset}\) that can be transformed into an \(\alpha\)-relaxed fair clustering and yields the minimum Correlation Clustering cost. As we tracked the minimum inter-cluster cost for each possible partition coloring of splittings in
the first phase, we do not have to consider cutting more edges in this phase, because for the resulting splitting's coloring we have already tracked a minimum inter-cluster cost. Hence, the only questions are whether a splitting is _assemblable_, i.e., whether its components can be merged such that it becomes an \(\alpha\)-relaxed fair clustering, and, if so, what the cheapest way to do so is.
Regarding the first question, observe that the _assemblability_ only depends on the partition coloring of the splitting. Hence, it does not hurt that in the first phase we tracked only all possible partition colorings of splittings and not all possible splittings themselves. First, note that the coloring of a splitting may itself yield an \(\alpha\)-relaxed fair clustering. We mark all such partition colorings as assemblable, taking time in \(\mathrm{O}(n^{d^{2}+1})\). For the remaining partition colorings, we employ the following dynamic program.
Recall that the size of a partition coloring refers to the number of set colorings it contains (not necessarily the number of different set colorings). We decide assemblability for all possible partition colorings from smallest to largest. Note that each partition coloring is of size at least \(\lceil\frac{n}{d}\rceil\). If it is of size exactly \(\lceil\frac{n}{d}\rceil\), then there are no two set colorings that can be merged and still be of size at most \(d\), as all other set colorings are of size at most \(d\). Hence, in this case, a splitting is assemblable if and only if it is already an \(\alpha\)-relaxed fair clustering so we have already marked the partition colorings correctly. Now, assume that we decided assemblability for all partition colorings of size \(i\geqslant\lceil\frac{n}{d}\rceil\). We take an arbitrary partition coloring \(C\) of size \(i+1\), which is not yet marked as assemblable. Then, it is assemblable if and only if at least two of its set colorings are merged together to form an \(\alpha\)-relaxed fair clustering. In particular, it is assemblable if and only if there are two set colorings \(C_{1},C_{2}\) in \(C\) such that the coloring \(C^{\prime}\) obtained by removing the set colorings \(C_{1},C_{2}\) from \(C\) and adding the set coloring of the combined coloring of \(C_{1}\) and \(C_{2}\) is assemblable. Note that \(C^{\prime}\) is of size \(i\). Given all assemblable partition colorings of size \(i\), we therefore find all assemblable partition colorings of size \(i+1\) by for each partition coloring of size \(i\) trying each possible way to split one of its set colorings into two. As there are at most \(i^{d^{2}}\) partition colorings of size \(i\), this takes time in \(\mathrm{O}(i^{d^{2}}\cdot i\cdot 2^{d})\). The whole dynamic program then takes time in \(\mathrm{O}(n^{d^{2}+1}\cdot 2^{d})\subseteq\mathrm{O}(n^{d^{2}+d+1})\).
It remains to answer how we choose the assembling yielding the minimum cost. In the algorithm for exact fairness, we do not have to worry about that, as there we could assume that the Correlation Clustering cost only depends on the inter-cluster cost. Here, this is not the case as the \(\alpha\)-relaxed fairness allows clusters of varying size, so Lemma 3 does not apply. However, recall that we can write the Correlation Clustering cost of some partition \(\mathcal{P}\) of the vertices as \(\sum_{S\in\mathcal{P}}\frac{|S|(|S|-1)}{2}-m+2\chi\), where \(m\) is the number of edges and \(\chi\) is the inter-cluster cost. The cost hence only depends on the inter-cluster cost and the sizes of the clusters, which in turn depend on the partition coloring. To compute the cost of a splitting, we take the inter-cluster cost computed in the first phase for \(\chi\). Once more, we neglect decreasing inter-cluster cost due to the merging of clusters, as the resulting splitting is also considered in the array produced in the first phase. By an argument based on the Cauchy-Schwarz Inequality, we see that merging clusters only increases the value of \(\sum_{S\in\mathcal{P}}\frac{|S|(|S|-1)}{2}\) as we have fewer but larger squares. Hence, the cheapest cost obtainable from a splitting which is itself \(\alpha\)-relaxed fair is the cost of this very clustering. If a splitting is assemblable but not \(\alpha\)-relaxed fair itself, the sum is the minimum among all the values of the sums of the \(\alpha\)-relaxed fair clusterings it can be merged into. This value is easily computed by not only passing down assemblability but also the value of this sum in the dynamic program described above and taking the minimum if there are multiple options for a splitting. This does not change the running time asymptotically, and the running time of the second phase is dominated by the one of the first phase.
The complete algorithm hence runs in time in \(\mathrm{O}(n^{2d^{2}+6d+4}\cdot(d+1)^{4d})\).
Let \(F\) be an \(n\)-vertex forest in which the vertices are colored with two colors in a ratio of \(1:1\). Then \(\alpha\)-relaxed Fair Correlation Clustering on \(F\) can be solved in time in \(\mathrm{O}(n^{2d^{2}+6d+4}\cdot(d+1)^{4d})\), where \(d=4\frac{\hat{\alpha}}{\alpha^{2}}\) and \(\hat{\alpha}\in\mathbb{N}\) is minimal such that \(\frac{2\hat{\alpha}}{\alpha}\in\mathbb{N}\) and \(\frac{2\hat{\alpha}}{\alpha}>4\).
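The cost bookkeeping of the assembling phase, writing the cost of an assembled clustering in terms of its cluster sizes and the inter-cluster cost of the splitting, is a one-liner; the following helper (name ours) mirrors the formula used above.

```python
def cost_from_splitting(cluster_sizes, m, chi):
    """Correlation Clustering cost of an assembled clustering: the sum of
    |S|(|S|-1)/2 over the clusters, minus the number m of edges of the forest,
    plus twice the inter-cluster cost chi of the underlying splitting."""
    return sum(s * (s - 1) // 2 for s in cluster_sizes) - m + 2 * chi

# The path example from above: clusters of sizes 3 and 3, m = 5 edges, chi = 1.
print(cost_from_splitting([3, 3], 5, 1))   # 3 = intra-cluster 2 + inter-cluster 1
```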
We are confident that Lemma 3.3 can be generalized such that for an arbitrary number of colors in arbitrary ratios the maximum cluster size is bounded by some function in \(\alpha\) and the color ratio. Given the complexity of this lemma for the \(1:1\) case, we leave this task open to future work. If such a bound is proven, then the algorithmic approach employed in Theorem 3.3 is applicable to arbitrarily colored forests. Similarly, bounds on the cluster size in the more general relaxed fair clusterings can be proven. As an intermediate solution, we note that for Relaxed Fair Correlation Clustering we can employ the approach used for \(\alpha\)-relaxed Fair Correlation Clustering by choosing \(\alpha\) small enough (i.e., relaxing far enough) that all allowed solutions are contained and filtering out solutions that do not match the relaxed fairness constraint in the assembling phase. We do not give this procedure explicitly here, as we suspect that for these cases it is more promising to calculate the precise upper bound on the maximum cluster size and perform the algorithm accordingly instead of reducing to the \(\alpha\)-relaxed variant.
## 8 Approximations
So far, we have concentrated on finding an optimal solution to Fair Correlation Clustering in various instances. Approximation algorithms, which efficiently find near-optimum though not necessarily optimum solutions, are often used as a remedy for hard problems, for example the 2.06-approximation to (unfair) Correlation Clustering [17]. In this section, we find that just taking any fair clustering is a quite close approximation, and the approximation becomes even closer to the optimum as the minimum size of any fair cluster, as given by the color ratio, increases.
Formally, a problem is an optimization problem if for every instance \(I\) there is a set of permissible solutions \(S(I)\) and an objective function \(m\colon S(I)\to\mathbb{R}_{>0}\) assigning a score to each solution. Then, some \(S\in S(I)\) is an optimal solution if it has the highest or lowest score among all permissible solutions, depending on the problem definition. We call the score of this solution \(m^{\star}(I)\). For example, for Fair Correlation Clustering, the instance is given by a graph with colored vertices, every fair clustering of the vertices is a permissible solution, the score is the Correlation Clustering cost, and the objective is to minimize this cost.8 An \(\alpha\)-approximation for an optimization problem is an algorithm that, for each instance \(I\), outputs a permissible solution \(S\in S(I)\) such that \(\frac{1}{\alpha}\leqslant\frac{m(S)}{m^{\star}(I)}\leqslant\alpha\). For Fair Correlation Clustering in particular, this means the algorithm outputs a fair clustering with a cost of at most \(\alpha\) times the minimum clustering cost.
Footnote 8: We note that the clustering cost could be \(0\), which contradicts the definition \(m\colon S(I)\to\mathbb{R}_{>0}\). However, every \(0\)-cost clustering simply consists of the connected components of the graph. We do not consider those trivial instances.
APX is the class of problems that admit an \(\alpha\)-approximation with \(\alpha\in\mathrm{O}(1)\). A polynomial-time approximation scheme (PTAS) is an algorithm that, for each optimization problem instance and each parameter \(\varepsilon>0\), computes a \((1+\varepsilon)\)-approximation for a minimization problem or a \((1-\varepsilon)\)-approximation for a maximization problem in time in \(\mathrm{O}(n^{f(\varepsilon)})\), for some computable function \(f\) depending only on \(\varepsilon\). We use PTAS to refer to the class of
optimization problems admitting a PTAS. An optimization problem \(L\) is called APX-hard if every problem in APX has a PTAS-reduction to \(L\), i.e., a PTAS for \(L\) implies there is a PTAS for every problem in APX. If \(L\) is additionally in APX itself, \(L\) is called APX-complete. By definition, we have \(\mathsf{PTAS}\subseteq\mathsf{APX}\). Further, \(\mathsf{PTAS}\neq\mathsf{APX}\) unless \(\mathsf{P}=\mathsf{NP}\).
We find that taking _any_ fair clustering of a forest yields a good approximation.
Let \(F\) be an \(n\)-vertex \(m\)-edge forest with \(k\geqslant 2\) colors in a ratio of \(c_{1}:c_{2}:\ldots:c_{k}\) and \(d=\sum_{i=1}^{k}c_{i}\geqslant 4\). Then, there is a \(\frac{\left(d^{2}-d\right)n+2dm}{(d^{2}-5d+4)n+2dm}\)-approximation for Fair Correlation Clustering on \(F\) computable in time in \(\mathrm{O}(n)\).
Proof.: By first sorting the vertices by color and then iteratively adding the next \(c_{i}\) vertices of each color \(i\) to the next cluster, we obtain a fair clustering \(\mathcal{P}\) with clusters of size \(d\) in linear time. In the worst case, \(\mathcal{P}\) cuts all \(m\) edges. Hence, by Lemma 3, we have
\[\mathrm{cost}(\mathcal{P})\leqslant\frac{(d-1)n}{2}-m+2m=\frac{(d-1)n}{2}+m.\]
We compare this cost to that of a minimum-cost fair clustering \(\mathcal{P}^{*}\). By Lemma 4, \(\mathcal{P}^{*}\) has to consist of clusters of size \(d\). Each of the \(\frac{n}{d}\) clusters contains at most \(d-1\) edges due to the forest structure. Hence, at most \(\frac{n}{d}\cdot(d-1)\) edges are placed inside a cluster. Then, for the inter-cluster cost, we have \(\chi\geqslant m-\frac{n}{d}\cdot(d-1)=\frac{n}{d}-n+m\). Lemma 3 gives
\[\mathrm{cost}(\mathcal{P}^{*})\geqslant\frac{(d-1)n}{2}-m+2\left(\frac{n}{d}- n+m\right)=\frac{(d-5)n}{2}+\frac{2n}{d}+m.\]
Thereby, \(\mathcal{P}\) yields an \(\alpha\)-approximation to Fair Correlation Clustering, where
\[\alpha =\left(\frac{(d-1)n}{2}+m\right)/\left(\frac{(d-5)n}{2}+\frac{2n }{d}+m\right)\] \[=\left(\frac{\left(d^{2}-d\right)n+2dm}{2d}\right)/\left(\frac{ \left(d^{2}-5d+4\right)n+2dm}{2d}\right)=\frac{\left(d^{2}-d\right)n+2dm}{ \left(d^{2}-5d+4\right)n+2dm}.\qed\]
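The clustering used in this proof can be produced by a few lines of code; the following Python sketch (function name and vertex/colour encoding are ours) performs the sort-by-colour construction and assumes the colour counts follow the ratio exactly, as in the theorem.

```python
from collections import defaultdict

def any_fair_clustering(colors, ratio):
    """Return some fair clustering of the vertices in linear time: sort the
    vertices by colour and fill clusters of size d = sum(ratio) with c_i
    vertices of colour i each.  colors maps vertex -> colour index."""
    buckets = defaultdict(list)
    for v, col in colors.items():
        buckets[col].append(v)
    d, n = sum(ratio), len(colors)
    clusters = []
    for j in range(n // d):
        cluster = []
        for col, c_i in enumerate(ratio):
            cluster.extend(buckets[col][j * c_i:(j + 1) * c_i])
        clusters.append(cluster)
    return clusters

# Six vertices, colour ratio 1:2 (d = 3): two clusters of one blue, two red.
colors = {0: 0, 1: 1, 2: 1, 3: 0, 4: 1, 5: 1}
print(any_fair_clustering(colors, (1, 2)))  # [[0, 1, 2], [3, 4, 5]]
```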
Observe that \(\alpha\) is decreasing in \(d\) for \(d\geqslant 4\) and converges to \(1\) as \(d\to\infty\). Further, for \(d=5\) we obtain \(\alpha=\frac{20n+10m}{4n+10m}<5\). Thus, for \(d\geqslant 5\) we have a \(5\)-approximation to Fair Correlation Clustering on forests. For \(d=4\), \(\alpha\) becomes linear in \(\frac{n}{m}\), and for smaller \(d\) it is not necessarily positive or not even defined if \(\left(d^{2}-5d+4\right)n+2dm=0\). This is because if there are very small clusters, then in forests there are solutions of almost no cost. If \(d=2\), i.e., there are two colors in a \(1:1\) ratio, there are even forests with a cost of \(0\), namely the ones where all vertices have degree \(1\) and each edge connects \(2\) vertices of different colors. A solution cutting every edge is then much worse than an optimum solution. If the factor becomes negative or undefined, this is due to us bounding the inter-cluster cost of the optimum clustering by \(\frac{n}{d}-n+m\), which is possibly negative, while the inter-cluster cost is guaranteed to be non-negative.
On trees, however, if the clusters are small, even an optimum solution has to cut some edges, as now there are always edges between the clusters. Hence, in this case, we obtain a good approximation for all possible \(d\). Note that the proof of Theorem 4 does not really require \(d\geqslant 4\), but for \(d<4\) the approximation factor is just not helpful or not defined. This changes if we assume the forest to be a tree and plug in \(m=n-1\).
Let \(T\) be an \(n\)-vertex tree with \(k\geqslant 2\) colors in a ratio of \(c_{1}:c_{2}:\ldots:c_{k}\) and \(d=\sum_{i=1}^{k}c_{i}\). Then, there is a \(\frac{\left(d^{2}+d\right)n-2d}{\left(d^{2}-3d+4\right)n-2d}\)-approximation to Fair Correlation Clustering on \(T\) that is computed in time in \(\mathrm{O}(n)\).
Now, the approximation factor is still decreasing in \(d\) and converges to \(1\) as \(d\to\infty\). However, it is positive and defined for all \(d\geqslant 2\). For \(d=2\) we obtain \(\frac{6n-4}{2n-4}<3\). Therefore, we have a \(3\)-approximation to Fair Correlation Clustering on trees.
Nevertheless, our results for forest suffice to place Fair Correlation Clustering in \(\mathsf{APX}\) and even in \(\mathsf{PTAS}\). First, for \(d\geqslant 5\) we have a \(5\)-approximation to Fair Correlation Clustering on forests. If \(d\leqslant 4\), a minimum-cost fair clustering is found on the forest in polynomial time by Theorem 4. Hence, Fair Correlation Clustering on forests is in \(\mathsf{APX}\). Next, recall that the larger the minimum fair cluster size \(d\), the better the approximation becomes. Recall that our dynamic program for Theorem 4 has better running time the smaller the value \(d\). By combining these results, we obtain a \(\mathsf{PTAS}\) for Fair Correlation Clustering on forests. This contrasts Fair Correlation Clustering on general graphs, as even unfair Correlation Clustering is \(\mathsf{APX}\)-hard there [16] and therefore does not admit a \(\mathsf{PTAS}\) unless \(\mathsf{P}=\mathsf{NP}\).
There is a \(\mathsf{PTAS}\) for Fair Correlation Clustering on forests. Moreover, a \((1+\varepsilon)\)-approximate fair clustering can be computed in time \(\mathsf{O}(n^{\mathsf{poly}(1/\varepsilon)})\).
Proof.: If \(d\leqslant 4\), we find a minimum-cost fair clustering in polynomial time by Theorem 4. Else, if \(\frac{\left(d^{2}-d\right)n+2dm}{\left(d^{2}-5d+4\right)n+2dm}\leqslant 1+\varepsilon\), it suffices to return any fair clustering by Theorem 4. Otherwise, we have \(d\geqslant 5\) and
\[1+\varepsilon<\frac{\left(d^{2}-d\right)n+2dm}{\left(d^{2}-5d+4\right)n+2dm}< \frac{\left(d^{2}-d\right)n}{\left(d^{2}-5d\right)n}=\frac{d-1}{d-5}.\]
It follows that \(d-5+d\varepsilon-5\varepsilon<d-1\), which simplifies to \(d<\frac{4}{\varepsilon}+5\). Hence, by Theorem 4, we find a minimum-cost fair clustering in time in \(\mathsf{O}(n^{f(\varepsilon)})\) for some computable function \(f\) independent of \(n\). In all cases, we find a fair clustering with a cost of at most \(1+\varepsilon\) times the minimum Correlation Clustering cost and take time in \(\mathsf{O}(n^{f(\varepsilon)})\), giving a \(\mathsf{PTAS}\).
To show that \(f\) is in fact bounded by a polynomial in \(\nicefrac{{1}}{{\varepsilon}}\), we only need to look at the third case (otherwise \(f\) is constant). The bound \(d<\frac{4}{\varepsilon}+5\) and \(d=\sum_{i=1}^{k}c_{i}\) together imply that the number of colors \(k\) is constant w.r.t. \(n\). Under this condition, the exponent of the running time in Theorem 4 is a polynomial in \(d\) and thus in \(\nicefrac{{1}}{{\varepsilon}}\).
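The case distinction of this proof translates directly into a small dispatcher; the sketch below (ours; it returns only a label rather than running the subroutines) mirrors the three cases.

```python
def ptas_choice(n, m, d, eps):
    """Mirror the case distinction of the proof: decide whether the exact
    dynamic program or the linear-time 'any fair clustering' construction
    already gives a (1 + eps)-approximation."""
    if d <= 4:
        return "exact dynamic program"
    ratio = ((d * d - d) * n + 2 * d * m) / ((d * d - 5 * d + 4) * n + 2 * d * m)
    if ratio <= 1 + eps:
        return "any fair clustering suffices"
    # otherwise d < 4/eps + 5, so the exact dynamic program is fast enough
    return "exact dynamic program"
```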
|
2307.00546 | The full automorphism groups of general position graphs | Let $S$ be a non-empty finite set. A flag of $S$ is a set $f$ of non-empty
proper subsets of $S$ such that $X\subseteq Y$ or $Y\subseteq X$ for all
$X,Y\in f$. The set $\{|X|:X\in f\}$ is called the type of $f$. Two flags $f$
and $f'$ are in general position with respect to $S$ if $X\cap Y=\emptyset$ or
$X\cup Y=S$ for all $X\in f$ and $Y\in f'$. For a fixed type $T$, Klaus Metsch
defined the general position graph $\Gamma(S,T)$ whose vertices are the flags
of $S$ of type $T$ with two vertices being adjacent when the corresponding
flags are in general position. In this paper, we characterize the full
automorphism groups of $\Gamma(S,T)$ in the case that $|T|=2$. In particular,
we solve an open problem proposed by Klaus Metsch. | Junyao Pan | 2023-07-02T11:41:03Z | http://arxiv.org/abs/2307.00546v1 | # The full automorphism groups of general position graphs
# The full automorphism groups of general position graphs
**Dedicated to my father Hongqi Pan's 75th birthday**
Footnote 0: Junyao Pan. E-mail addresses: [email protected]
Junyao Pan
School of Sciences, University of Wuxi, Wuxi, Jiangsu,
214105 People's Republic of China
**Abstract:** Let \(S\) be a non-empty finite set. A flag of \(S\) is a set \(f\) of non-empty proper subsets of \(S\) such that \(X\subseteq Y\) or \(Y\subseteq X\) for all \(X,Y\in f\). The set \(\{|X|:X\in f\}\) is called the type of \(f\). Two flags \(f\) and \(f^{\prime}\) are in general position with respect to \(S\) if \(X\cap Y=\emptyset\) or \(X\cup Y=S\) for all \(X\in f\) and \(Y\in f^{\prime}\). For a fixed type \(T\), Klaus Metsch defined the general position graph \(\Gamma(S,T)\) whose vertices are the flags of \(S\) of type \(T\) with two vertices being adjacent when the corresponding flags are in general position. In this paper, we characterize the full automorphism groups of \(\Gamma(S,T)\) in the case that \(|T|=2\). In particular, we solve an open problem proposed by Klaus Metsch.
**Keywords**: Flag; General Position Graph; Automorphism Group.
Mathematics Subject Classification: 20B25, 05D05.
## 1 Introduction
Throughout this paper, \([n]=\{1,2,...,n\}\) denotes the standard \(n\)-element set and \(\overline{A}\) stands for the complement of \(A\) in \([n]\) where \(A\subseteq[n]\). Moreover, \(C^{k}_{m}=\frac{m(m-1)\cdots(m-k+1)}{k!}\) expresses the binomial coefficient.
For two positive integers \(n\) and \(k\) with \(n\geq 2k\), the _Kneser graph_\(KG(n,k)\) has as vertices the \(k\)-subsets of \([n]\) with edges defined by disjoint pairs of \(k\)-subsets. It is well-known that the problem of the independence number of \(KG(n,k)\) reduces to the famous Erdos-Ko-Rado Theorem. From this perspective, Klaus Metsch [8] introduced the _general position graph_ of flags of \([n]\) to generalize the famous Erdos-Ko-Rado Theorem. A _flag_ of \([n]\) is a set \(f\) of non-empty proper subsets of \([n]\) such that \(A\subseteq B\) or \(B\subseteq A\) for all \(A,B\in f\). The set \(\{|A|:A\in f\}\) is called the type of \(f\). Two flags \(f\) and \(f^{\prime}\) are in _general position_ with respect to \([n]\) if \(A\cap B=\emptyset\) or \(A\cup B=[n]\) for all \(A\in f\) and \(B\in f^{\prime}\). For a fixed type \(T\subseteq[n-1]\) the _general position graph_ whose vertices are the flags of \([n]\) of type \(T\) with two vertices being adjacent when the corresponding flags are in general position will be abbreviated as \(\Gamma(n,T)\). If \(|T|=1\), then this graph is isomorphic to a corresponding Kneser
graph. Klaus Metsch [8] not only described the independence number of \(\Gamma(n,T)\) in some situations but also proposed several interesting open problems, such as the following one:
**Question 1.1** ([8, Problem 5] ) Is it true that the graphs \(\Gamma(n,T)\) have \(S_{n}\) as automorphism group, where \(T=\{a,b\}\) with \(n\geq a+b+1\) and \(a<\frac{n}{2}<b\)? Can [8, Remark 5.13] be used to show this?
Let \(\Gamma=(V,E)\) be an undirected graph with vertex set \(V\) and edge set \(E\). If there exists a bijection \(\alpha\) from \(V\) to \(V\) such that \((f^{\alpha},g^{\alpha})\in E\) if and only if \((f,g)\in E\) for all \(f,g\in V\), then \(\alpha\) is called an automorphism of \(\Gamma\). Let \(Aut(\Gamma)\) denote the full automorphism group of \(\Gamma\). The study of automorphism groups of graphs has long been an interesting topic for many scholars in group theory and graph theory; see for example [3, 4, 6, 7, 9, 11]. This motivates our interest in Question 1.1.
We now review some notions and notation about permutation groups; for details see [1, 2]. Let \(G\) be a transitive permutation group acting on \([n]\). A non-empty subset \(\Delta\) of \([n]\) is called a _block_ for \(G\) if for each \(\alpha\in G\) either \(\Delta^{\alpha}=\Delta\) or \(\Delta^{\alpha}\cap\Delta=\emptyset\). Clearly, the singletons \(\{i\}\) (\(i\in[n]\)) and \([n]\) are blocks, and so these blocks are called the _trivial_ blocks. Any other block is called _nontrivial_. Put \(\Sigma=\{\Delta^{\alpha}:\alpha\in G\}\) where \(\Delta\) is a block of \(G\). We call \(\Sigma\) the system of blocks containing \(\Delta\). Clearly, \(G\) induces a permutation group acting on \(\Sigma\), denoted by \(G|_{\Sigma}\). In addition, there exists a natural homomorphism from \(G\) to \(G|_{\Sigma}\), and the kernel of this homomorphism consists of all permutations in \(G\) which fix every block in \(\Sigma\). In this note, we distinguish three cases to construct systems of blocks of \(Aut(\Gamma(n,T))\) acting on flags, and we further show that the kernels of the corresponding homomorphisms are all trivial. Thus, we give a positive answer to Question 1.1.
## 2 Preliminaries
Let \({\cal F}_{n}^{T}\) denote the set of all flags of type \(T\) of \([n]\). In other words, \({\cal F}_{n}^{T}\) is the vertex set of \(\Gamma(n,T)\). Moreover, we set \({\cal F}_{n}^{(T|A)}=\{f\in{\cal F}_{n}^{T}:\ A\in f\}\), where \(A\subseteq[n]\) and \(|A|\in T\). Let \(f\) and \(g\) be two flags in \({\cal F}_{n}^{T}\). If there exists an edge between \(f\) and \(g\) in \(\Gamma(n,T)\), then \(f\) and \(g\) are called _neighbours_ (see [10]). In addition, \(N(f)\) stands for the collection of all neighbours of vertex \(f\). Here, we state a well-known fact that is the key idea in solving Question 1.1.
**Fact 2.1**: Let \(f,g\in{\cal F}_{n}^{T}\). Then \(|N(f)^{\alpha}|=|N(f^{\alpha})|\) and \(|N(f)\cap N(g)|=|N(f^{\alpha})\cap N(g^{\alpha})|\) for every \(\alpha\in Aut(\Gamma(n,T))\).
For convenience, we set \(N(f,g)=N(f)\cap N(g)\) and \(N_{m}(n|T)=\max\{|N(f,g)|:f,g\in{\cal F}_{n}^{T}\}\). Next, we characterize \(N_{m}(n|T)\) in some situations.
**Proposition 2.2**: Let \(f=\{A,B\}\) and \(g=\{C,D\}\) be two flags in \({\cal F}_{n}^{T}\) with \(|A|=|C|=a\) and \(|B|=|D|=b\), where \(T=\{a,b\}\) with \(a+b+1\leq n\) and \(a<\frac{n}{2}<b\). Then the following statements hold.
(i) If \(b>\frac{2n}{3}\), then \(|N(f,g)|=N_{m}(n|T)\) if and only if \(A=C\) and \(|B\cap D|=b-1\). In this case, \(N_{m}(n|T)=C^{a}_{n-b-1}C^{2b-n-1}_{b-a-1}\).
(ii) If \(b<\frac{2n}{3}\), then \(|N(f,g)|=N_{m}(n|T)\) if and only if \(B=D\) and \(|A\cap C|=a-1\). In this case, \(N_{m}(n|T)=C^{a}_{n-b}C^{2b-n}_{b-a-1}\).
(iii) If \(b=\frac{2n}{3}\), then \(|N(f,g)|=N_{m}(n|T)\) if and only if either \(A=C\), \(|B\cap D|=b-1\) or \(B=D\), \(|A\cap C|=a-1\). In this case, \(N_{m}(n|T)=C^{a}_{n-b-1}C^{2b-n-1}_{b-a-1}=C^{a}_{n-b}C^{2b-n}_{b-a-1}\).
**Proof** Let \(h=\{G,H\}\in{\cal F}^{T}_{n}\) with \(|G|=a\) and \(|H|=b\). Since \(a+b+1\leq n\) and \(a<\frac{n}{2}<b\), it follows that \(h\in N(f,g)\) if and only if \(H\cap A=\emptyset\), \(H\cap C=\emptyset\), \(H\cup B=H\cup D=[n]\), \(G\cap B=\emptyset\) and \(G\cap D=\emptyset\). In other words, \(h\in N(f,g)\) if and only if \(G\subseteq[n]\setminus(B\cup D)\) and \(\overline{B\cap D}\subseteq H\) and \(H\cap(A\cup C)=\emptyset\). Clearly, the number of the choices of \(G\) is \(C^{a}_{n-|B\cup D|}\). Consider \(H\). Due to \(\overline{B\cap D}\subseteq H\) and \(H\cap(A\cup C)=\emptyset\), it follows that \(A\cup C\subseteq B\cap D\) and otherwise \(N(f,g)=\emptyset\). Note that the number of the choices of \(H\) is equal to the number of the choices of \((b-|\overline{B\cap D}|)\)-subsets from \((B\cap D)\setminus(A\cup C)\). Therefore, the number of the choices of \(H\) is \(C^{b-|\overline{B\cap D}|}_{|B\cap D|-|A\cup C|}\). Additionally, it is clear that \(|B\cup D|=2b-|B\cap D|\) and \(|\overline{B\cap D}|=n-|B\cap D|\). Thus, we have deduced that
\[|N(f,g)|=C^{a}_{n-2b+|B\cap D|}C^{b+|B\cap D|-n}_{|B\cap D|-|A\cup C|}=C^{a}_{ n-2b+|B\cap D|}C^{n-b-|A\cup C|}_{|B\cap D|-|A\cup C|}. \tag{2.1}\]
Here, we point out that any abnormal situation implies \(N(f,g)=\emptyset\), where an abnormal situation means \(|B\cap D|-|A\cup C|<0\) or \(n-2b+|B\cap D|<a\), and so on. Our goal is to find \(N_{m}(n|T)\), and so we do not discuss when \(N(f,g)=\emptyset\) holds.
Fixing \(|A\cup C|\geq a\). The equality 2.1 shows that \(|N(f,g)|\) increases with the increase of \(|B\cap D|\). Likewise, fixing \(|B\cap D|\leq b\), \(|N(f,g)|\) decreases with the increase of \(|A\cup C|\). Therefore, \(N_{m}(n|T)\) occurs in two possible situations. One is that \(|A\cup C|=a\) and \(|B\cap D|=b-1\), and the other is that \(|A\cup C|=a+1\) and \(|B\cap D|=b\). If \(|A\cup C|=a\) and \(|B\cap D|=b-1\), then by equality 2.1 we deduce that
\[|N(f,g)|=C^{a}_{n-b-1}C^{2b-n-1}_{b-a-1}. \tag{2.2}\]
If \(|A\cup C|=a+1\) and \(|B\cap D|=b\), then by equality 2.1 we infer that
\[|N(f,g)|=C^{a}_{n-b}C^{2b-n}_{b-a-1}. \tag{2.3}\]
Compare the equalities (2.2) and (2.3). We see that \(\frac{(\ 2.2\ )}{(\ 2.3\ )}=\frac{C^{a}_{n-b-1}C^{2b-n-1}_{b-a-1}}{C^{a}_{n-b}C^{2b-n}_{b-a-1}}=\frac{2b-n}{n-b}\). Obviously, if \(b>\frac{2n}{3}\), then \(\frac{(\ 2.2\ )}{(\ 2.3\ )}>1\) and so \(N_{m}(n|T)=C^{a}_{n-b-1}C^{2b-n-1}_{b-a-1}\). In this case, \(|N(f,g)|=N_{m}(n|T)\) if and only if \(A=C\) and \(|B\cap D|=b-1\). If \(b<\frac{2n}{3}\), then \(\frac{(\ 2.2\ )}{(\ 2.3\ )}<1\) and thus \(N_{m}(n|T)=C^{a}_{n-b}C^{2b-n}_{b-a-1}\). In this case, \(|N(f,g)|=N_{m}(n|T)\) if and only if \(B=D\) and \(|A\cap C|=a-1\). Additionally, if \(b=\frac{2n}{3}\) then \(\frac{(\ 2.2\ )}{(\ 2.3\ )}=1\) and thus \(|N(f,g)|=N_{m}(n|T)\) if and only if either \(A=C\) and \(|B\cap D|=b-1\) or \(B=D\) and \(|A\cap C|=a-1\). In this case, \(N_{m}(n|T)=C^{a}_{n-b-1}C^{2b-n-1}_{b-a-1}=C^{a}_{n-b}C^{2b-n}_{b-a-1}\). \(\Box\)
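These counts are easy to check by brute force for small parameters. The following Python sketch (only a sanity check, not part of the argument) recovers the predicted value of \(N_{m}(n|T)\) for \(n=7\) and \(T=\{1,5\}\), a case where \(b>\frac{2n}{3}\).

```python
from itertools import combinations

def flags(n, a, b):
    """All flags {A, B} of type {a, b}: A a subset of B, |A| = a, |B| = b."""
    ground = range(1, n + 1)
    return [(frozenset(A), frozenset(B))
            for B in combinations(ground, b) for A in combinations(B, a)]

def general_position(f, g, n):
    """Every X in f and Y in g are disjoint, or their union is [n]."""
    full = frozenset(range(1, n + 1))
    return all(not (X & Y) or (X | Y) == full for X in f for Y in g)

def max_common_neighbours(n, a, b):
    V = flags(n, a, b)
    N = {f: {g for g in V if g != f and general_position(f, g, n)} for f in V}
    return max(len(N[f] & N[g]) for f in V for g in V if f != g)

# n = 7, T = {1, 5}: here b > 2n/3, and Proposition 2.2 (i) predicts
# N_m = C^a_{n-b-1} C^{2b-n-1}_{b-a-1} = binom(1,1) * binom(3,2) = 3.
print(max_common_neighbours(7, 1, 5))  # 3
```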
**Remark 2.3**: Let \(f,g\in{\cal F}^{T}_{n}\) such that \(|N(f,g)|=N_{m}(n|T)\). Then \(|N(f^{\alpha},g^{\alpha})|=N_{m}(n|T)\) for any \(\alpha\in Aut(\Gamma(n,T))\).
**Proof** This remark follows from Fact 2.1. \(\Box\)
By Proposition 2.2 (iii), we see that if \(b=\frac{2n}{3}\) then \(N_{m}(n|T)\) occurs in two cases. This urges us to further study this situation. Now we investigate the second maximum of \(|N(f,g)|\) and so we define \(N_{sm}(n|T)=\max\{|N(f,g)|:f,g\in{\cal F}^{T}_{n},|N(f,g)|<N_{m}(n|T)\}\).
**Proposition 2.4**: Let \(f=\{A,B\}\) and \(g=\{C,D\}\) be two flags in \({\cal F}^{T}_{n}\) with \(|A|=|C|=a\) and \(|B|=|D|=b\), where \(T=\{a,b\}\) with \(a\leq\frac{n}{3}-1\) and \(b=\frac{2n}{3}\). Then, \(|N(f,g)|=N_{sm}(n|T)\) if and only if \(|A\cup C|=a+1\) and \(|B\cup D|=b+1\).
**Proof** According to the equality 2.1, we see that \(|N(f,g)|=N_{sm}(n|T)\) occurs in three possible cases, those are, \(|A\cup C|=a+1\), \(|B\cup D|=b+1\) or \(|A\cup C|=a+2\), \(|B\cup D|=b\) or \(|A\cup C|=a\), \(|B\cup D|=b+2\). Next, we start to compute \(|N(f,g)|\) in three cases respectively.
If \(|A\cup C|=a+1\) and \(|B\cup D|=b+1\), then \(|B\cap D|=b-1\). In this case, by equality 2.1, we infer that
\[|N(f,g)|=C^{a}_{n-b-1}C^{2b-n-1}_{b-a-2}. \tag{2.4}\]
Similarly, if \(|A\cup C|=a\) and \(|B\cup D|=b+2\), then
\[|N(f,g)|=C^{a}_{n-b-2}C^{2b-n-2}_{b-a-2}; \tag{2.5}\]
and if \(|A\cup C|=a+2\) and \(|B\cup D|=b\) then
\[|N(f,g)|=C^{a}_{n-b}C^{2b-n}_{b-a-2}. \tag{2.6}\]
Compare the three equalities (2.4), (2.5) and (2.6). We deduce that
\[\frac{(\ 2.5\ )}{(\ 2.4\ )}=\frac{C^{a}_{n-b-2}C^{2b-n-2}_{b-a-2}}{C^{a}_{n-b -1}C^{2b-n-1}_{b-a-2}}=\frac{(n-b-a-1)(2b-n-1)}{(n-b-1)(n-b-a)}=\frac{n-b-a-1} {n-b-a}<1\ \mbox{and}\]
\[\frac{(\ 2.6\ )}{(\ 2.4\ )}=\frac{C^{a}_{n-b}C^{2b-n}_{b-a-2}}{C^{a}_{n-b-1} C^{2b-n-1}_{b-a-2}}=\frac{(n-b)(n-b-a-1)}{(n-b-a)(2b-n)}=\frac{n-b-a-1}{n-b-a}<1.\]
Therefore, \(|N(f,g)|=N_{sm}(n|T)\) if and only if \(|A\cup C|=a+1\) and \(|B\cup D|=b+1\). In particular, \(N_{sm}(n|T)=C^{a}_{n-b-1}C^{2b-n-1}_{b-a-2}\). \(\Box\)
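Again, both quantities can be checked by brute force for the smallest admissible parameters: for \(n=6\) and \(T=\{1,4\}\) (so that \(b=\frac{2n}{3}\)), Propositions 2.2 and 2.4 predict \(N_{m}(n|T)=2\) and \(N_{sm}(n|T)=1\). A small Python sketch, with the same helpers as before:

```python
from itertools import combinations

def flags(n, a, b):
    ground = range(1, n + 1)
    return [(frozenset(A), frozenset(B))
            for B in combinations(ground, b) for A in combinations(B, a)]

def general_position(f, g, n):
    full = frozenset(range(1, n + 1))
    return all(not (X & Y) or (X | Y) == full for X in f for Y in g)

def first_two_maxima(n, a, b):
    """Return (N_m, N_sm): the two largest values of |N(f,g)| over distinct flags."""
    V = flags(n, a, b)
    N = {f: {g for g in V if g != f and general_position(f, g, n)} for f in V}
    values = {len(N[f] & N[g]) for f in V for g in V if f != g}
    nm = max(values)
    return nm, max(v for v in values if v < nm)

print(first_two_maxima(6, 1, 4))  # (2, 1)
```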
So far, we have seen that \(N_{sm}(n|T)\) occurs in a unique form when \(b=\frac{2n}{3}\). Let \(f,g\in{\cal F}^{T}_{n}\) where \(T=\{a,b\}\) with \(a\leq\frac{n}{3}-1\) and \(b=\frac{2n}{3}\). Define \(SM(f)=\{h\in{\cal F}^{T}_{n}:|N(f,h)|=N_{sm}(n|T)\}\) and \(SM(f,g)=SM(f)\cap SM(g)\). Next we state two results which are useful in dealing with the case that \(b=\frac{2n}{3}\).
**Proposition 2.5**: Suppose that \(f,g\in{\cal F}^{T}_{n}\), where \(T=\{a,b\}\) with \(a\leq\frac{n}{3}-1\) and \(b=\frac{2n}{3}\). Then \(|SM(f,g)|=|SM(f^{\alpha},g^{\alpha})|\) for any \(\alpha\in Aut(\Gamma(n,T))\).
**Proof** Assume that \(h\in{\cal F}^{T}_{n}\) such that \(h\in SM(f,g)\). Thus, \(|N(f,h)|=|N(g,h)|=N_{sm}(n|T)\). Then by Fact 2.1 we deduce that \(|N(f,h)|=|N(f^{\alpha},h^{\alpha})|\) and \(|N(g,h)|=|N(g^{\alpha},h^{\alpha})|\) for any \(\alpha\in Aut(\Gamma(n,T))\). Conversely, for any \(\alpha\in Aut(\Gamma(n,T))\), if \(v\in SM(f^{\alpha},g^{\alpha})\) then \(v^{\alpha^{-1}}\in SM(f,g)\). This completes the proof of this proposition. \(\Box\)
**Proposition 2.6**: Let \(f=\{A,B\}\), \(g=\{A,C\}\), \(x=\{D,E\}\) and \(y=\{F,E\}\) be four flags in \({\cal F}^{T}_{n}\) such that \(|B\cap C|=b-1\) and \(|D\cap F|=a-1\), where \(T=\{a,b\}\) with \(a\leq\frac{n}{3}-1\) and \(b=\frac{2n}{3}\). Then \(|SM(f,g)|\neq|SM(x,y)|\).
**Proof** Let \(h=\{G,H\}\in{\cal F}^{T}_{n}\). By applying Proposition 2.4, we deduce that \(h\in SM(f,g)\) if and only if \(|G\cap A|=a-1\) and \(|H\cap B|=b-1\) and \(|H\cap C|=b-1\). Count \(|SM(f,g)|\). Note that there exist two possible shapes for \(H\), those are, \(B\cap C\subseteq H\) and \(B\cap C\not\subseteq H\) respectively. In the case of \(B\cap C\subseteq H\), we see that \(H=(B\cap C)\cup\{i\}\) where \(i\in[n]\setminus(B\cup C)\) and so the number of the choices of \(H\) is \(C^{1}_{n-b-1}\). Fix an \(H\), if \(G\subseteq B\cap C\) then \(G\) is the union of a \((a-1)\)-subset
in \(A\) and a \(1\)-subset in \((B\cap C)\setminus A\), and thus the number of the choices of \(G\) is \(C_{a}^{a-1}C_{b-1-a}^{1}\); and if \(G\not\subseteq B\cap C\) then \(G\) is the union of an \((a-1)\)-subset in \(A\) and \(H\setminus(B\cap C)\) and so the number of the choices of \(G\) is \(C_{a}^{a-1}\). Hence, in the case of \(B\cap C\subseteq H\), the number of the choices of \(h\) in \(SM(f,g)\) is
\[C_{n-b-1}^{1}(C_{a}^{a-1}C_{b-1-a}^{1}+C_{a}^{a-1})=(\frac{n}{3}-1)a(\frac{2n} {3}-a-1)+(\frac{n}{3}-1)a.\]
Consider \(B\cap C\not\subseteq H\). In this case, \(H=(B\setminus C)\cup(C\setminus B)\cup K\) where \(K\) is a \((b-2)\)-subset of \(B\cap C\). In addition, there exist two possible subcases for \(H\), those are, \(A\subseteq H\) and \(A\not\subseteq H\). In the subcase of \(A\subseteq H\), the number of the choices of \(H\) is \(C_{b-1-a}^{b-2-a}\). Fix an \(H\), \(G\) is the union of an \((a-1)\)-subset of \(A\) and a \(1\)-subset of \(H\setminus A\), and thus the number of the choices of \(G\) is \(C_{a}^{a-1}C_{b-a}^{1}\). If \(A\not\subseteq H\), then \(H=(B\cup C)\setminus\{j\}\) where \(j\in A\), and so the number of the choices of \(H\) is \(C_{a}^{1}\). Fix an \(H\), \(G=(A\setminus\{j\})\cup\{k\}\) where \(k\in H\setminus A\) and so the number of the choices of \(G\) is \(C_{b-a+1}^{1}\). Therefore, in the case of \(B\cap C\not\subseteq H\), the number of the choices of \(h\) in \(SM(f,g)\) is
\[C_{b-1-a}^{b-2-a}C_{a}^{a-1}C_{b-a}^{1}+C_{a}^{1}C_{b-a+1}^{1}=(\frac{2n}{3}- a-1)a(\frac{2n}{3}-a)+a(\frac{2n}{3}-a+1).\]
So we deduce that
\[|SM(f,g)|=a(\frac{2n}{3}-a)(n-a-1)+a.\]
Likewise, we can count \(|SM(x,y)|\). Let \(z=\{U,V\}\in\mathcal{F}_{n}^{T}\). By applying Proposition 2.4, we deduce that \(z\in SM(x,y)\) if and only if \(|U\cap D|=a-1\) and \(|U\cap F|=a-1\) and \(|V\cap E|=b-1\). Note that there exist two possible shapes for \(U\), those are, \(U\subseteq E\) and \(U\not\subseteq E\) respectively. Consider \(U\subseteq E\). If \(D\cap F\subseteq U\), then \(U=(D\cap F)\cup\{p\}\) where \(p\in E\setminus(D\cup F)\), and so the number of the choices of \(U\) is \(C_{b-1-a}^{1}\). Fix an \(U\), \(V\) is the union of a \((b-1)\)-subset containing \(U\) of \(E\) and a \(1\)-subset of \([n]\setminus E\), and thus the number of the choices of \(V\) is \(C_{b-a}^{b-a-1}C_{n-b}^{1}\). If \(D\cap F\not\subseteq U\), then \(U=(D\cup F)\setminus\{q\}\) where \(q\in D\cap F\) and so the number of the choices of \(U\) is \(C_{a-1}^{1}\). Fix an \(U\), the number of the choices of \(V\) is \(C_{b-a}^{b-a-1}C_{n-b}^{1}\) too. Therefore, in the case of \(U\subseteq E\), the number of the choices of \(z\) in \(SM(x,y)\) is
\[C_{b-1-a}^{1}C_{b-a}^{b-a-1}C_{n-b}^{1}+C_{a-1}^{1}C_{b-a-1}^{b-a-1}C_{n-b}^{ 1}=(\frac{2n}{3}-a-1)(\frac{2n}{3}-a)\frac{n}{3}+(a-1)(\frac{2n}{3}-a)\frac{n }{3}.\]
Consider \(U\not\subseteq E\). In this case, \(U=(D\cap F)\cup\{r\}\) where \(r\in[n]\setminus E\) and so the number of the choices of \(U\) is \(C_{n-b}^{1}\). Fix an \(U\), \(V\) is the union of \(U\) and a \((b-a)\)-subset of \(E\setminus(D\cap F)\) and thus the number of the choices of \(V\) is \(C_{b-a+1}^{b-a}\). Hence, in the case of \(U\not\subseteq E\), the number of the choices of \(z\) in \(SM(x,y)\) is
\[C_{n-b}^{1}C_{b-a+1}^{b-a}=\frac{n}{3}(\frac{2n}{3}-a+1).\]
Therefore, we deduce that
\[|SM(x,y)|=\frac{n}{3}(\frac{2n}{3}-a)(\frac{2n}{3}-1)+\frac{n}{3}.\]
By a direct computation, we derive that \(|SM(f,g)|-|SM(x,y)|=(a-\frac{n}{3})\{1-(\frac{2n}{3}-a)(a-\frac{2n}{3}+1)\}\neq 0\) for all \(a\leq\frac{n}{3}-1\). The proof of this proposition is complete.
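For instance, for the smallest admissible parameters \(n=6\) and \(a=1\) (so \(b=4\)), the two counts and the displayed difference specialize to

\[|SM(f,g)|=1\cdot 3\cdot 4+1=13,\qquad|SM(x,y)|=2\cdot 3\cdot 3+2=20,\]

\[|SM(f,g)|-|SM(x,y)|=\Big(1-\tfrac{6}{3}\Big)\big\{1-(4-1)(1-4+1)\big\}=(-1)\cdot 7=-7\neq 0.\]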
Now we end this section by introducing a graph that will be used. Let \(\binom{[n]}{k}\) denote the set of all \(k\)-subsets of \([n]\), where \(1\leq k\leq n-1\). If \(A,B\in\binom{[n]}{k}\) such that \(|A\cap B|=k-1\), then we
say that \(A\) and \(B\) are _almost identical_. Define the _almost identical graph_ \(AIG(n,k)\) as the graph with vertex set \(\binom{[n]}{k}\), two vertices being adjacent when the corresponding subsets are almost identical.
**Proposition 2.7**: Let \(n,k\) be positive integers with \(1\leq k\leq n-1\). Then \(AIG(n,k)\) is a connected graph.
**Proof** Let \(A\) and \(B\) be two vertices of \(AIG(n,k)\). If \(|A\cap B|=k-1\) then \(A\) and \(B\) are adjacent. So we assume that \(A=\{a_{1},a_{2},...,a_{l},b_{1},b_{2},...,b_{k-l}\}\) and \(B=\{a_{1},a_{2},...,a_{l},c_{1},c_{2},...,c_{k-l}\}\) with \(|A\cap B|=|\{a_{1},a_{2},...,a_{l}\}|=l<k-1\). We note that
\[\{b_{1},b_{2},...,b_{k-l}\}\leftrightarrow\{b_{1},b_{2},...,b_{k-l-1},c_{1}\} \leftrightarrow\{b_{1},b_{2},...,b_{k-l-2},c_{1},c_{2}\}\leftrightarrow\cdots \leftrightarrow\{b_{1},c_{1},c_{2},...,c_{k-l-1}\},\]
which implies that there exists a path between \(A\) and \(B\). The proof of this proposition is complete. \(\Box\)
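The chain in the proof is completely explicit; the following short Python sketch produces such a path between any two \(k\)-subsets, swapping one element at a time.

```python
def almost_identical_path(A, B):
    """A walk from A to B in AIG(n, k), following the chain in the proof of
    Proposition 2.7: the elements of A not in B are replaced, one by one,
    by the elements of B not in A."""
    A, B = set(A), set(B)
    path, cur = [frozenset(A)], set(A)
    for out_elt, in_elt in zip(sorted(A - B), sorted(B - A)):
        cur = (cur - {out_elt}) | {in_elt}
        path.append(frozenset(cur))
    return path

# Example in AIG(5, 3): consecutive sets share exactly k - 1 = 2 elements.
for step in almost_identical_path({1, 2, 3}, {3, 4, 5}):
    print(sorted(step))
```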
## 3 Main Result
It is well-known that if \(|T|=1\) then \(\Gamma(n,T)\) is isomorphic to a Kneser graph. However, we note that if \(T=\{t\}\) with \(t>\frac{n}{2}\) then \(\Gamma(n,T)\) does not fit the traditional definition of a Kneser graph. For the reader's convenience, we first give a proof of this well-known result.
**Lemma 3.1**: Let \(T=\{t\}\subseteq[n-1]\). Then \(Aut(\Gamma(n,T))\cong S_{n}\).
**Proof** Let \(T=\{t\}\subseteq[n-1]\). If \(t\leq\frac{n}{2}\), then by [5, Corollary 7.8.2] we see that \(Aut(\Gamma(n,T))\cong S_{n}\). Suppose \(t>\frac{n}{2}\). Define a map \(\sigma:\Gamma(n,T)\to KG(n,n-t),A\mapsto\overline{A}\), where \(\overline{A}\) denotes the complement of \(A\) in \([n]\). It is easy to verify that \(\sigma\) is an isomorphism from \(\Gamma(n,T)\) to \(KG(n,n-t)\). The proof of this lemma is complete. \(\Box\)
**Proposition 3.2**: Let \(T=\{a,b\}\subseteq[n-1]\) such that \(a<\frac{n}{2}\) and \(\frac{2n}{3}<b\) and \(a+b+1\leq n\). Then \(\Sigma=\{{\cal F}_{n}^{(T|A)}:A\subseteq[n],|A|=a\}\) is a system of nontrivial blocks of \(Aut(\Gamma(n,T))\) acting on \({\cal F}_{n}^{T}\).
**Proof** Let \(\rho\) be an automorphism in \(Aut(\Gamma(n,T))\). Pick \(f=\{A,B\}\) and \(g=\{A,C\}\) in \({\cal F}^{(T|A)}\) such that \(|B\cap C|=b-1\) and \(|A|=a\). It follows from Proposition 2.2 (i) and Remark 2.3 that there exists an \(a\)-subset \(D\subseteq[n]\) such that \(f^{\rho},g^{\rho}\in{\cal F}^{(T|D)}\). In particular, if \(f^{\rho}=\{D,G\}\) and \(g^{\rho}=\{D,H\}\) then \(|G\cap H|=b-1\). By the same token, we deduce that \(h^{\rho}\in{\cal F}^{(T|D)}\) in the case when \(h=\{A,X\}\in{\cal F}^{(T|A)}\) with \(|X\cap B|=b-1\) or \(|X\cap C|=b-1\). Continuing along this line of thought, Proposition 2.7 indicates that \(h^{\rho}\in{\cal F}^{(T|D)}\) for every \(h\in{\cal F}^{(T|A)}\). Therefore, \(\Sigma\) is a system of nontrivial blocks of \(Aut(\Gamma(n,T))\) acting on \({\cal F}_{n}^{T}\). \(\Box\)
**Lemma 3.3**: Let \(T=\{a,b\}\subseteq[n-1]\) such that \(a<\frac{n}{2}\) and \(\frac{2n}{3}<b\) and \(a+b+1\leq n\). Then \(Aut(\Gamma(n,T))\cong S_{n}\).
**Proof** It follows from Proposition 3.2 that \(\Sigma=\{{\cal F}_{n}^{(T|A)}:A\subseteq[n],|A|=a\}\) is a system of blocks of \(Aut(\Gamma(n,T))\) acting on \({\cal F}_{n}^{T}\). Consider the induced action of \(Aut(\Gamma(n,T))\) on \(\Sigma\), that is, \(Aut(\Gamma(n,T))|_{\Sigma}\). Let \(\rho\in Aut(\Gamma(n,T))\) and \({\cal F}_{n}^{(T|A)},{\cal F}_{n}^{(T|B)},{\cal F}_{n}^{(T|C)},{\cal F}_{n}^{(T|D)}\in\Sigma\) such that \({\cal F}_{n}^{(T|A)}{}^{\rho}={\cal F}_{n}^{(T|B)}\) and \({\cal F}_{n}^{(T|C)}{}^{\rho}={\cal F}_{n}^{(T|D)}\). Note that \(B\cap D=\emptyset\) if and only if \(A\cap C=\emptyset\). Otherwise
one of \(\{(f,g),(f^{\rho},g^{\rho})\}\) is an edge and the other is not an edge for some \(f\in{\cal F}_{n}^{(T|A)}\) and \(g\in{\cal F}_{n}^{(T|C)}\). Therefore, \(Aut(\Gamma(n,T))|_{\Sigma}\leq Aut(KG(n,a))\). On the other hand, the symmetric group on the set \([n]\) induces an automorphism group of \(\Gamma(n,T)\), and thus \(Aut(\Gamma(n,T))|_{\Sigma}\cong Aut(KG(n,a))\cong S_{n}\). Thus, it suffices to check that no nonidentity automorphism can fix all the blocks of \(\Sigma\). Let \(\rho\) be an automorphism in \(Aut(\Gamma(n,T))\) such that \({\cal F}_{n}^{(T|A)}{}^{\rho}={\cal F}_{n}^{(T|A)}\) for all \(A\subseteq[n]\) with \(|A|=a\). Assume that there exist two distinct flags \(f=\{A,B\}\) and \(g=\{A,C\}\) in \({\cal F}_{n}^{(T|A)}\) such that \(f^{\rho}=g\) for some \(A\subseteq[n]\) with \(|A|=a\). Since \(f\neq g\), it follows that \(B\neq C\), in other words, there exists an \(i\in C\setminus B\). Hence, there exists a flag \(h=\{D,E\}\in N(f)\) such that \(i\in D\) and \(|D|=a\). By Fact 2.1, we see that \(N(f)^{\rho}=N(g)\). However, it is clear that \(N(g)\cap{\cal F}_{n}^{(T|D)}=\emptyset\), which indicates that \({\cal F}_{n}^{(T|D)}{}^{\rho}\neq{\cal F}_{n}^{(T|D)}\), a contradiction. \(\Box\)
**Proposition 3.4**: Let \(T=\{a,b\}\subseteq[n-1]\) such that \(a<\frac{n}{2}<b<\frac{2n}{3}\) and \(a+b+1\leq n\). Then \(\Omega=\{{\cal F}_{n}^{(T|B)}:B\subseteq[n],|B|=b\}\) is a system of nontrivial blocks of \(Aut(\Gamma(n,T))\) acting on \({\cal F}_{n}^{T}\).
**Proof** Let \(\rho\) be an automorphism in \(Aut(\Gamma(n,T))\). Take two flags \(f=\{A,B\}\) and \(g=\{C,B\}\) in \({\cal F}^{(T|B)}\) with \(|A\cap C|=a-1\) and \(|B|=b\). It follows from Proposition 2.2 (ii) and Remark 2.3 that there exists a \(b\)-subset \(D\subseteq[n]\) such that \(f^{\rho},g^{\rho}\in{\cal F}^{(T|D)}\). Additionally, if \(f^{\rho}=\{G,D\}\) and \(g^{\rho}=\{H,D\}\) then \(|G\cap H|=a-1\). In a similar manner, \(h^{\rho}\in{\cal F}^{(T|D)}\) in case when \(h=\{X,B\}\in{\cal F}^{(T|B)}\) with \(|X\cap A|=a-1\) or \(|X\cap C|=a-1\). Along this idea of thought, Proposition 2.7 implies that \(h^{\rho}\in{\cal F}^{(T|D)}\) for every \(h\in{\cal F}^{(T|B)}\). Therefore, \(\Omega\) is a system of nontrivial blocks of \(Aut(\Gamma(n,T))\) acting on \({\cal F}_{n}^{T}\). \(\Box\)
**Lemma 3.5**: Let \(T=\{a,b\}\subseteq[n-1]\) such that \(a<\frac{n}{2}<b<\frac{2n}{3}\) and \(a+b+1\leq n\). Then \(Aut(\Gamma(n,T))\cong S_{n}\).
**Proof** By Proposition 3.4, it follows that \(\Omega=\{{\cal F}_{n}^{(T|B)}:B\subseteq[n],|B|=b\}\) is a system of nontrivial blocks of \(Aut(\Gamma(n,T))\) acting on \({\cal F}_{n}^{T}\). Consider the induced action \(Aut(\Gamma(n,T))|_{\Omega}\) of \(Aut(\Gamma(n,T))\) acting on \(\Omega\). Let \(\rho\in Aut(\Gamma(n,T))\) and \({\cal F}_{n}^{(T|A)},{\cal F}_{n}^{(T|B)},{\cal F}_{n}^{(T|C)},{\cal F}_{n}^{(T |D)}\in\Omega\) such that \({\cal F}_{n}^{(T|A)}{}^{\rho}={\cal F}_{n}^{(T|B)}\) and \({\cal F}_{n}^{(T|C)}{}^{\rho}={\cal F}_{n}^{(T|D)}\). Note that \(B\cup D=[n]\) if and only if \(A\cup C=[n]\). Otherwise one of \((f,g),(f^{\rho},g^{\rho})\) is an edge and the other is not an edge will occur for some \(f\in{\cal F}_{n}^{(T|A)}\) and \(g\in{\cal F}_{n}^{(T|C)}\). Therefore, \(Aut(\Gamma(n,T))|_{\Omega}\leq Aut(KG(n,n-b))\). On the other hand, the symmetric group on the set \([n]\) induces an automorphism group of \(\Gamma(n,T)\), and thus \(Aut(\Gamma(n,T))|_{\Omega}\cong Aut(KG(n,n-b))\cong S_{n}\). Thus, it suffices to check that no nonidentity automorphism can fix all the blocks of \(\Omega\). Let \(\rho\) be an automorphism in \(Aut(\Gamma(n,T))\) such that \({\cal F}_{n}^{(T|B)}{}^{\rho}={\cal F}_{n}^{(T|B)}\) for all \(B\subseteq[n]\) with \(|B|=b\). Assume that there exist two distinct flags \(f=\{A,B\}\) and \(g=\{C,B\}\) in \({\cal F}_{n}^{(T|B)}\) such that \(f^{\rho}=g\) for some \(B\subseteq[n]\) with \(|B|=b\). Since \(f\neq g\), it follows that \(A\neq C\), in other words, there exists an \(i\in C\setminus A\). Hence, there exists a flag \(h=\{D,E\}\in N(f)\) such that \(i\in D\) and \(|D|=a\) and \(|E|=b\). By Fact 2.1, we see that \(N(f)^{\rho}=N(g)\). However, it is clear that \(N(g)\cap{\cal F}_{n}^{(T|E)}=\emptyset\), and which indicates that \({\cal F}_{n}^{(T|E)}{}^{\rho}\neq{\cal F}_{n}^{(T|E)}\), a contradiction. \(\Box\)
**Lemma 3.6**: Let \(T=\{a,b\}\subseteq[n-1]\) such that \(a\leq\frac{n}{3}-1\) and \(b=\frac{2n}{3}\). Then the following statements hold.
(i) \(\Sigma=\{{\cal F}_{n}^{(T|A)}:A\subseteq[n],|A|=a\}\) and \(\Omega=\{{\cal F}_{n}^{(T|B)}:B\subseteq[n],|B|=b\}\) are two systems of nontrivial blocks of \(Aut(\Gamma(n,T))\) acting on \({\cal F}_{n}^{T}\).
(ii) \(Aut(\Gamma(n,T))\cong S_{n}\).
**Proof** Let \(\rho\) be an automorphism in \(Aut(\Gamma(n,T))\). Pick \(f=\{A,B\}\) and \(g=\{A,C\}\) in \({\cal F}_{n}^{(T|A)}\) such that \(|B\cap C|=b-1\) and \(|A|=a\). It follows from Proposition 2.2 (iii), Remark 2.3, Proposition 2.5 and Proposition 2.6 that there exists an \(a\)-subset \(D\subseteq[n]\) such that \(f^{\rho},g^{\rho}\in{\cal F}^{(T|D)}\). An argument similar to the one used in the proof of Proposition 3.2 shows that \(\Sigma\) is a system of nontrivial blocks of \(Aut(\Gamma(n,T))\) acting on \({\cal F}_{n}^{T}\). In a similar manner, we can show that \(\Omega\) is also a system of nontrivial blocks of \(Aut(\Gamma(n,T))\) acting on \({\cal F}_{n}^{T}\). Proceeding as in the proof of Lemma 3.3 or Lemma 3.5, we have \(Aut(\Gamma(n,T))\cong S_{n}\). \(\Box\)
So far, we have not used [8, Remark 5.13] to give a positive answer to Question 1.1. On the other hand, by applying [8, Lemma 2.1(b)], we obtain the following theorem.
**Theorem 3.7** Let \(T=\{a,b\}\subseteq[n-1]\) with \(a<\frac{n}{2}<b\) and \(a+b\neq n\). Then \(Aut(\Gamma(n,T))\cong S_{n}\).
## 4 Concluding Remarks
Let \(T=\{a,b\}\subseteq[n-1]\) with \(a<b\). Note that three situations are left to consider, namely \(a+b=n\), \(b\leq\frac{n}{2}\), and \(\frac{n}{2}\leq a\).
Let \(T=\{a,b\}\subseteq[n-1]\) with \(a<b\) and \(a+b=n\). Suppose that \(f=\{A,B\}\) and \(g=\{C,D\}\) are two flags in \({\cal F}_{n}^{T}\) such that \(|A|=|C|=a\) and \(|B|=|D|=b\). Clearly, \((f,g)\) is an edge if and only if \(C=\overline{B}\) and \(D=\overline{A}\). Define \(\Delta_{n}^{T(A|B)}=\{f,g\}\) where \(f=\{A,B\}\) and \(g=\{\overline{A},\overline{B}\}\) are in \({\cal F}_{n}^{T}\), and \(\Omega=\{\Delta_{n}^{T(A|B)}:A\subset B\subset[n],|A|=a,|B|=b\}\). It is straightforward to see that \(\Omega\) is a system of nontrivial blocks of \(Aut(\Gamma(n,T))\) acting on \({\cal F}_{n}^{T}\), and further the following lemma holds.
**Lemma 4.1** Let \(\Omega=\{\Delta_{n}^{T(A|B)}:A\subset B\subset[n],|A|=a,|B|=b\}\), where \(T=\{a,b\}\subseteq[n-1]\) with \(a<b\) and \(a+b=n\). Then \(Aut(\Gamma(n,T))\cong N\wr S_{m}\), where \(N=\overbrace{S_{2}\times\cdots\times S_{2}}^{m}\) and \(m=|\Omega|=\frac{1}{2}C_{n}^{b}C_{b}^{a}\).
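For the smallest admissible parameters this can be verified directly: for \(n=3\) and \(T=\{1,2\}\), the graph \(\Gamma(3,T)\) is a perfect matching on \(6\) flags, and the lemma predicts \(|Aut(\Gamma(3,T))|=2^{3}\cdot 3!=48\). A brute-force Python sketch, with the same helpers as in the earlier sketches:

```python
from itertools import combinations, permutations

def flags(n, a, b):
    ground = range(1, n + 1)
    return [(frozenset(A), frozenset(B))
            for B in combinations(ground, b) for A in combinations(B, a)]

def general_position(f, g, n):
    full = frozenset(range(1, n + 1))
    return all(not (X & Y) or (X | Y) == full for X in f for Y in g)

def automorphism_count(n, a, b):
    """Count the vertex permutations of Gamma(n,{a,b}) sending edges to edges
    (feasible only for very small parameters)."""
    V = flags(n, a, b)
    E = [(i, j) for i in range(len(V)) for j in range(i)
         if general_position(V[i], V[j], n)]
    edge_set = {frozenset(e) for e in E}
    return sum(all(frozenset((s[i], s[j])) in edge_set for i, j in E)
               for s in permutations(range(len(V))))

print(automorphism_count(3, 1, 2))  # 48 = 2^3 * 3!
```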
For the case that \(b\leq\frac{n}{2}\), we consider a more general case, as follows.
**Lemma 4.2** Let \(\Sigma=\{{\cal F}_{n}^{(T|A)}:A\subseteq[n],|A|=t_{r}\}\), where \(T=\{t_{1},t_{2},...,t_{r}\}\subseteq[n-1]\) such that \(t_{1}<t_{2}<\cdots<t_{r}\leq\frac{n}{2}\). Then \(\Sigma\) is a system of nontrivial blocks of \(Aut(\Gamma(n,T))\) acting on \({\cal F}_{n}^{T}\). In addition, \(Aut(\Gamma(n,T))\cong N\wr S_{n}\) where \(N=\overbrace{S_{m}\times\cdots\times S_{m}}^{C_{n}^{t_{r}}}\) and \(m=|{\cal F}_{n}^{(T|A)}|\) for a \({\cal F}_{n}^{(T|A)}\in\Sigma\).
**Proof** Let \(\rho\in Aut(\Gamma(n,T))\) and \({\cal F}_{n}^{(T|A)}\in\Sigma\). Suppose that \(f\in{\cal F}^{(T|A)}\) and \(f^{\rho}\in{\cal F}^{(T|B)}\) for some \({\cal F}^{(T|B)}\in\Sigma\). It suffices to prove that \(g^{\rho}\in{\cal F}^{(T|B)}\) for any \(g\in{\cal F}^{(T|A)}\). Proof by contradiction. Assume that there exists a \(h\in{\cal F}^{(T|A)}\) such that \(h^{\rho}\in{\cal F}^{(T|C)}\neq{\cal F}^{(T|B)}\). It is clear that
\[N(f,h)=\bigcup{\cal F}_{n}^{(T|D)},\mbox{ where }D\subseteq[n]\setminus A \mbox{ with }|D|=t_{r}.\]
Likewise, it is simple to see that
\[N(f^{\rho},h^{\rho})=\bigcup{\cal F}^{(T|D)},\mbox{ where }D\subseteq[n]\setminus(B\cup C) \mbox{ with }|D|=t_{r}.\]
Obviously, \(|N(f^{\rho},h^{\rho})|<|N(f,h)|\), which contradicts Fact 2.1. Therefore, \(\Sigma\) is a system of nontrivial blocks of \(Aut(\Gamma(n,T))\) acting on \({\cal F}_{n}^{T}\).
Let \(\rho\in Aut(\Gamma(n,T))\) and \({\cal F}_{n}^{(T|A)},{\cal F}_{n}^{(T|B)},{\cal F}_{n}^{(T|C)},{\cal F}_{n}^{(T |D)}\in\Sigma\) such that \({\cal F}_{n}^{(T|A)}{}^{\rho}={\cal F}_{n}^{(T|B)}\) and \({\cal F}_{n}^{(T|C)}{}^{\rho}={\cal F}_{n}^{(T|D)}\). Clearly, \(B\cap D=\emptyset\) if and only if \(A\cap C=\emptyset\). Hence, we deduce that
\(Aut(\Gamma(n,T))|_{\Sigma}\leq Aut(KG(n,t_{r}))\). Additionally, the symmetric group on the set \([n]\) induces an automorphism group of \(\Gamma(n,T)\), and therefore \(Aut(\Gamma(n,T))|_{\Sigma}\cong Aut(KG(n,t_{r}))\cong S_{n}\). On the other hand, it is obvious that the kernel of the natural homomorphism from \(Aut(\Gamma(n,T))\) to \(Aut(\Gamma(n,T))|_{\Sigma}\) is the direct product of the symmetric groups on all blocks of \(\Sigma\), and therefore \(Aut(\Gamma(n,T))\cong N\wr S_{n}\) where \(N=\overbrace{S_{m}\times\cdots\times S_{m}}^{C_{n}^{t_{r}}}\) and \(m=|\mathcal{F}_{n}^{(T|A)}|\) for a \(\mathcal{F}_{n}^{(T|A)}\in\Sigma\). \(\Box\)
By [8, Lemma 2.1(b)] and Lemma 4.2, we derive the following corollary.
**Corollary 4.3**: Let \(T=\{t_{1},t_{2},...,t_{r}\}\subseteq[n-1]\) with \(\frac{n}{2}\leq t_{1}<t_{2}<\cdots<t_{r}\). Then \(Aut(\Gamma(n,T))\cong N\wr S_{n}\), where \(N=\overbrace{S_{m}\times\cdots\times S_{m}}^{C_{n}^{t_{r}}}\) and \(m=|\mathcal{F}_{n}^{(T|A)}|\) with \(|A|=n-t_{1}\). \(\Box\)
## 5 Acknowledgement
We are very grateful to the anonymous referees for their useful suggestions and comments.
|
2307.10755 | Fourier decay of equilibrium states on hyperbolic surfaces | Let $\Gamma$ be a (convex-)cocompact group of isometries of the hyperbolic
space $\mathbb{H}^d$, let $M := \mathbb{H}^d/\Gamma$ be the associated
hyperbolic manifold, and consider a real valued potential $F$ on its unit
tangent bundle $T^1 M$. Under a natural regularity condition on $F$, we prove
that the associated $(\Gamma,F)$-Patterson-Sullivan densities are stationary
measures with exponential moment for some random walk on $\Gamma$. As a
consequence, when $M$ is a surface, the associated equilibrium state for the
geodesic flow on $T^1 M$ exhibit "Fourier decay", in the sense that a large
class of oscillatory integrals involving it satisfies power decay. It follows
that the non-wandering set of the geodesic flow on convex-cocompact hyperbolic
surfaces has positive Fourier dimension, in a sense made precise in the
appendix. | Gaétan Leclerc | 2023-07-20T10:37:24Z | http://arxiv.org/abs/2307.10755v1 | # Fourier decay of equilibrium states on hyperbolic surfaces
###### Abstract
Let \(\Gamma\) be a (convex-)cocompact group of isometries of the hyperbolic space \(\mathbb{H}^{d}\), let \(M:=\mathbb{H}^{d}/\Gamma\) be the associated hyperbolic manifold, and consider a real valued potential \(F\) on its unit tangent bundle \(T^{1}M\). Under a natural regularity condition on \(F\), we prove that the associated \((\Gamma,F)\)-Patterson-Sullivan densities are stationary measures with exponential moment for some random walk on \(\Gamma\). As a consequence, when \(M\) is a surface, the associated equilibrium state for the geodesic flow on \(T^{1}M\) exhibit "Fourier decay", in the sense that a large class of oscillatory integrals involving it satisfies power decay. It follows that the non-wandering set of the geodesic flow on convex-cocompact hyperbolic surfaces has positive Fourier dimension, in a sense made precise in the appendix.
## 1 Introduction
### State of the art
Since the early work of Dolgopyat on the decay of correlation of Anosov flows [11], we know that the rate of mixing of hyperbolic flows (with respect to some **equilibrium states**) may be linked with the spectral properties of "twisted transfer operators". This idea has been widely used and generalized: see, for example, [10], [11], [12], and [13], to only name a few related work. In concrete terms, exponential mixing can be reduced to exhibiting enough cancellations in sums of exponentials. It turns out that this (rather complicated) process can be sometimes simplified if one knows that the Fourier transform of those equilibrium states exhibit Fourier decay. This idea has been explored and discussed in Li's work on stationary measures [14], as well as in [15], and more recently in a preprint by Khalil [16]. This connection with the exponential mixing of dynamical systems has sparked recent interest in studying the behavior of the Fourier transform of measures. Historically, this was motivated by understanding sets of unicity for Fourier series [17], which lead us to discover that the Fourier properties of a measure may be used to study the arithmetic properties of its support. This idea is encoded in the notion of "Fourier dimension": see for exemple a recent preprint of Fraser [18] (introducing a new notion of "Fourier dimension spectrum") and the references therein. Let us introduce this notion.
The Fourier dimension is better understood if we first recall a well known formula for the Hausdorff dimension. If \(E\subset\mathbb{R}^{d}\) is a compact subset of some euclidean space, a corollary (see for example [15]) of a lemma by Frostmann [19] yields the following identity:
\[\dim_{H}E=\sup\left\{\alpha\in[0,d]\ |\ \exists\mu\in\mathcal{P}(E),\ \int_{\mathbb{R}^{d}}|\widehat{\mu}(\xi)|^{2}|\xi|^{\alpha-d}d\xi<\infty \right\},\]
where \(\dim_{H}E\) is the Hausdorff "fractal" dimension of \(E\), \(\mathcal{P}(E)\) is the set of all (borel) probability measures supported on \(E\), and where \(\widehat{\mu}:\mathbb{R}^{d}\to\mathbb{C}\), given by
\[\widehat{\mu}(\xi):=\int_{\mathbb{R}^{d}}e^{-2i\pi x\cdot\xi}d\mu(x),\]
is the Fourier transform of the measure \(\mu\in\mathcal{P}(E)\). The condition on the measure in the supremum can be though as a decay condition "on average". In particular, the inner integral is finite if \(\widehat{\mu}(\xi)\) decays like \(|\xi|^{-\alpha/2-\varepsilon}\) for large \(\xi\). With this in mind, the following notion is quite natural to introduce: we define the Fourier dimension of \(E\subset\mathbb{R}^{d}\) by the formula
\[\dim_{F}E:=\sup\left\{\alpha\in[0,d]\ |\ \exists\mu\in\mathcal{P}(E),\exists C \geq 1,\forall\xi\in\mathbb{R}^{d}\setminus\{0\},\ |\widehat{\mu}(\xi)|\leq C|\xi|^{-\alpha/2}\right\}.\]
While it is clear that \(0\leq\dim_{F}E\leq\dim_{H}E\leq d\), we do not always have equality between the two notions. For example, it is well known that the triadic Cantor set has Hausdorff dimension
\(\ln 2/\ln 3\), but has Fourier dimension \(0\). In fact, exhibiting deterministic (fractal) sets with positive Fourier dimension is quite a challenge. One of the earliest examples of deterministic (fractal) set with positive Fourier dimension was discovered by Kaufmann in 1980 [14], involving sets related to continued fractions. Kaufmann's method was optimised by Queffelec and Ramare in [15], and was more recently generalized by Jordan and Sahlsten [16]. A year later, Kaufmann [14] found deterministic examples of (fractal) _Salem sets_ in \(\mathbb{R}\), that is, sets \(E\) satisfying \(\dim_{H}E=\dim_{F}E\). This example is related to diophantine approximations. The construction was generalized in [11], and then in \(\mathbb{R}^{d}\) in [10]. Let us mention that there is a whole bibliography on random constructions of Salem sets: the interested reader may look at the references in [10]. Returning to the study of sets with positive Fourier dimension and of sets of unicity, let us also mention that a there has been a lot of interest for Cantor sets appearing from linear IFS. Related work includes [13], [14], [15], [16], [17], and [18].
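To illustrate the claim about the triadic Cantor set: the middle-thirds Cantor measure \(\mu\) satisfies the classical product formula \(|\widehat{\mu}(\xi)|=\prod_{k\geq 1}|\cos(2\pi\xi/3^{k})|\), and along the sequence \(\xi=3^{N}\) this quantity stays bounded away from \(0\). So this natural measure on the Cantor set has no power decay, consistent with the fact stated above. A minimal numerical sketch:

```python
import numpy as np

def cantor_fourier_modulus(xi, depth=40):
    """|mu^(xi)| for the middle-thirds Cantor measure, via a truncation of the
    product formula |mu^(xi)| = prod_{k>=1} |cos(2 pi xi / 3^k)|."""
    ks = np.arange(1, depth + 1)
    return float(np.abs(np.prod(np.cos(2 * np.pi * xi / 3.0 ** ks))))

# The modulus along xi = 3^N does not tend to 0 (it stays near 0.37).
for N in range(1, 7):
    print(N, round(cantor_fourier_modulus(3.0 ** N), 6))
```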
In 2017, Bourgain and Dyatlov [1] introduced a new method to prove positivity of the Fourier dimension for some sets. The paper takes place in a dynamical context. If we fix some Schottky group \(\Gamma<\operatorname{PSL}(2,\mathbb{R})\), then \(\Gamma\) acts naturally on \(\mathbb{R}\). One can prove that there exists a Cantor set \(\Lambda_{\Gamma}\subset\mathbb{R}\), called a limit set, which is invariant by \(\Gamma\). On this limit set, there is a family of natural probability measures associated with the dynamics that are called Patterson-Sullivan measures. Using results from additive combinatorics, more specifically a "sum-product phenomenon", Bourgain and Dyatlov managed to prove power decay for the Fourier transform of those probability measures. In particular, the limit set \(\Lambda_{\Gamma}\subset\mathbb{R}\) has positive Fourier dimension. An essential feature of \(\Gamma\) was the _nonlinearity_ of the dynamics.
The method introduced in this paper inspired numerous generalizations, beginning with a paper of Li, Naud and Pan [19] proving power decay for Patterson-Sullivan measures over (Zariski dense) Kleinian Schottky groups \(\Gamma<\operatorname{PSL}(2,\mathbb{C})\). In that paper, the authors prove that such measures may be seen as **stationary measures with finite exponential moment**, which allows them to use several results from the topic of random walks on groups. From this, they obtain positivity of the Fourier dimension of the associated limit set \(\Lambda_{\Gamma}\subset\mathbb{C}\).
From there, at least two different directions exists for generalization. The first one is to notice that Patterson-Sullivan measures are equilibrium states for some hyperbolic dynamical system. A natural generalization is then given by Sahlsten and Steven in [13]: for one dimensional and "totally nonlinear" IFS, one can show that any equilibrium state exhibit Fourier decay. In particular, Sahlsten and Steven obtain positive Fourier dimension for a large class of "nonlinear" Cantor sets. This paper use the method introduced by Bourgain-Dyatlov, and is also inspired by previous techniques appearing in [16]. See also [1] for some related work on nonlinear IFS and pointwise normality. Some complementary remarks on the work of Sahlsten and Steven may be found in [11]. Past the one-dimensionnal setting, it was proved by the author in [11] that the same results hold true in the context of hyperbolic Julia sets in the complex plane. Some decay results are also true, in the unstable direction, for equilibrium states of (sufficiently bunched) non-linear solenoids [11].
A second natural direction to look at is for result concerning stationary measures with exponential moment for random walks on groups. Li proved in [12] and [13] several Fourier decay results in the context of random walks over \(SL_{n}(\mathbb{R})\) (a crucial property of this group in the proofs is its splitness). Past the split setting, further results seems difficult to achieve.
### Our setting of interest
In this paper, we are interested in studying the Fourier properties of equilibrium states for the geodesic flow on convex-cocompact surfaces of constant negative curvature. More details on our setting will be explained during the paper, but let us quickly introduce the main objects at play. A useful reference is [10].
We work on hyperbolic manifolds, that is, a riemannian manifold \(M\) that may be written as \(M=\mathbb{H}^{d}/\Gamma\), where \(\mathbb{H}^{d}\) is the hyperbolic space of dimension \(d\), and where \(\Gamma\) is a (non-elementary, discrete, without torsion, orientation preserving) group of isometries of \(\mathbb{H}^{d}\). The geodesic flow
\(\phi=(\phi_{t})_{t\in\mathbb{R}}\) acts on the unit tangent bundle of \(M\), denoted by \(T^{1}M\). We say that a point \(v\in T^{1}M\) is wandering for the flow if there exists an open neighborhood \(U\subset T^{1}M\) of \(v\), and a positive number \(T>0\) such that:
\[\forall t>T,\ \phi_{t}(U)\cap U=\emptyset.\]
The set of non-wandering points for \(\phi\), denoted by \(NW(\phi)\subset T^{1}M\), is typically "fractal" and is invariant by the geodesic flow. We will work under the hypothesis that the group \(\Gamma\) is convex-cocompact, which exactly means that \(\operatorname{NW}(\phi)\) is assumed to be compact. In particular, _the case where \(M\) is itself compact is allowed_. Under this condition, the flow \(\phi\) restricted to \(NW(\phi)\) is Axiom A.
In this context, for any choice of Holder regular potential \(F:T^{1}M\to\mathbb{R}\), and for any probability measure \(m\in\mathcal{P}(T^{1}M)\) (the set of borel probability measures on \(T^{1}M\)) invariant by the geodesic flow, one can consider the _metric pressure_ associated to \(m\), defined by:
\[P_{\Gamma,F}(m)=h_{m}(\phi)+\int_{\operatorname{NW}(\phi)}Fdm,\]
where \(h_{m}(\phi)\) denotes the entropy of the time-\(1\) map of the geodesic flow with respect to the measure \(m\). Notice that any probability measure invariant by the geodesic flow must have support included in the non-wandering set of \(\phi\). The _topological pressure_ is then defined by
\[P(\Gamma,F):=\sup_{m}P_{\Gamma,F}(m),\]
where the sup is taken over all the \(\phi\)-invariant probability measures \(m\). Those quantities generalize the variationnal principle for the topological and metric entropy (that we recover when \(F=0\)). It is well known that this supremum is, in fact, a maximum: see for example [10] or [11].
**Theorem 1.1**.: _Let \(\Gamma\) be convex-cocompact, \(M:=\mathbb{H}^{d}/\Gamma\), and \(F:T^{1}M\to\mathbb{R}\) be a Holder regular potential. Then there exists a unique probability measure \(m_{F}\) invariant by \(\phi\) such that \(P_{\Gamma,F}(m_{F})=P(\Gamma,F)\). This measure is called the equilibrium state associated to \(F\) and its support is the non-wandering set of the geodesic flow. When \(F=0\), \(m_{F}\) is the measure of maximal entropy._
Theorem 6.1 in [11] also gives us a description of equilibrium states. To explain it, recall that the _Hopf coordinates_ allow us to identify \(T^{1}\mathbb{H}^{d}\) with \(\partial_{\infty}\mathbb{H}^{d}\times\partial_{\infty}\mathbb{H}^{d}\times \mathbb{R}\), where \(\partial_{\infty}\mathbb{H}^{d}\) denotes the _ideal boundary_ of the hyperbolic space (diffeomorphic to a sphere in our context). The measure \(m_{F}\) lifts to a \(\Gamma\)-invariant measure \(\tilde{m}_{F}\) on \(T^{1}\mathbb{H}^{d}\), which can then be studied in these coordinates. The interesting remark is that \(\tilde{m}_{F}\) may be seen as a product measure, involving what we call \((\Gamma,F)\)-Patterson-Sullivan densities, which are generalizations of the usual Patterson-Sullivan probability measures. More precisely, there exist \(\mu_{F}\) and \(\mu_{F}^{\star}\), two Patterson-Sullivan densities supported on the ideal boundary \(\partial_{\infty}\mathbb{H}^{d}\), such that one may write (in these Hopf coordinates):
\[d\tilde{m}_{F}(\xi,\eta,t)=\frac{d\mu_{F}(\xi)\otimes d\mu_{F}^{\star}(\eta) \otimes dt}{D_{F}(\xi,\eta)^{2}},\]
where \(D_{F}\) is the "potential gap" (or gap map), which we will define later. More details on Patterson-Sullivan densities can be found in section 2 (which will be devoted to recalling various preliminary results). Since the Hopf coordinates are smooth on \(\mathbb{H}^{d}\), we see that one may reduce Fourier decay for \(m_{F}\) to proving Fourier decay for Patterson-Sullivan densities. This reduction is the content of section 4. Then, to prove Fourier decay for those measures, several possibilities exist. With our current techniques, this may only be achieved when \(d=2\), so that Patterson-Sullivan densities are supported on the circle.
The first possibility would be to use the fact that, in this low-dimensional context, there exists a coding of the dynamics of the group \(\Gamma\) on the ideal boundary: see for example [10] or [1]. Using these, one should be able to get Fourier decay for Patterson-Sullivan densities by adapting the proof of Bourgain and Dyatlov in [1]. The second possibility would be to adapt the argument found in Li's appendix [14] to prove that Patterson-Sullivan densities are actually stationary measures with exponential moment (for a random walk on \(\Gamma\)). Since in dimension 2, isometries of \(\mathbb{H}^{2}\) may be seen as elements of \(\operatorname{SL}_{2}(\mathbb{R})\), one could then apply Li's work [13] to get Fourier decay. This is the strategy that we choose to follow in section 3. Finally, let us emphasize that we are only able to work under a regularity condition (R) (see definition 2.13) that ensures Holder regularity for our measures of interest. We now state our main results.
**Theorem 1.2** (Compare Theorem 3.2).: _Let \(\Gamma\) be a convex-cocompact group of isometries of \(\mathbb{H}^{d}\), and let \(F:T^{1}(\mathbb{H}^{d}/\Gamma)\to\mathbb{R}\) be a Holder potential satisfying (R). Let \(\mu\in\mathcal{P}(\partial_{\infty}\mathbb{H}^{d})\) be a \((\Gamma,F)\) Patterson-Sullivan density. Then there exists \(\nu\in\mathcal{P}(\Gamma)\) with exponential moment such that \(\mu\) is \(\nu\)-stationary and such that the support of \(\nu\) generates \(\Gamma\)._
Theorem 1.2 is our main technical result. The strategy is inspired by the appendix of [10], but in our setting, some additional difficulties appear since the potential may be non-zero. For example, the proof of Lemma A.12 in [10] fails to work in our context. Our main idea to replace this lemma is to do a careful study of the action of \(\Gamma\) on the sphere at infinity: we will be particularly interested in understanding its contraction properties. This is the content of section 2. The proof of Theorem 1.2 is in section 3. Once this main technical result is proved, one can directly use the work of Li [11] and get:
**Corollary 1.3** ([11], Theorem 1.5).: _Let \(\Gamma\) be a convex-cocompact group of isometries of \(\mathbb{H}^{2}\), and let \(F:T^{1}(\mathbb{H}^{2}/\Gamma)\to\mathbb{R}\) be a Holder potential satisfying (R). Let \(\mu\in\mathcal{P}(\Lambda_{\Gamma})\) be a \((\Gamma,F)\) Patterson-Sullivan density. There exists \(\varepsilon>0\) such that the following holds. Let \(R\geq 1\) and let \(\chi:\partial_{\infty}\mathbb{H}^{2}\simeq\mathbb{S}^{1}\to\mathbb{R}\) be an \(\alpha\)-Holder map supported on some compact \(K\). Then there exists \(C\geq 1\) such that, for any \(C^{2}\) function \(\varphi:\partial_{\infty}\mathbb{H}^{2}\to\mathbb{R}\) such that \(\|\varphi\|_{C^{2}}+(\inf_{K}|\varphi^{\prime}|)^{-1}\leq R\), we have:_
\[\forall s\in\mathbb{R}^{*},\ \left|\int_{\partial_{\infty}\mathbb{H}^{2}}e^{ is\varphi}\chi d\mu\right|\leq\frac{C}{|s|^{\varepsilon}}.\]
Using the previous Corollary 1.3 and using the Hopf coordinates, we can conclude Fourier decay for equilibrium states on convex-cocompact hyperbolic surfaces. The proof is done in section 4.
**Theorem 1.4** (Compare Theorem 4.5).: _Let \(\Gamma\) be a convex-cocompact group of isometries of \(\mathbb{H}^{2}\), and let \(F:T^{1}(\mathbb{H}^{2}/\Gamma)\to\mathbb{R}\) be a Holder potential satisfying (R). Let \(m_{F}\) be the associated equilibrium state. There exists \(\varepsilon>0\) such that the following holds. Let \(\chi:T^{1}\mathbb{H}^{2}\to\mathbb{R}\) be a Holder map supported on a compact neighborhood of some point \(v_{o}\in T^{1}\mathbb{H}^{2}\), and let \(\varphi:T^{1}\mathbb{H}^{2}\to\mathbb{R}^{3}\) be a \(C^{2}\) local chart containing the support of \(\chi\). There exists \(C\geq 1\) such that:_
\[\forall\zeta\in\mathbb{R}^{3}\setminus\{0\},\ \left|\int_{NW(\phi)}e^{i \zeta\cdot\varphi(v)}\chi(v)dm_{F}(v)\right|\leq\frac{C}{|\zeta|^{\varepsilon}},\]
_where \(\zeta\cdot\zeta^{\prime}\) and \(|\zeta|\) denotes the euclidean scalar product and the euclidean norm on \(\mathbb{R}^{3}\). In other word, the pushforward measure \(\varphi_{*}(\chi dm_{F})\in\mathcal{P}(\mathbb{R}^{3})\) exhibit power Fourier decay._
**Remark 1.5**.: We will see in section 4 that the argument to prove Theorem 1.4 from Corollary 1.3 is fairly general. In particular, if one is able to prove Fourier decay for \((\Gamma,F)\)-Patterson-Sullivan densities in some higher-dimensional context, this would prove Fourier decay for equilibrium states in higher dimensions. For example, [10] precisely proves Fourier decay for Patterson-Sullivan densities with the potential \(F=0\) when \(\Gamma<\mathrm{PSL}(2,\mathbb{C})\) is a Zariski-dense Kleinian Schottky group. This yields power decay for the measure of maximal entropy on \(M:=\mathbb{H}^{3}/\Gamma\) in this context.
**Remark 1.6**.: With our result in mind, it is natural to try to give some sense to the sentence "\(\dim_{F}NW(\phi)>0\)". The problem is that the notion of Fourier dimension is not well defined on manifolds. In the appendix, we suggest some natural notions of Fourier dimensions for sets living in a manifold, in particular a notion of _lower Fourier dimension_, that measure "persistence of the positivity of the Fourier dimension under deformations". The sentence is then made rigorous in Remark A.7 and Example A.23.
### Acknowledgments
I would like to thank my PhD advisor, Frederic Naud, for pointing out to me the existing bibliography on Patterson-Sullivan densities and for encouraging me to work in the context of hyperbolic surfaces. I would also like to thank Jialun Li for explaining to me his work on Fourier decay for stationary measures with exponential moment in the context of split groups. Finally, I would like to thank Paul Laubie for some very stimulating discussions.
## 2 Preliminaries
### Moebius transformations preserving the unit ball
In this first paragraph we recall well known properties of Moebius transformations. Useful references for the study of such maps are [1], [2] and [1]. The group of all Moebius transformations of \(\mathbb{R}^{d}\cup\{\infty\}\) is the group generated by inversions in spheres and by reflexions. This group contains dilations and rotations. Denote by \(\text{Mob}(B^{d})\) the group of all the Moebius transformations \(\gamma\) such that \(\gamma\) preserves the orientation of \(\mathbb{R}^{d}\), and such that \(\gamma(B^{d})=B^{d}\), where \(B^{d}\) denotes the open unit ball in \(\mathbb{R}^{d}\). These maps also act on the unit sphere \(\mathbb{S}^{d-1}\). These transformations can be put in a "normal form" as follows.
**Lemma 2.1** ([2], page 124).: _Define, for \(b\in B^{d}\), the associated "hyperbolic translation" by:_
\[\tau_{b}(x)=\frac{(1-|b|^{2})x+(|x|^{2}+2x\cdot b+1)b}{|b|^{2}|x|^{2}+2x\cdot b +1}.\]
_Then \(\tau_{b}\in\text{Mob}(B^{d})\). Moreover, for every \(\gamma\in\text{Mob}(B^{d})\), \(\tau_{\gamma(0)}^{-1}\gamma\in SO(d,\mathbb{R})\)._
It follows that the distortions of any Moebius transformation \(\gamma\in\text{Mob}(B^{d})\) can be understood by studying the distortions of hyperbolic translations. The main idea is the following: if \(\gamma(o)\) is close to the unit sphere, then \(\gamma\) contracts strongly on a large part of the sphere. Let us state a quantitative statement:
**Lemma 2.2** (First contraction lemma).: _Let \(\gamma\in\text{Mob}(B^{d})\). Suppose that \(|\gamma(o)|\geq c_{0}>0\). Denote by \(x_{\gamma}^{m}:=\gamma(o)/|\gamma(o)|\), and let \(\varepsilon_{\gamma}:=1-|\gamma(o)|\). Then:_
1. _There exists_ \(c_{1},c_{2}>0\) _that only depends on_ \(c_{0}\) _such that_ \[\forall x\in\mathbb{S}^{d-1},\ |x-x_{\gamma}^{m}|\geq c_{1}\varepsilon_{ \gamma}^{2}\Longrightarrow|\gamma^{-1}x-\gamma^{-1}x_{\gamma}^{m}|\geq c_{2}.\]
2. _For all_ \(c\in(0,1)\)_, there exists_ \(C\geq 1\) _and a set_ \(A_{\gamma}\subset\mathbb{S}^{d-1}\) _such that diam_\((A_{\gamma})\leq C\varepsilon_{\gamma}\) _and such that:_ \[\forall x\in\mathbb{S}^{d-1}\setminus A_{\gamma},\ |\gamma(x)-x_{\gamma}^{m}|\leq c \varepsilon_{\gamma}.\]
Proof.: Let \(\gamma\in\text{Mob}(B^{d})\). Since \(\gamma=\tau_{\gamma(o)}\Omega\) for some \(\Omega\in SO(n,\mathbb{R})\), we see that we may suppose \(\gamma=\tau_{b}\) for some \(b\in B^{d}\). Without loss of generality, we may even choose \(b\) of the form \(\beta e_{d}\), where \(e_{d}\) is the d-th vector of the canonical basis of \(\mathbb{R}^{d}\), and where \(\beta=|\gamma(o)|\in[c_{0},1[\). Denote by \(\pi_{d}\) the projection on the \(d-th\) coordinate. We find:
\[\forall x\in\mathbb{S}^{d-1},\ \pi_{d}\tau_{b}(x)=\frac{(1+\beta^{2})x_{d}+2 \beta}{2\beta x_{d}+(1+\beta^{2})}=:\varphi(x_{d}).\]
The function \(\varphi\) is continuous and increasing on \([-1,1]\), and fixes \(\pm 1\). Computing its value at zero gives \(\varphi(0)=\frac{2\beta}{1+\beta^{2}}\geq 1-\varepsilon_{\gamma}^{2}\), which proves the first point. Computing its value at \(-\beta\) gives \(\varphi(-\beta)=\beta\), which (almost) proves the second point. The second point is proved rigorously by a direct computation, noticing that
\[1-\varphi(-1+C\varepsilon_{\gamma})=\frac{\varepsilon_{\gamma}}{C}\frac{1-C \varepsilon_{\gamma}/2}{1-(1-1/(2C))\varepsilon_{\gamma}}\leq\varepsilon_{ \gamma}/C.\]
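Both the normal form of Lemma 2.1 and the contraction behavior of Lemma 2.2 are easy to visualize numerically. The following Python sketch (in the plane, with \(b=0.9\,e_{2}\), so that \(\varepsilon_{\gamma}=0.1\)) is only meant as an illustration.

```python
import numpy as np

def tau(b, x):
    """The hyperbolic translation tau_b of Lemma 2.1."""
    b, x = np.asarray(b, float), np.asarray(x, float)
    num = (1 - b @ b) * x + (x @ x + 2 * x @ b + 1) * b
    return num / ((b @ b) * (x @ x) + 2 * x @ b + 1)

b = np.array([0.0, 0.9])                    # gamma(o) = b, so eps_gamma = 0.1
print(tau(b, np.zeros(2)))                  # tau_b(0) = b
theta = np.linspace(0, 2 * np.pi, 12, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
images = np.array([tau(b, x) for x in circle])
print(np.round(np.linalg.norm(images, axis=1), 12))  # all 1: the unit circle is preserved
print(np.round(np.linalg.norm(images - b / np.linalg.norm(b), axis=1), 3))
# most of the circle is sent close to the pole b/|b|; only points near the
# antipode of b resist, as in Lemma 2.2.
```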
Finally, let us recall a well known way to see \(\text{Mob}(B^{d})\) as a group of matrices.
**Lemma 2.3**.: _Let \(q:\mathbb{R}^{d+1}\rightarrow\mathbb{R}\) be the quadratic form \(q(t,\omega):=-t^{2}+\sum_{i}\omega_{i}^{2}\) on \(\mathbb{R}^{d+1}\). We denote by \(SO(d,1)\) the set of linear maps with determinant one that preserve \(q\). Let \(H:=\{(t,\omega)\in\mathbb{R}\times\mathbb{R}^{d}\ |\ q(t,\omega)=-1,\ t>0\}.\) Define the stereographic projection \(\zeta:B^{d}\to H\) by \(\zeta(x):=\left(\frac{1+|x|^{2}}{1-|x|^{2}},\frac{2x}{1-|x|^{2}}\right)\). Then, for any \(\gamma\in\text{Mob}(B^{d})\), the map \(\zeta\gamma\zeta^{-1}:H\to H\) is the restriction to \(H\) of an element of \(SO(d,1)\)._
Proof.: It suffices to check the lemma when \(\gamma\) is a rotation or a hyperbolic translation. A direct computation shows that, when \(\Omega\in SO(d,\mathbb{R})\), then \(\zeta\Omega\zeta^{-1}\) is a rotation leaving invariant the \(t\) coordinate, and so it is trivially an element of \(SO(d,1)\). We now do the case where \(\gamma=\tau_{\beta e_{d}}\) is a hyperbolic translation. We denote by \(x\) the variable in \(B^{d}\) and \((t,\omega)\) the variables in \(\mathbb{R}^{d+1}\). The expression \(\zeta(x)=(t,\omega)\) gives
\[\frac{1+|x|^{2}}{1-|x|^{2}}=t,\quad\frac{2x}{1-|x|^{2}}=\omega,\text{ and }\quad\zeta^{-1}(t,\omega)=\frac{\omega}{1+t}.\]
For \(\alpha\in\mathbb{R}\), denote \(s_{\alpha}:=\sinh(\alpha)\) and \(c_{\alpha}:=\cosh(\alpha)\). There exists \(\alpha\) such that \(\beta=s_{\alpha}/(c_{\alpha}+1)=(c_{\alpha}-1)/s_{\alpha}\). For this \(\alpha\), we also have \(\beta^{2}=(c_{\alpha}-1)/(c_{\alpha}+1)\), and \(1-\beta^{2}=2/(c_{\alpha}+1)\). Now, we see that
\[\tau_{\beta e_{d}}(x)=\frac{(1-\beta^{2})x+(|x|^{2}+2x_{d}\beta+1)\beta e_{d}}{\beta^{2}|x|^{2}+2x_{d}\beta+1}=\frac{\frac{2}{c_{\alpha}+1}x+(|x|^{2}+2x_{d}\frac{c_{\alpha}-1}{s_{\alpha}}+1)\frac{s_{\alpha}}{c_{\alpha}+1}e_{d}}{\frac{c_{\alpha}-1}{c_{\alpha}+1}|x|^{2}+2x_{d}\frac{s_{\alpha}}{c_{\alpha}+1}+1}\] \[=\frac{2x+(s_{\alpha}(1+|x|^{2})+2x_{d}(c_{\alpha}-1))e_{d}}{1-|x|^{2}+c_{\alpha}(1+|x|^{2})+2s_{\alpha}x_{d}}=\frac{\omega+(s_{\alpha}t+(c_{\alpha}-1)\omega_{d})\,e_{d}}{1+c_{\alpha}t+s_{\alpha}\omega_{d}}\] \[=\zeta^{-1}\left(c_{\alpha}t+s_{\alpha}\omega_{d},\omega+(s_{\alpha}t+(c_{\alpha}-1)\omega_{d})\,e_{d}\right),\]
and so \(\zeta\tau_{\beta e_{d}}\zeta^{-1}(t,\omega)=\left(c_{\alpha}t+s_{\alpha}\omega_{d},\ \omega+(s_{\alpha}t+(c_{\alpha}-1)\omega_{d})\,e_{d}\right)\) is indeed linear in \((t,\omega)\). In this form, checking that \(\zeta\tau_{\beta e_{d}}\zeta^{-1}\in SO(d,1)\) is immediate.
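As a numerical sanity check of Lemma 2.3 (a sketch only: the matrix is recovered by evaluating \(\zeta\gamma\zeta^{-1}\) at \(d+1\) points of \(H\) rather than from the closed formula above), one can verify that the recovered matrix preserves \(q\).

```python
import numpy as np

def tau(b, x):
    # hyperbolic translation of Lemma 2.1, as in the previous sketch
    b, x = np.asarray(b, float), np.asarray(x, float)
    return ((1 - b @ b) * x + (x @ x + 2 * x @ b + 1) * b) / ((b @ b) * (x @ x) + 2 * x @ b + 1)

def zeta(x):
    # the map from the ball model to the hyperboloid H
    x = np.asarray(x, float)
    r2 = x @ x
    return np.concatenate([[(1 + r2) / (1 - r2)], 2 * x / (1 - r2)])

def zeta_inv(tw):
    return tw[1:] / (1 + tw[0])

d, b = 2, np.array([0.0, 0.6])
pts = [zeta(p) for p in (np.array([0.3, 0.0]), np.array([0.0, 0.3]), np.array([0.2, -0.1]))]
P = np.stack(pts, axis=1)                               # d+1 independent points of H
Q = np.stack([zeta(tau(b, zeta_inv(p))) for p in pts], axis=1)
M = Q @ np.linalg.inv(P)                                # matrix of zeta . tau_b . zeta^{-1}
J = np.diag([-1.0, 1.0, 1.0])
print(np.round(M.T @ J @ M - J, 10))                    # ~0, so M preserves q
```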
**Remark 2.4**.: From now on, we will allow to directly identify elements of \(\operatorname{Mob}(B^{d})\) with matrices in \(SO(d,1)\). (By continuity of \(\gamma\mapsto\zeta\gamma\zeta^{-1}\), we even know that thoses matrices lies in \(SO_{0}(d,1)\), the connected component of the identity in \(SO(d,1)\)). It follows from the previous explicit computations that, for any matrix norm on \(SO(d,1)\), and for any \(\gamma\in\operatorname{Mob}(B^{d})\) such that \(|\gamma(o)|\geq c_{0}\), there exists \(C_{0}\) only depending on \(c_{0}\) such that
\[\|\gamma\|\leq C_{0}\varepsilon_{\gamma}^{-1}.\]
### The Gibbs cocycle
In this paragraph, we introduce our geometric setting. For an introduction to geometry in negative (non-constant) curvature, the interested reader may refer to [1], or to the first chapters of [12].
Let \(M=\mathbb{H}^{d}/\Gamma\) be a hyperbolic manifold of dimension \(d\), where \(\Gamma\subset Iso^{+}(\mathbb{H}^{d})\) denotes a non-elementary and discrete group of isometries of the hyperbolic space without torsion that preserves the orientation. Let \(T^{1}M\) denotes the unit tangent bundle of \(M\), and denote by \(p:T^{1}M\to M\) the usual projection. The projection lift to a \(\Gamma\)-invariant map \(\tilde{p}:T^{1}\mathbb{H}^{d}\to\mathbb{H}^{d}\). Fix \(F:T^{1}M\to\mathbb{R}\) a Holder map: we will call it a potential. The potential \(F\) lift to a \(\Gamma\)-invariant map \(\tilde{F}:T^{1}\mathbb{H}^{d}\to\mathbb{R}\). All future constants appearing in the paper will implicitely depend on \(\Gamma\) and \(F\).
To be able to use our previous results about Moebius transformations, we will work in the conformal ball model for a while. In this model, we can think of \(\mathbb{H}^{d}\) as being the unit ball \(B^{d}\) equipped with the metric \(ds^{2}:=\frac{4\|dx\|^{2}}{(1-\|x\|^{2})^{2}}\). The ideal boundary \(\partial_{\infty}\mathbb{H}^{d}\) (see [1] for a definition) of \(\mathbb{H}^{d}\) is then naturally identified with \(\mathbb{S}^{d-1}\), and its group of orientation-preserving isometries with \(\operatorname{Mob}(B^{d})\). On the ideal boundary, there is a natural family of distances \((d_{x})_{x\in\mathbb{H}^{d}}\) called visual distances (seen from \(x\)), defined as follows:
\[d_{x}(\xi,\eta):=\lim_{t\to+\infty}\exp\left(-\frac{1}{2}\left(d(x,\xi_{t})+d( x,\eta_{t})-d(\xi_{t},\eta_{t})\right)\right)\in[0,1],\]
where \(\xi_{t}\) and \(\eta_{t}\) are any geodesic rays ending at \(\xi\) and \(\eta\). To get the intuition behind this quantity, picture a finite tree with root \(x\) and think of \(\xi\) and \(\eta\) as leaves in this tree.
**Lemma 2.5** ([12] page 15 and [13] lemma A.5).: _The visual distances are all equivalent and induce the usual euclidean topology on \(\mathbb{S}^{d-1}\simeq\partial_{\infty}\mathbb{H}^{d}\). More precisely:_
\[\forall x,y\in\mathbb{H}^{d},\forall\xi,\eta\in\partial_{\infty}\mathbb{H}^{d}, \ \ e^{-d(x,y)}\leq\frac{d_{x}(\xi,\eta)}{d_{y}(\xi,\eta)}\leq e^{d(x,y)}.\]
_In the ball model, the visual distance from the center of the ball is the sine of (half of) the angle._
The sphere at infinity \(\partial_{\infty}\mathbb{H}^{d}\) plays an important role in the study of \(\Gamma\). For any \(x\in\mathbb{H}^{d}\), the orbit \(\Gamma x\) accumulates on \(\mathbb{S}^{d-1}\) (for the euclidean topology) into a (fractal) _limit set_ denoted \(\Lambda_{\Gamma}\). This limit set is independent of \(x\). We will denote by \(\operatorname{Hull}(\Lambda_{\Gamma})\) the convex hull of the limit set: that is, the set of points \(x\in\mathbb{H}^{d}\) that lie on a geodesic starting and finishing in \(\Lambda_{\Gamma}\). Since \(\Gamma\) acts naturally on \(\Lambda_{\Gamma}\), \(\Gamma\) acts on \(\operatorname{Hull}(\Lambda_{\Gamma})\). Without loss of generality, we can assume that \(o\in\operatorname{Hull}(\Lambda_{\Gamma})\), and we will do so from now on. We will say that \(\Gamma\) is convex-cocompact if \(\Gamma\) is discrete, without torsion, and if \(\operatorname{Hull}(\Lambda_{\Gamma})/\Gamma\) is compact. In particular, in this paper, we allow \(M\) to be compact.
We will suppose throughout the paper that \(\Gamma\) is convex-cocompact. In this context, the set \(\Omega\Gamma=p^{-1}(\operatorname{Hull}\Lambda_{\Gamma}/\Gamma)\subset T^{1}M\) is compact, and it follows that \(\sup_{\Omega\Gamma}|F|<\infty\). In particular, \(\tilde{F}\) is bounded on \(\tilde{p}^{-1}(\operatorname{Hull}(\Lambda_{\Gamma}))\), which is going to allow us to get some control over line integrals involving \(F\). Recall the notion of line integral in this context: if \(x,y\in\mathbb{H}^{d}\) are distinct points, then there exists a unique unit speed geodesic joining \(x\) to \(y\), call it \(c_{x,y}\). We then define:
\[\int_{x}^{y}\tilde{F}:=\int_{0}^{d(x,y)}\tilde{F}(\dot{c}_{x,y}(s))ds.\]
Beware that if \(\tilde{F}(-v)\neq\tilde{F}(v)\) for some \(v\in T^{1}\mathbb{H}^{d}\), then \(\int_{x}^{y}\tilde{F}\) and \(\int_{y}^{x}\tilde{F}\) might not be equal.
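As a simple sanity check, for a constant potential \(\tilde{F}\equiv c\) the line integral only sees the length of the geodesic segment:
\[\int_{x}^{y}\tilde{F}=c\,d(x,y)=\int_{y}^{x}\tilde{F},\]
so the asymmetry mentioned above is invisible for constant potentials.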
We are ready to introduce the _Gibbs cocycle_ and recall some of its properties.
**Definition 2.6** ([17], page 39).: The following "Gibbs cocycle" \(C_{F}:\partial_{\infty}\mathbb{H}^{d}\times\mathbb{H}^{d}\times\mathbb{H}^{d} \rightarrow\mathbb{R}\) is well defined and continuous:
\[C_{F,\xi}(x,y):=\lim_{t\rightarrow+\infty}\left(\int_{y}^{\xi_{t}}\tilde{F}- \int_{x}^{\xi_{t}}\tilde{F}\right)\]
where \(\xi_{t}\) denotes any geodesic converging to \(\xi\).
**Remark 2.7**.: Notice that if \(\xi\) is the endpoint of the geodesic ray starting at \(x\) and passing through \(y\), then
\[C_{F,\xi}(x,y)=-\int_{x}^{y}\tilde{F}.\]
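To relate this cocycle to the classical setting, consider again a constant potential \(\tilde{F}\equiv c\), and write \(\beta_{\xi}(y,x):=\lim_{t\to+\infty}\left(d(y,\xi_{t})-d(x,\xi_{t})\right)\) for the Busemann cocycle. The definition directly gives
\[C_{c,\xi}(x,y)=\lim_{t\to+\infty}\big(c\,d(y,\xi_{t})-c\,d(x,\xi_{t})\big)=c\,\beta_{\xi}(y,x),\]
so for constant potentials the Gibbs cocycle is just a multiple of the Busemann cocycle, and the conformality relation of Theorem 2.10 below reduces to the classical one for Patterson-Sullivan densities.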
For \(A\subset\mathbb{H}^{d}\), we call "shadow of \(A\) seen from \(x\)" the set \(\mathcal{O}_{x}A\) of all \(\xi\in\partial_{\infty}\mathbb{H}^{d}\) such that the geodesic joining \(x\) to \(\xi\) intersects \(A\). (The letter \(\mathcal{O}\) stands for "Ombre" in French.)
**Proposition 2.8** ([17], prop 3.4 and 3.5).: We have the following estimates on the Gibbs cocycle.
1. For all \(R>0\), there exists \(C_{0}>0\) such that for all \(\gamma\in\Gamma\) and for all \(\xi\in\mathcal{O}_{o}B(\gamma(o),R)\) in the shadow of the (hyperbolic) ball \(B(\gamma(o),R)\) seen from \(o\), we have: \[\left|C_{F,\xi}(o,\gamma(o))+\int_{o}^{\gamma(o)}\tilde{F}\right|\leq C_{0}\]
2. There exists \(\alpha\in(0,1)\) and \(C_{0}>0\) such that, for all \(\gamma\in\Gamma\) and for all \(\xi,\eta\in\Lambda_{\Gamma}\) such that \(d_{o}(\xi,\eta)\leq e^{-d(o,\gamma(o))-2}\), \[|C_{F,\xi}(o,\gamma(o))-C_{F,\eta}(o,\gamma(o))|\leq C_{0}e^{\alpha d(o,\gamma (o))}d_{o}(\xi,\eta)^{\alpha}.\] (The hypothesis asking \(\xi,\eta\) to be very close can be understood as a hypothesis asking for the rays \([o,\xi[,[o,\eta[\) and \([\gamma(o),\xi[,[\gamma(o),\eta[\) to be close. This way, we can use the Holder regularity of \(\tilde{F}\) to get some control.)
### Patterson-Sullivan densities
In this paragraph, we recall the definition of \((\Gamma,F)\) Patterson-Sullivan densities, and we introduce a regularity condition. To begin with, recall the definition and some properties of the critical exponent of \((\Gamma,F)\).
**Definition 2.9** ([17], Lemma 3.3).: Recall that \(F\) is assumed to be Holder and that \(\Gamma\) is convex-cocompact. The critical exponent of \((\Gamma,F)\) is the quantity \(\delta_{\Gamma,F}\in\mathbb{R}\) defined by:
\[\delta_{\Gamma,F}:=\limsup_{n\rightarrow\infty}\,\frac{1}{n}\ln\sum_{\begin{subarray}{c}\gamma\in\Gamma\\ n-c<d(x,\gamma y)\leq n\end{subarray}}e^{\int_{x}^{\gamma y}\tilde{F}},\]
for any \(x,y\in\mathbb{H}^{d}\) and any \(c>0\). The critical exponent doesn't depend on the choice of \(x,y\) and \(c\).
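For instance, for the zero potential all the weights \(e^{\int\tilde{F}}\) are equal to \(1\), and the shell-type sum above reduces to
\[\delta_{\Gamma,0}=\limsup_{n\rightarrow\infty}\,\frac{1}{n}\ln\#\{\gamma\in\Gamma\ |\ n-c<d(x,\gamma y)\leq n\},\]
the classical critical exponent \(\delta_{\Gamma}\) of \(\Gamma\), that is, the exponential growth rate of the orbit \(\Gamma y\) seen from \(x\).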
**Theorem 2.10** ([15], section 3.6 and section 5.3).: _Let \(\Gamma\subset Iso^{+}(\mathbb{H}^{d})\) be convex-cocompact, and write \(M:=\mathbb{H}^{d}/\Gamma\). Let \(F:T^{1}M\to\mathbb{R}\) be a Holder regular potential. Then there exists a unique (up to a scalar multiple) family of finite nonzero measures \((\mu_{x})_{x\in\mathbb{H}^{d}}\) on \(\partial_{\infty}\mathbb{H}^{d}\) such that, for all \(\gamma\in\Gamma\), for all \(x,y\in\mathbb{H}^{d}\) and for all \(\xi\in\partial_{\infty}\mathbb{H}^{d}\):_
* \(\gamma_{*}\mu_{x}=\mu_{\gamma x}\)__
* \(d\mu_{x}(\xi)=e^{-C_{F-\delta_{\Gamma,F},\xi}(x,y)}d\mu_{y}(\xi)\)__
_Moreover, these measures are all supported on the limit set \(\Lambda_{\Gamma}\). We call them \((\Gamma,F)\)-Patterson Sullivan densities._
**Remark 2.11**.: Notice that Patterson-Sullivan densities only depend on the normalized potential \(F-\delta_{\Gamma,F}\). Since \(\delta_{\Gamma,F+\kappa}=\delta_{\Gamma,F}+\kappa\), replacing \(F\) by \(F-\delta_{\Gamma,F}\) allows us to work without loss of generality with potentials satisfying \(\delta_{\Gamma,F}=0\). We call such potentials _normalized_.
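The identity \(\delta_{\Gamma,F+\kappa}=\delta_{\Gamma,F}+\kappa\) used here can be checked directly from the shell description of the critical exponent: if \(n-c<d(x,\gamma y)\leq n\), then
\[\left|\int_{x}^{\gamma y}(\tilde{F}+\kappa)-\int_{x}^{\gamma y}\tilde{F}-\kappa n\right|=|\kappa|\,\big|d(x,\gamma y)-n\big|\leq|\kappa|c,\]
so the weights for \(F+\kappa\) differ from those for \(F\) by \(e^{\kappa n}\) up to a factor bounded uniformly in \(n\), and taking \(\frac{1}{n}\ln\) of the shell sums adds exactly \(\kappa\) to the limsup.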
The next estimate tells us, in a sense, that we can think of \(\mu_{o}\) as a measure of a "fractal solid angle", weighted by the potential. This is better understood by recalling that, since the area of a hyperbolic sphere of large radius \(r\) grows like a power of \(e^{r}\), the solid angle of an object of diameter \(1\) lying in that sphere decays like a power of \(e^{-r}\). In the following "Shadow lemma", the object is a ball \(B(y,R)\), at distance \(d(x,y)\) from an observer at \(x\).
**Proposition 2.12** (Shadow Lemma, [15] Lemma 3.10).: Let \(R>0\) be large enough. There exists \(C>0\) such that, for all \(x,y\in\operatorname{Hull}(\Lambda_{\Gamma})\):
\[C^{-1}e^{\int_{x}^{y}(\tilde{F}-\delta_{\Gamma,F})}\leq\mu_{x}\left(\mathcal{ O}_{x}B(y,R)\right)\leq Ce^{\int_{x}^{y}(\tilde{F}-\delta_{\Gamma,F})}.\]
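For orientation, for the zero potential one has \(\int_{x}^{y}(\tilde{F}-\delta_{\Gamma,F})=-\delta_{\Gamma}\,d(x,y)\), and the proposition specializes to Sullivan's classical shadow lemma:
\[C^{-1}e^{-\delta_{\Gamma}d(x,y)}\leq\mu_{x}\left(\mathcal{O}_{x}B(y,R)\right)\leq Ce^{-\delta_{\Gamma}d(x,y)}.\]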
**Definition 2.13**.: The shadow lemma calls for the following hypothesis: we say that the potential \(F\) satisfies the regularity assumptions (R) if \(F\) is Holder regular and if \(\sup_{\Omega\Gamma}F<\delta_{\Gamma,F}\).
**Remark 2.14**.: By Lemma 3.3 in [15], we see that we can construct potentials satisfying (R) as follows: choose some potential \(F_{0}\) satisfying (R) (for example, the constant potential) and then choose any Holder map \(E:T^{1}M\to\mathbb{R}\) satisfying \(2\sup_{\Omega\Gamma}|E|<\delta_{\Gamma,F_{0}}-\sup_{\Omega\Gamma}F_{0}\). Then \(F:=F_{0}+E\) satisfies the assumption (R). A similar assumption is introduced in [14].
The point of the assumption (R) is to ensure that the Patterson-Sullivan densities exhibit some regularity. This is possible because we have a tight control over the geometry of shadows.
**Lemma 2.15**.: _Let \(\Gamma\subset Iso^{+}(\mathbb{H}^{d})\) be convex-cocompact, and let \(F:T^{1}(\mathbb{H}^{d}/\Gamma)\to\mathbb{R}\) satisfy the regularity assumptions (R). Let \(\delta_{reg}\in(0,1)\) such that \(\delta_{reg}<\delta_{\Gamma,F}-\sup_{\Omega\Gamma}F\). Let \(\mu\) denote some \((\Gamma,F)\)-Patterson-Sullivan density. Then_
\[\exists C>0,\ \forall\xi\in\partial_{\infty}\mathbb{H}^{d},\ \forall r>0,\ \mu(B(\xi,r))\leq Cr^{\delta_{reg}},\]
_where the ball is in the sense of some visual distance._
Proof.: First of all, since for all \(p,q\in\mathbb{H}^{d}\) the map \(\xi\in\partial_{\infty}\mathbb{H}^{d}\mapsto e^{C_{F,\xi}(p,q)}\) is continuous and since the ideal boundary is compact, we can easily reduce our statement to the case where \(\mu\) is a Patterson-Sullivan density based at the center of the ball \(o\). Moreover, since all the visual distances are equivalent, one can suppose that we are working with the visual distance based at \(o\) too. Finally, since the support of \(\mu_{o}\) is \(\Lambda_{\Gamma}\), we can suppose without loss of generality that \(B(\xi,r)\cap\Lambda_{\Gamma}\neq\emptyset\). Since in this case there exists some \(\tilde{\xi}\in\Lambda_{\Gamma}\) such that \(B(\xi,r)\subset B(\tilde{\xi},2r)\), we may further suppose without loss of generality that \(\xi\in\Lambda_{\Gamma}\).
Now fix \(\xi\in\Lambda_{\Gamma}\) and \(r>0\). Let \(x\in\operatorname{Hull}(\Lambda_{\Gamma})\) lie on the ray starting from \(o\) and ending at \(\xi\). Let \(\rho\in[0,1]\), let \(y\in\mathbb{H}^{d}\) be such that \([o,y]\) is tangent to the sphere \(S(x,\rho)\), and let \(\eta\) be the endpoint of the ray starting from \(o\) and going through \(y\). The hyperbolic law of sines (see Lemmas A.4 and A.5 in [17]) allows us to compute directly:
\[d_{o}(\xi,\eta)=\frac{1}{2}\cdot\frac{\sinh(\rho)}{\sinh(d(o,x))}.\]
It follows that there exists \(C>0\) such that for all \(\xi\in\Lambda_{\Gamma}\) and all \(r>0\), there exists \(x\in\operatorname{Hull}(\Lambda_{\Gamma})\) such that \(e^{-d(o,x)}\leq Cr\) and \(B_{o}(\xi,r)\subset\mathcal{O}_{o}B(x,1)\). The desired bound follows from the shadow lemma, since the geodesic segment joining \(o\) and \(x\) lies in \(\operatorname{Hull}(\Lambda_{\Gamma})\).
The regularity of the Patterson-Sullivan densities is going to allow us to state a second version of the contraction lemma. First, let us introduce a bit of notation. We fix, for the duration of the paper, a large enough constant \(C_{\Gamma}>0\). For \(\gamma\in\Gamma\), we define \(\kappa(\gamma):=d(o,\gamma o)\), \(r_{\gamma}:=e^{-\kappa(\gamma)}\) and \(B_{\gamma}:=\mathcal{O}_{o}B(\gamma o,C_{\Gamma})\). By the hyperbolic law of sines, the radius of \(B_{\gamma}\) is \(\sinh(C_{\Gamma})/\sinh(\kappa(\gamma))\) (Lemma A.5 in [10]). If \(C_{\Gamma}\) is chosen large enough and when \(\kappa(\gamma)\) is large, we get a radius of \(\sim e^{C_{\Gamma}}r_{\gamma}\geq r_{\gamma}\). We have the following covering result.
**Lemma 2.16** ([10], Lemma A.8).: _Define \(r_{n}:=e^{-4C_{\Gamma}n}\), and let \(S_{n}:=\{\gamma\in\Gamma\ |\ e^{-2C_{\Gamma}}r_{n}\leq r_{\gamma}<r_{n}\}\). For all \(n\geq 1\), the family \(\{B_{\gamma}\}_{\gamma\in S_{n}}\) covers \(\Lambda_{\Gamma}\). Moreover, there exists \(C>0\) such that:_
\[\forall n,\forall\xi\in\Lambda_{\Gamma},\ \#\{\gamma\in S_{n}\ |\ \xi\in B_{\gamma}\}\leq C.\]
Now, we are ready to state our second contraction lemma. Since the potential is not assumed to be bounded, a lot of technical bounds will only be achieved by working on the limit set or on its convex hull. One of the main goals of the second contraction lemma is then to replace \(x_{\gamma}^{m}\) by a point \(\eta_{\gamma}\) lying in the limit set.
**Lemma 2.17** (Second contraction lemma).: _Let \(\Gamma\subset\text{Iso}^{+}(\mathbb{H}^{d})\) be convex-cocompact. Let \(F:T^{1}(\mathbb{H}^{d}/\Gamma)\to\mathbb{R}\) be a potential satisfying (R). Denote by \(\mu_{o}\in\mathcal{P}(\Lambda_{\Gamma})\) the associated Patterson-Sullivan density at \(o\). Then there exists a family of points \((\eta_{\gamma})_{\gamma\in\Gamma}\) such that, for any \(\gamma\in\Gamma\) with large enough \(\kappa(\gamma)\), we have \(\eta_{\gamma}\in\Lambda_{\Gamma}\cap B_{\gamma}\), and moreover:_
1. _there exists_ \(c>0\) _independent of_ \(\gamma\) _such that_ \(d_{o}(\xi,\eta_{\gamma})\geq r_{\gamma}/2\Rightarrow d_{o}(\gamma^{-1}\xi, \gamma^{-1}\eta_{\gamma})\geq c\)_,_
2. _for all_ \(\varepsilon_{0}\in(0,\delta_{\text{reg}})\)_, there exists_ \(C\) _independent of_ \(\gamma\) _such that:_ \[\int_{\Lambda_{\Gamma}}d_{o}(\gamma(\xi),\eta_{\gamma})^{\varepsilon_{0}}d \mu_{o}(\xi)\leq Cr_{\gamma}^{\varepsilon_{0}}.\]
Proof.: Recalling that the visual distance and the euclidean distance are equivalent on the unit sphere, if we forget about \(\eta_{\gamma}\) and replace it by \(x_{\gamma}^{m}\) instead, then the first point is a direct corollary of the first contraction lemma. We just have to check two points: first, since \(\Gamma\) is discrete and without torsion, there exists \(c_{0}>0\) such that for all \(\gamma\in\Gamma\setminus\{Id\}\), \(d(o,\gamma o)>c_{0}\). The second point is to check that the orders of magnitude of \(r_{\gamma}\) and \(\varepsilon_{\gamma}\) (a quantity introduced in the first contraction lemma) are compatible. This can be checked using an explicit formula relating the hyperbolic distance with the euclidean one in the ball model: \(r_{\gamma}=e^{-\kappa(\gamma)}=\frac{1-|\gamma(o)|}{1+|\gamma(o)|}\sim \varepsilon_{\gamma}/2\) (see [11], exercise 4.5.1). We even have a large safety margin for the first statement to hold (recall that the critical scale is \(\sim\varepsilon_{\gamma}^{2}\)).
We will use the strong contraction properties of \(\Gamma\) to construct a point \(\eta_{\gamma}\in\Lambda_{\Gamma}\) very close to \(x_{\gamma}^{m}\). Since \(\Gamma\) is convex-cocompact, we know in particular that \(\text{diam}(\Lambda_{\Gamma})>0\). Now let \(\gamma\in\Gamma\) such that \(\kappa(\gamma)\) is large enough. The first contraction lemma says that there exists \(A_{\gamma}\subset\partial_{\infty}\mathbb{H}^{d}\) with \(\text{diam}(A_{\gamma})\leq Cr_{\gamma}\) such that \(\text{diam}(\gamma(A_{\gamma}^{c}))\leq r_{\gamma}/10\). It follows that we can find a point \(\tilde{\eta}_{\gamma}\in\Lambda_{\Gamma}\) such that \(d(A_{\gamma},\tilde{\eta}_{\gamma})>\text{diam}(\Lambda_{\Gamma})/3\). Fixing \(\eta_{\gamma}:=\gamma(\tilde{\eta}_{\gamma})\) gives us a point satisfying \(\eta_{\gamma}\in\Lambda_{\Gamma}\) and \(d_{o}(\eta_{\gamma},x_{\gamma}^{m})\lesssim r_{\gamma}^{2}\). Hence \(\eta_{\gamma}\in B_{\gamma}\), and moreover, any point \(\xi\) satisfying \(d_{o}(\xi,\eta_{\gamma})\geq r_{\gamma}/2\) will satisfy \(\gamma^{-1}(\xi)\in A_{\gamma}\), and so \(d_{o}(\gamma^{-1}(\xi),\gamma^{-1}(\eta_{\gamma}))\geq\text{diam}(\Lambda_{ \Gamma})/3\). This proves the first point.
For the second point: since the set \(A_{\gamma}\subset\partial_{\infty}\mathbb{H}^{d}\) is of diameter \(\leq Cr_{\gamma}\) and satisfies that, for all \(\xi\notin A_{\gamma}\), \(d_{o}(\gamma(\xi),x_{\gamma}^{m})\leq r_{\gamma}/10\), the upper regularity of \(\mu_{o}\) yields
\[\int_{\Lambda_{\Gamma}}d_{o}(\gamma(\xi),x_{\gamma}^{m})^{\varepsilon_{0}}d \mu_{o}(\xi)\leq C\mu_{o}(A_{\gamma})+\int_{A_{\gamma}^{c}}Cr_{\gamma}^{ \varepsilon_{0}}d\mu_{o}\leq Cr_{\gamma}^{\varepsilon_{0}}.\]
The desired bound follows from \(d_{o}(x_{\gamma}^{m},\eta_{\gamma})\lesssim r_{\gamma}\), using the triangle inequality.
## 3 Patterson-Sullivan densities are stationary measures
### Stationary measures
In this subsection we define stationary measures and state our main theorem.
**Definition 3.1**.: Let \(\nu\in\mathcal{P}(\Gamma)\) be a probability measure on \(\Gamma\subset SO(d,1)\). Let \(\mu\in\mathcal{P}(\partial_{\infty}\mathbb{H}^{d})\). We say that \(\mu\) is \(\nu\)-stationary if:
\[\mu=\nu*\mu:=\int_{\Gamma}\gamma_{*}\mu\ d\nu(\gamma).\]
Moreover, we say that the measure \(\nu\) has exponential moment if there exists \(\varepsilon>0\) such that \(\int_{\Gamma}\|\gamma\|^{\varepsilon}d\nu(\gamma)<\infty\). Finally, we denote by \(\Gamma_{\nu}\) the subgroup of \(\Gamma\) generated by the support of \(\nu\).
**Theorem 3.2**.: _Let \(\Gamma\subset Iso^{+}(\mathbb{H}^{d})\) be a convex-cocompact group, and let \(F:T^{1}(\mathbb{H}^{d}/\Gamma)\to\mathbb{R}\) be a potential on the unit tangent bundle satisfying (R). Let \(x\in\mathbb{H}^{d}\) and let \(\mu_{x}\in\mathcal{P}(\Lambda_{\Gamma})\) denote the \((\Gamma,F)\) Patterson-Sullivan density from \(x\). Then there exists \(\nu\in\mathcal{P}(\Gamma)\) with exponential moment (seen as a random walk in \(SO(d,1)\)) such that \(\mu_{x}\) is \(\nu\)-stationary and such that \(\Gamma_{\nu}=\Gamma\)._
**Remark 3.3**.: This result for \(d=2\) was announced without proof by Jialun Li in [11] (see remark 1.9). A proof in the case of constant potentials is done in the appendix of [10]. Our strategy is inspired by this appendix. For more details on stationary measures, see the references therein.
First of all, a direct computation allows us to see that if \((\mu_{x})_{x\in\mathbb{H}^{d}}\) are \((\Gamma,F)\) Patterson-Sullivan densities, then, for any \(\eta\in SO_{0}(d,1)\), \((\eta_{*}\mu_{\eta^{-1}x})_{x\in\mathbb{H}^{d}}\) are \((\eta\Gamma\eta^{-1},\eta_{*}F)\) Patterson-Sullivan densities. This remark allows us to reduce our theorem to the case where the basepoint \(x\) is the center of the ball \(o\). Our goal is to find \(\nu\in\mathcal{P}(\Gamma)\) such that \(\nu*\mu_{o}=\mu_{o}\). Assuming that \(F\) is normalized, this can be rewritten as follows:
\[d\mu_{o}(\xi)=\sum_{\gamma}\nu(\gamma)d(\gamma_{*}\mu_{o})(\xi)=\sum_{\gamma} \nu(\gamma)e^{C_{F,\xi}(o,\gamma o)}d\mu_{o}(\xi).\]
Hence, \(\mu_{o}\) is \(\nu\)-stationary if
\[\sum_{\gamma\in\Gamma}\nu(\gamma)f_{\gamma}=1\ \ \text{on}\ \Lambda_{\Gamma},\]
where
\[f_{\gamma}(\xi):=e^{C_{F,\xi}(o,\gamma o)}.\]
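For completeness, the second equality above unfolds as follows: by Theorem 2.10 (with \(F\) normalized), \(\gamma_{*}\mu_{o}=\mu_{\gamma o}\), so
\[d(\gamma_{*}\mu_{o})(\xi)=d\mu_{\gamma o}(\xi)=e^{-C_{F,\xi}(\gamma o,o)}d\mu_{o}(\xi)=e^{C_{F,\xi}(o,\gamma o)}d\mu_{o}(\xi)=f_{\gamma}(\xi)\,d\mu_{o}(\xi),\]
where the middle step uses the antisymmetry \(C_{F,\xi}(\gamma o,o)=-C_{F,\xi}(o,\gamma o)\), which is immediate from Definition 2.6.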
**Remark 3.4**.: Our main goal is to find a way to decompose the constant function \(1\) as a sum of \(f_{\gamma}\). Here is the intuition behind our proof.
Define \(r_{\gamma}^{-F}:=e^{-\int_{o}^{\gamma o}\tilde{F}}=f_{\gamma}(x_{\gamma}^{m})\simeq f_{\gamma}(\eta_{\gamma})\). The first thing to notice is that \(f_{\gamma}\) looks like an approximation of unity centered at \(x_{\gamma}^{m}\). Renormalizing yields the intuitive statement \(r_{\gamma}^{F}f_{\gamma}\sim\mathbb{1}_{B_{\gamma}}\). The idea is that this approximation gets better as \(\kappa(\gamma)\) becomes large. Once this observation is made, there is a natural "n-th approximation" operator that can be defined. For some positive function \(R\), one can write:
\[R\simeq\sum_{\gamma\in S_{n}}R(\eta_{\gamma})\mathbb{1}_{B_{\gamma}}\simeq \sum_{\gamma\in S_{n}}R(\eta_{\gamma})r_{\gamma}^{F}f_{\gamma}=:P_{n}R.\]
Proving that the operator \(P_{n}\) does a good enough job at approximating some functions \(R\) is the content of the "approximation lemma" (Lemma 3.9). In particular, it is proved that, under some assumptions on \(R>0\), we have \(cR\leq P_{n}R\leq CR\).
The conclusion of the proof is then easy. We fix a constant \(\beta>0\) small enough so that \(\beta cR\leq\beta P_{n}R\leq R\). Then, we define by induction \(R_{0}=1\) and \(0<R_{n+1}:=R_{n}-\beta P_{n+1}R_{n}\leq(1-\beta c)R_{n}\). By induction, this gives \(R_{n}\leq(1-\beta c)^{n}\), and hence \(1=R_{0}-\lim_{n}R_{n}=\sum_{k}(R_{k}-R_{k+1})\) is a decomposition of \(1\) as a sum of the \(f_{\gamma}\) with positive coefficients.
### The approximation operator
First, we collect some results on \(f_{\gamma}\) that will allow us to think of it as an approximation of unity around \(x_{\gamma}^{m}\) with width \(r_{\gamma}\). The first point studies \(f_{\gamma}\) near \(x_{\gamma}^{m}\), the second point studies the decay of \(f_{\gamma}\) away from it, and the last point is a regularity estimate at the scale \(r_{\gamma}\). To quantify this decay, we recall the notion of _potential gap_ (or gap map).
**Definition 3.5**.: The following "potential gap" \(D_{F}:\mathbb{H}^{d}\times\partial_{\infty}\mathbb{H}^{d}\times\partial_{\infty}\mathbb{H}^{d}\to[0,+\infty]\) is well defined and continuous:
\[D_{F,x}(\eta,\xi):=\exp\frac{1}{2}\lim_{t\to\infty}\left(\int_{x}^{\xi_{t}}\tilde {F}+\int_{\eta_{t}}^{x}\tilde{F}-\int_{\eta_{t}}^{\xi_{t}}\tilde{F}\right)\]
where \((\eta_{t})\) and \((\xi_{t})\) denote any unit speed geodesic rays converging to \(\eta\) and \(\xi\).
Under our assumptions, the gap map (for some fixed \(x\)) behaves like a distance on \(\Lambda_{\Gamma}\): see [11], section 3.4 for details. Finally, we denote by \(\iota(v)=-v\) the _flip map_ on the unit tangent bundle.
**Lemma 3.6** (Properties of \(f_{\gamma}\)).: _There exists \(C_{\text{reg}}\geq 2\) (independent of \(C_{\Gamma}\)) and \(C_{0}\geq 1\) (depending on \(C_{\Gamma}\)) such that, for all \(\gamma\in\Gamma\):_
1. _For all_ \(\xi\in B_{\gamma}\)_,_ \[r_{\gamma}^{F}f_{\gamma}(\xi)\in[e^{-C_{0}},e^{C_{0}}].\]
2. _For all_ \(\xi\in\Lambda_{\Gamma}\) _such that_ \(d_{o}(\xi,\eta_{\gamma})\geq r_{\gamma}/2\)_, writing_ \(r_{\gamma}^{-F\circ\iota}:=e^{-\int_{o}^{\gamma o}\tilde{F}\circ\iota}\)_,_ \[C_{0}^{-1}D_{F,o}(\xi,\eta_{\gamma})^{-2}\leq r_{\gamma}^{-F\circ\iota}f_{\gamma}(\xi)\leq C_{0}D_{F,o}(\xi,\eta_{\gamma})^{-2}.\]
3. _For all_ \(\xi,\eta\in\Lambda_{\Gamma}\) _such that_ \(d_{o}(\xi,\eta)\leq r_{\gamma}/e^{2}\)_,_ \[|f_{\gamma}(\xi)/f_{\gamma}(\eta)-1|\leq C_{\text{reg}}\cdot d_{o}(\xi,\eta)^{ \alpha}r_{\gamma}^{-\alpha}.\]
Proof.: Recall that \(f_{\gamma}(x_{\gamma}^{m})=r_{\gamma}^{-F}\): the first point is then a consequence of Proposition 2.8. The third point also follows directly without difficulty. For the second item, recall that \(\eta_{\gamma}\in B_{\gamma}\), so that by the same argument (applied to the Holder potential \(F\circ\iota\)) we get
\[r_{\gamma}^{-F\circ\iota}\simeq e^{C_{F\circ\iota,\eta_{\gamma}}(o,\gamma o)}.\]
Then, a direct computation yields:
\[f_{\gamma}(\xi)r_{\gamma}^{-F\circ\iota}\simeq\exp\lim_{t\to\infty}\left(\left(\int _{\gamma o}^{\xi_{t}}\tilde{F}-\int_{o}^{\xi_{t}}\tilde{F}\right)-\left(\int _{(\eta_{\gamma})_{t}}^{o}\tilde{F}-\int_{(\eta_{\gamma})_{t}}^{\gamma o}\tilde{F} \right)\right)\]
\[=\lim_{t\to\infty}\exp\left(\left(-\int_{o}^{\xi_{t}}\tilde{F}-\int_{(\eta_{ \gamma})_{t}}^{o}\tilde{F}+\int_{(\eta_{\gamma})_{t}}^{\xi_{t}}\tilde{F}\right)+\left( \int_{\gamma o}^{\xi_{t}}\tilde{F}+\int_{(\eta_{\gamma})_{t}}^{\gamma o}\tilde{F}- \int_{(\eta_{\gamma})_{t}}^{\xi_{t}}\tilde{F}\right)\right)\]
\[=\frac{D_{F,\gamma o}(\eta_{\gamma},\xi)^{2}}{D_{F,o}(\eta_{\gamma},\xi)^{2}}.\]
Under our regularity hypothesis (R), and because \(\Gamma\) is convex-cocompact and \(\xi,\eta\in\Lambda_{\Gamma}\), it is known that there exists \(c_{0}>0\) such that \(d_{\gamma o}(\xi,\eta_{\gamma})^{c_{0}}\leq D_{F,\gamma o}(\eta_{\gamma},\xi)\leq 1\) (see [11], page 56). The second contraction lemma allows us to conclude, since \(d_{\gamma o}(\xi,\eta_{\gamma})=d_{o}(\gamma^{-1}\xi,\gamma^{-1}(\eta_{\gamma }))\geq c\) under the hypothesis \(d_{o}(\xi,\eta_{\gamma})\geq r_{\gamma}/2\).
To get further control over the decay rate of \(f_{\gamma}\) away from \(\eta_{\gamma}\), the following "role reversal" result will be helpful.
**Lemma 3.7** (symmetry).: _Let \(n\in\mathbb{N}\) and \(\gamma\in S_{n}\). Since \(\{B_{\gamma}\}_{\gamma\in S_{n}}\) covers \(\Lambda_{\Gamma}\), and by choosing \(C_{\Gamma}\) larger if necessary, we know that for every \(\eta\in\Lambda_{\Gamma}\) there exists \(\tilde{\gamma}_{\eta}\in S_{n}\) such that \(\eta\in B_{\tilde{\gamma}_{\eta}}\) and \(d(\eta,\eta_{\tilde{\gamma}_{\eta}})\leq C_{\text{reg}}^{-2/\alpha}r_{\gamma}\). Suppose that \(d_{o}(\eta,\eta_{\gamma})\geq r_{\gamma}\). Then:_
\[C^{-1}f_{\tilde{\gamma}_{\eta}}(\eta_{\gamma})\leq f_{\gamma}(\eta)\leq Cf_{ \tilde{\gamma}_{\eta}}(\eta_{\gamma})\]
_for some constant \(C\) independent of \(n\), \(\gamma\) and \(\eta\)._
Proof.: First of all, by the third point of the previous lemma, and since \(d(\eta,\eta_{\tilde{\gamma}_{\eta}})\leq C_{\text{reg}}^{-2/\alpha}r_{\gamma}\), we can write \(f_{\gamma}(\eta)/f_{\gamma}(\eta_{\tilde{\gamma}_{\eta}})\in[1/2,3/2]\). Then, by the previous lemma again, we see that
\[\frac{f_{\gamma}(\eta)}{f_{\tilde{\gamma}_{\eta}}(\eta_{\gamma})}\simeq\frac{f_ {\gamma}(\eta_{\tilde{\gamma}_{\eta}})}{f_{\tilde{\gamma}_{\eta}}(\eta_{\gamma})}\simeq \frac{D_{F,o}(\eta_{\tilde{\gamma}_{\eta}},\eta_{\gamma})^{2}}{D_{F,o}(\eta_{ \gamma},\eta_{\tilde{\gamma}_{\eta}})^{2}}\simeq 1,\]
where we used the quasi symmetry of the gap map ([11], page 47).
We are ready to introduce the \(n\)-th approximation operator. For some positive function \(R\) : \(\Lambda_{\Gamma}\to\mathbb{R}_{+}^{*}\), define, on \(\partial_{\infty}\mathbb{H}^{d}\), the following positive function:
\[P_{n}R(\eta):=\sum_{\gamma\in S_{n}}R(\eta_{\gamma})r_{\gamma}^{F}f_{\gamma}( \eta).\]
The function \(P_{n}R\) has the regularity of \(f_{\gamma}\) for \(\gamma\in S_{n}\).
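As a quick heuristic check of the name "approximation operator", take \(R\equiv 1\) and \(\eta\in\Lambda_{\Gamma}\): by Lemma 2.16 there is at least one (and at most a bounded number of) \(\gamma\in S_{n}\) with \(\eta\in B_{\gamma}\), and for such \(\gamma\) Lemma 3.6 (1) gives \(r_{\gamma}^{F}f_{\gamma}(\eta)\in[e^{-C_{0}},e^{C_{0}}]\), so that
\[P_{n}1(\eta)\geq\sum_{\begin{subarray}{c}\gamma\in S_{n}\\ \eta\in B_{\gamma}\end{subarray}}r_{\gamma}^{F}f_{\gamma}(\eta)\geq e^{-C_{0}}.\]
The remaining terms, with \(\eta\notin B_{\gamma}\), are tails controlled through Lemma 3.6 (2); the approximation lemma below turns this heuristic into a two-sided bound for a general \(R\).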
**Lemma 3.8**.: _Choosing \(C_{\Gamma}\) larger if necessary, the following hold. Let \(n\in\mathbb{N}\) and let \(\xi,\eta\in\Lambda_{\Gamma}\) such that \(d_{o}(\xi,\eta)\leq r_{n+1}\). Then:_
\[\left|\frac{P_{n}R(\xi)}{P_{n}R(\eta)}-1\right|\leq\frac{1}{2}d_{o}(\xi,\eta) ^{\alpha}r_{n+1}^{-\alpha}.\]
Proof.: The regularity estimates on \(f_{\gamma}\) and the positivity of \(R\) yields:
\[|P_{n}R(\xi)-P_{n}R(\eta)|\leq\sum_{\gamma\in S_{n}}R(\eta_{\gamma})r_{\gamma} ^{F}|f_{\gamma}(\xi)-f_{\gamma}(\eta)|\]
\[\leq C_{\rm reg}\sum_{\gamma\in S_{n}}R(\eta_{\gamma})r_{\gamma}^{F}f_{\gamma }(\eta)d_{o}(\xi,\eta)^{\alpha}r_{\gamma}^{-\alpha}\leq P_{n}R(\eta)\cdot C_{\rm reg }e^{-2\alpha C_{\Gamma}}d_{o}(\xi,\eta)^{\alpha}r_{n+1}^{-\alpha}.\]
The bound follows.
Now is the time where we combine all of our preliminary lemmas to prove our main technical lemma: the approximation operator does a good enough job at approximating on \(\Lambda_{\Gamma}\). Some natural hypotheses on \(R\) are required: the function to approximate has to be regular enough at scale \(r_{n}\), and has to have mild global variations (so that the decay of \(f_{\gamma}\) away from \(x_{\gamma}^{m}\) is still useful).
**Lemma 3.9** (Approximation lemma).: _Let \(\varepsilon_{0}\in(0,\delta_{reg})\) and \(C_{0}\geq 1\). Let \(n\in\mathbb{N}\), and let \(R:\Lambda_{\Gamma}\to\mathbb{R}\) be a positive function satisfying:_
1. _For_ \(\xi,\eta\in\Lambda_{\Gamma}\)_, if_ \(d_{o}(\xi,\eta)\leq r_{n+1}\)_, then_ \[\left|\frac{R(\xi)}{R(\eta)}-1\right|\leq\frac{1}{2}\left(\frac{d_{o}(\xi,\eta )}{r_{n+1}}\right)^{\alpha}.\]
2. _For_ \(\xi,\eta\in\Lambda_{\Gamma}\)_, if_ \(d_{o}(\xi,\eta)>r_{n+1}\)_, then_ \[R(\xi)/R(\eta)\leq C_{0}d_{o}(\xi,\eta)^{\varepsilon_{0}}r_{n}^{-\varepsilon_ {0}}.\]
_Then there exists \(A\geq 1\) that only depends on \(\varepsilon_{0}\) and \(C_{0}\) such that, for all \(\eta\in\Lambda_{\Gamma}\):_
\[A^{-1}R(\eta)\leq P_{n+1}R(\eta)\leq AR(\eta).\]
Proof.: Let \(\eta\in\Lambda_{\Gamma}\). We have:
\[P_{n+1}R(\eta)=\sum_{\gamma\in S_{n+1}}R(\eta_{\gamma})r_{\gamma}^{F}f_{\gamma }(\eta)=\sum_{\begin{subarray}{c}\gamma\in S_{n+1}\\ \eta\in B_{\gamma}\end{subarray}}R(\eta_{\gamma})r_{\gamma}^{F}f_{\gamma}( \eta)+\sum_{\begin{subarray}{c}\gamma\in S_{n+1}\\ \eta\notin B_{\gamma}\end{subarray}}R(\eta_{\gamma})r_{\gamma}^{F}f_{\gamma}( \eta).\]
The first sum is easily controlled: if \(\eta\in B_{\gamma}\), then \(R(\eta_{\gamma})\simeq R(\eta)\) and \(r_{\gamma}^{F}f_{\gamma}(\eta)\simeq 1\). Since \(\eta\) is in a (positive and) bounded number of \(B_{\gamma}\), we find
\[C^{-1}R(\eta)\leq\sum_{\begin{subarray}{c}\gamma\in S_{n+1}\\ \eta\in B_{\gamma}\end{subarray}}R(\eta_{\gamma})r_{\gamma}^{F}f_{\gamma}( \eta)\leq CR(\eta),\]
which gives the lower bound since \(R\), \(r_{\gamma}^{F}\) and \(f_{\gamma}\) are positive. To conclude, we need to get an upper bound on the residual term. Using \(\text{diam}(B_{\gamma})\lesssim r_{n}\), the symmetry lemma on \(f_{\gamma}(\eta)\), the regularity and mild variations of \(R\), and using the shadow lemma \(r_{\gamma}^{F}\simeq\mu_{o}(B_{\gamma})\), we get:
\[\sum_{\begin{subarray}{c}\gamma\in S_{n+1}\\ \eta\notin B_{\gamma}\end{subarray}}R(\eta_{\gamma})r_{\gamma}^{F}f_{\gamma}( \eta)\lesssim R(\eta)r_{n}^{-\varepsilon_{0}}\sum_{\begin{subarray}{c}\gamma\in S_{n+1}\\ \eta\notin B_{\gamma}\end{subarray}}r_{ \gamma}^{F}d_{o}(\eta_{\gamma},\eta)^{\varepsilon_{0}}f_{\tilde{\gamma}_{\eta}}( \eta_{\gamma})\]
\[\lesssim R(\eta)\left(1+r_{n}^{-\varepsilon_{0}}\int_{B(\eta,r_{n+1})^{c}}d_{o}(\xi,\eta)^{\varepsilon_{0}}\ f_{\tilde{\gamma}_{\eta}}(\xi)d\mu_{o}(\xi)\right).\]
Finally, the second contraction lemma and the bound \(d_{o}(\eta,\eta_{\tilde{\gamma}_{\eta}})\lesssim r_{n}\) yield:
\[\int_{B(\eta,r_{n+1})^{c}}d_{o}(\xi,\eta)^{\varepsilon_{0}}\ f_{ \tilde{\gamma}_{\eta}}(\xi)d\mu_{o}(\xi)\leq\int_{\Lambda_{\Gamma}}d_{o}(\xi, \eta)^{\varepsilon_{0}}\ f_{\tilde{\gamma}_{\eta}}(\xi)d\mu_{o}(\xi)\]
\[=\int_{\Lambda_{\Gamma}}d_{o}(\xi,\eta)^{\varepsilon_{0}}d\mu_{\tilde{\gamma} _{\eta}o}(\xi)=\int_{\Lambda_{\Gamma}}d_{o}(\tilde{\gamma}_{\eta}(\xi),\eta)^ {\varepsilon_{0}}d\mu_{o}(\xi)\]
\[\lesssim r_{n}^{\varepsilon_{0}}+\int_{\Lambda_{\Gamma}}d_{o}(\tilde{\gamma} _{\eta}(\xi),\eta_{\tilde{\gamma}_{\eta}})^{\varepsilon_{0}}d\mu_{o}(\xi) \lesssim r_{n}^{\varepsilon_{0}},\]
which concludes the proof.
### The construction of \(\nu\)
In this last subsection, we construct the measure \(\nu\) and conclude that \((\Gamma,F)\) Patterson-Sullivan densities are stationary measures for a random walk on \(\Gamma\) with exponential moment. We follow the end of [10] very closely, but we recall the last arguments for the reader's convenience.
Recall that the large constant \(C_{\Gamma}\geq 1\) was fixed just before Lemma 2.16, and that \(r_{n}:=e^{-4C_{\Gamma}n}\). Recall also that \(\alpha>0\) is fixed by Proposition 2.8. We fix \(\beta\in(0,1)\) small enough so that \(1-\beta\geq e^{-4C_{\Gamma}\alpha}+\beta\), and we choose \(\varepsilon_{0}\) so that \(r_{n}^{\varepsilon_{0}}=(1-\beta)^{n}\). By taking \(\beta\) even smaller, we can suppose that \(\varepsilon_{0}<\delta_{reg}\). For this choice of \(\varepsilon_{0}\), and for \(C_{0}(\varepsilon_{0}):=2(1-\beta)^{-2}e^{4C_{\Gamma}\varepsilon_{0}}\), the approximation lemma gives us a constant \(A>1\) such that, under the hypothesis of Lemma 3.9:
\[\frac{\beta}{A^{2}}R\leq\frac{\beta}{A}P_{n+1}R\leq\beta R.\]
We then use \(P_{n}\) to successively take away some parts of \(R\). Define, by induction, \(R_{0}:=1\) and
\[R_{n+1}:=R_{n}-\frac{\beta}{A}P_{n+1}R_{n}\leq R_{n}.\]
For the process to work as intended, we need to check that \(R_{n}\) satisfies the hypothesis of the approximation lemma.
**Lemma 3.10**.: _Let \(n\in\mathbb{N}\). The function \(R_{n}\) is positive on \(\Lambda_{\Gamma}\), and for any \(\xi,\eta\in\Lambda_{\Gamma}\):_
1. _If_ \(d_{o}(\xi,\eta)\leq r_{n+1}\) _then_ \[\left|\frac{R_{n}(\xi)}{R_{n}(\eta)}-1\right|\leq\frac{1}{2}d_{o}(\xi,\eta)^{ \alpha}r_{n+1}^{-\alpha}\]
2. _If_ \(d_{o}(\xi,\eta)>r_{n+1}\)_, then_ \[R_{n}(\xi)/R_{n}(\eta)\leq C_{0}(\varepsilon_{0})\cdot d_{o}(\xi,\eta)^{ \varepsilon_{0}}(1-\beta)^{-n}.\]
Proof.: The proof goes by induction on \(n\). The case \(n=0\) is easy: the first point holds trivially and the second holds since \(C_{0}(\varepsilon_{0})r_{1}^{\varepsilon_{0}}=C_{0}(\varepsilon_{0})e^{-4C_{ \Gamma}\varepsilon_{0}}\geq 1\). Now, suppose that the result holds for some \(n\). In this case, the approximation lemma yields
\[R_{n+1}=R_{n}-\frac{\beta}{A}P_{n+1}R_{n}\geq(1-\beta)R_{n},\]
and in particular \(R_{n+1}\) is positive. Let us prove the first point: consider \(\xi,\eta\in\Lambda_{\Gamma}\) such that \(d_{o}(\xi,\eta)\leq r_{n+2}\). Then Lemma 3.8 gives
\[\left|\frac{\beta}{A}P_{n+1}R_{n}(\xi)-\frac{\beta}{A}P_{n+1}R_{n}(\eta) \right|\leq\frac{1}{2}\left(\frac{\beta}{A}P_{n+1}R_{n}(\eta)\right)\cdot d_{o} (\xi,\eta)^{\alpha}r_{n+2}^{-\alpha}\leq\frac{1}{2}\beta R_{n}(\eta)\cdot d_{o} (\xi,\eta)^{\alpha}r_{n+2}^{-\alpha}.\]
Hence, using the induction hypothesis:
\[|R_{n+1}(\xi)-R_{n+1}(\eta)|\leq|R_{n}(\xi)-R_{n}(\eta)|+\left|\frac{\beta}{A}P_{ n+1}R_{n}(\xi)-\frac{\beta}{A}P_{n+1}R_{n}(\eta)\right|\]
\[\leq\frac{1}{2}\left(r_{n+1}^{-\alpha}+\beta r_{n+2}^{-\alpha}\right)R_{n}(\eta )d_{o}(\xi,\eta)^{\alpha}\]
\[\leq\frac{1}{2}\frac{e^{-4C_{\Gamma}\alpha}+\beta}{1-\beta}R_{n+1}(\eta)d_{o}( \xi,\eta)^{\alpha}r_{n+2}^{-\alpha}.\]
Recalling the definition of \(\beta\) gives the desired bound. It remains to prove the second point. First of all, notice that, for any \(\xi\) and \(\eta\), we have:
\[\frac{R_{n+1}(\xi)}{R_{n+1}(\eta)}\leq(1-\beta)^{-1}\frac{R_{n}(\xi)}{R_{n}( \eta)}.\]
Now, suppose that \(d_{o}(\xi,\eta)\in(r_{n+2},r_{n+1}]\). The induction hypothesis gives \(R_{n}(\xi)/R_{n}(\eta)\leq 1+|R_{n}(\xi)/R_{n}(\eta)-1|\leq 2\), and so:
\[\frac{R_{n+1}(\xi)}{R_{n+1}(\eta)}\leq\frac{2}{1-\beta}=\frac{2}{1-\beta}r_{n +2}^{\varepsilon_{0}}(1-\beta)^{-(n+2)}\leq\frac{2}{(1-\beta)^{2}}\cdot d_{o} (\xi,\eta)^{\varepsilon_{0}}(1-\beta)^{-(n+1)},\]
which proves the bound. Finally, suppose that \(d_{o}(\xi,\eta)>r_{n+1}\). In this case, the induction hypothesis directly yields
\[\frac{R_{n+1}(\xi)}{R_{n+1}(\eta)}\leq\frac{1}{1-\beta}\frac{R_{n}(\xi)}{R_{n }(\eta)}\leq C_{0}(\varepsilon_{0})d_{o}(\xi,\eta)^{\varepsilon_{0}}(1-\beta) ^{-(n+1)},\]
and the proof is done.
We are ready to prove Theorem 3.2, following Li in [10].
Proof.: The previous lemma ensures that for all \(n\), the function \(R_{n}\) satisfies the hypothesis of the approximation lemma. Hence, we can write for all \(n\)
\[R_{n+1}=R_{n}-\frac{\beta}{A}P_{n+1}R_{n}\leq\left(1-\frac{\beta}{A^{2}} \right)R_{n},\]
so that by induction:
\[R_{n}\leq\left(1-\frac{\beta}{A^{2}}\right)^{n}\longrightarrow 0.\]
It follows that
\[1=R_{0}-\lim_{n}R_{n}=\sum_{n=1}^{\infty}(R_{n-1}-R_{n})=\frac{\beta}{A}\sum_ {n=1}^{\infty}P_{n}(R_{n-1}),\]
in other words:
\[1=\sum_{n=1}^{\infty}\sum_{\gamma\in S_{n}}\frac{\beta}{A}R_{n-1}(\eta_{ \gamma})r_{\gamma}^{F}\cdot f_{\gamma}.\]
Letting
\[\nu(\gamma):=\frac{\beta}{A}R_{n-1}(\eta_{\gamma})r_{\gamma}^{F}\text{ if } \gamma\in S_{n},\quad\nu(\gamma):=0\text{ if }\gamma\notin\bigcup_{k}S_{k}\]
gives us a probability measure on \(\Gamma\) (since \(\int f_{\gamma}d\mu_{o}=1\)) satisfying \(\nu*\mu_{o}=\mu_{o}\), by the remarks made in Section 3.1. Checking that the measure has exponential moment is easy since \(\|\gamma\|\lesssim r_{\gamma}^{-1}\) by Remark 2.4. Hence, by the shadow lemma \(r_{\gamma}^{F}\simeq\mu_{o}(B_{\gamma})\) and since the \(B_{\gamma}\), \(\gamma\in S_{n}\), cover each point a bounded number of times, we get:
\[\int_{\Gamma}\|\gamma\|^{\varepsilon}d\nu\lesssim\sum_{n}\sum_{\gamma\in S_{n }}\nu(\gamma)r_{\gamma}^{-\varepsilon}\]
\[\lesssim\sum_{n}\Big(\sum_{\gamma\in S_{n}}\mu_{o}(B_{\gamma})\Big)(1-\beta/A^{2})^ {n}e^{4\varepsilon C_{\Gamma}n}<\infty\]
if \(\varepsilon\) is small enough. Finally, we show that the group \(\Gamma_{\nu}\) generated by the support of \(\nu\) is \(\Gamma\). To see this, say that \(C_{\Gamma}\) was chosen so large that \(C_{\Gamma}\geq 6\mathrm{diam}(\mathrm{Hull}(\Lambda_{\Gamma})/\Gamma)\). In this case, there exists \(\gamma_{1}\in S_{1}\) such that \(d(o,\gamma_{1}o)\in[|\ln r_{1}|+C_{\Gamma}/2,|\ln r_{1}|+3C_{\Gamma}/2]\). Then, any \(\gamma\in\Gamma\) such that \(d(o,\gamma o)\leq C_{\Gamma}/2\) satisfies \(\gamma_{1}\gamma\in S_{1}\). In particular:
\[\{\gamma\in\Gamma\,\ d(o,\gamma o)\leq C_{\Gamma}/2\}\subset\Gamma_{\nu},\]
and it is then well known (see for example Lemma A.14 in [10]) that this set generates the whole group \(\Gamma\) as soon as \(C_{\Gamma}/2\) is larger than \(3\) times the diameter of \(\mathrm{Hull}(\Lambda_{\Gamma})/\Gamma\).
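For completeness, here is the verification of the parenthetical claim \(\int f_{\gamma}d\mu_{o}=1\) made above (equivalently, that \(\nu\) has total mass one): since \(f_{\gamma}d\mu_{o}=d\mu_{\gamma o}=d(\gamma_{*}\mu_{o})\) and \(\mu_{o}\) is a probability measure,
\[\int_{\Lambda_{\Gamma}}f_{\gamma}\,d\mu_{o}=\|\gamma_{*}\mu_{o}\|=\|\mu_{o}\|=1,\qquad\text{hence}\qquad\sum_{\gamma}\nu(\gamma)=\int_{\Lambda_{\Gamma}}\Big(\sum_{\gamma}\nu(\gamma)f_{\gamma}\Big)d\mu_{o}=\mu_{o}(\Lambda_{\Gamma})=1,\]
using the decomposition of the constant function \(1\) obtained above and the positivity of all the terms.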
## 4 Consequences on equilibrium states
Now that we have proved Theorem 1.2, Corollary 1.3 follows directly from [11] (since, when \(d=2\), being Zariski dense is equivalent to being non-elementary). To see how this statement yields some information about equilibrium states, let us recall more precisely the link between the latter and Patterson-Sullivan densities. First, recall that the Hopf coordinates
\[\mathrm{Hopf}:\left(\left(\partial_{\infty}\mathbb{H}^{2}\times\partial_{ \infty}\mathbb{H}^{2}\right)\setminus\mathcal{D}\right)\times\mathbb{R} \longrightarrow T^{1}\mathbb{H}^{2}\]
allow us to smoothly identify the unit tangent bundle of \(\mathbb{H}^{2}\) with a torus minus the diagonal times \(\mathbb{R}\) by the following process. For any \(v^{+}\neq v^{-}\in\partial_{\infty}\mathbb{H}^{2}\), and for any \(t\in\mathbb{R}\), \(\mathrm{Hopf}(v^{+},v^{-},t):=v\) is the unique vector \(v\in T^{1}\mathbb{H}^{2}\) lying on the geodesic \((v^{-},v^{+})\) such that \(\tilde{p}(\phi_{-t}(v))\) is the closest point to \(o\) on this geodesic. We will denote by \((\partial_{v^{+}},\partial_{v^{-}},\partial_{t})\) the induced basis of \(T(T^{1}\mathbb{H}^{2})\) in these coordinates. Finally, recall that \(\iota\) denotes the flip map.
**Theorem 4.1** ([12], Theorem 6.1).: _Let \(\Gamma\subset Iso^{+}(\mathbb{H}^{d})\) be convex-cocompact, \(M:=\mathbb{H}^{d}/\Gamma\), and \(F:T^{1}M\to\mathbb{R}\) be a normalized and Holder-regular potential. Denote by \(m_{F}\in\mathcal{P}(T^{1}M)\) the associated equilibrium state, and let \(\tilde{m}_{F}\) be its \(\Gamma\)-invariant lift on \(T^{1}\mathbb{H}^{d}\). Denote by \(\mu_{x}^{F}\) the \((\Gamma,F)\) Patterson-Sullivan density with basepoint \(x\). Then, for any choice of \(x\in\mathbb{H}^{d}\), the following identity holds in the Hopf coordinates (up to a multiplicative constant \(c_{0}>0\)):_
\[c_{0}\cdot d\tilde{m}_{F}(v^{+},v^{-},t)=\frac{d\mu_{x}^{F}(v^{+})d\mu_{x}^{F\circ\iota}(v^{-})dt}{D_{F,x}(v^{+},v^{-})^{2}}.\]
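As a sanity check, for the normalized constant potential \(F\equiv-\delta_{\Gamma}\) one has \(F\circ\iota=F\) and \(D_{F,x}(v^{+},v^{-})=d_{x}(v^{+},v^{-})^{\delta_{\Gamma}}\), so the identity above reduces to the classical description of the Bowen-Margulis-Sullivan measure:
\[c_{0}\cdot d\tilde{m}_{F}(v^{+},v^{-},t)=\frac{d\mu_{x}(v^{+})\,d\mu_{x}(v^{-})\,dt}{d_{x}(v^{+},v^{-})^{2\delta_{\Gamma}}},\]
where \(\mu_{x}\) denotes the classical Patterson-Sullivan density of \(\Gamma\).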
We are now ready to prove Fourier decay for \(m_{F}\). To do a clean proof, we write down three lemmas corresponding to Fourier decay in the three directions \((\partial_{v^{+}},\partial_{v^{-}},\partial_{t})\). We will then combine all of them to get the desired result.
**Lemma 4.2**.: _Under the conditions of Theorem 1.2, there exists \(\varepsilon>0\) such that the following hold. Let \(R\geq 1\) and let \(\chi:T^{1}\mathbb{H}^{2}\to\mathbb{R}\) be a Holder map supported on some compact \(K\). There exists \(C\geq 1\) such that for any \(C^{2}\) function \(\varphi:T^{1}\mathbb{H}^{2}\to\mathbb{R}\) satisfying \(\|\varphi\|_{C^{2}}+(\inf_{K}|\partial_{v^{+}}\varphi|)^{-1}\leq R\), we have:_
\[\forall\xi\in\mathbb{R}^{*},\ \left|\int_{T^{1}\mathbb{H}^{2}}e^{i\xi\varphi(v )}\chi(v)d\tilde{m}_{F}(v)\right|\leq\frac{C}{|\xi|^{\varepsilon}}.\]
Proof.: Denote by \(\tilde{\varphi}\) and \(\tilde{\chi}\) the functions \(\varphi,\chi\) seen in the Hopf coordinates. We get, for some large \(a>0\) depending only on the support of \(\chi\):
\[c_{0}\int_{T^{1}\mathbb{H}^{2}}e^{i\xi\varphi(v)}\chi(v)d\tilde{m}_{F}(v)=\int _{-a}^{a}\int_{\Lambda_{\Gamma}}\left(\int_{\Lambda_{\Gamma}}e^{i\xi\tilde{ \varphi}(v^{+},v^{-},t)}\frac{\tilde{\chi}(v^{+},v^{-},t)}{D_{F,o}(v^{+},v^{-} )^{2}}d\mu_{o}^{F}(v^{+})\right)d\mu_{o}^{F\circ\iota}(v^{-})dt.\]
Now, since \(\tilde{\chi}\) is supported in a compact subset of \(\left((\partial_{\infty}\mathbb{H}^{2}\times\partial_{\infty}\mathbb{H}^{2}) \setminus\mathcal{D}\right)\times\mathbb{R}\), and since \(D_{F}\) is uniformly Holder (and doesn't vanish) on a compact subset of \(\Lambda_{\Gamma}\times\Lambda_{\Gamma}\setminus\mathcal{D}\) (see [12], Lemma 3.6 and Proposition 3.5), and finally since \(\partial_{v^{+}}\tilde{\varphi}\neq 0\) on the compact support of \(\tilde{\chi}\), we see that Corollary 1.3 applies to the inner integral. (Notice that we can always extend \(D_{F}\) outside of \(\Lambda_{\Gamma}\times\Lambda_{\Gamma}\setminus\mathcal{D}\) so that it becomes Holder on all \((\partial_{\infty}\mathbb{H}^{2}\times\partial_{\infty}\mathbb{H}^{2}) \setminus\mathcal{D}\), see [11].) This gives the desired bound.
**Lemma 4.3**.: _Under the conditions of Theorem 1.2, there exists \(\varepsilon>0\) such that the following hold. Let \(R\geq 1\) and let \(\chi:T^{1}\mathbb{H}^{2}\to\mathbb{R}\) be a Holder map supported on some compact \(K\). There exists \(C\geq 1\) such that for any \(C^{2}\) function \(\varphi:T^{1}\mathbb{H}^{2}\to\mathbb{R}\) satisfying \(\|\varphi\|_{C^{2}}+(\inf_{K}|\partial_{v^{-}}\varphi|)^{-1}\leq R\), we have:_
\[\forall\xi\in\mathbb{R}^{*},\ \left|\int_{T^{1}\mathbb{H}^{2}}e^{i\xi\varphi(v)} \chi(v)d\tilde{m}_{F}(v)\right|\leq\frac{C}{|\xi|^{\varepsilon}}.\]
Proof.: We need to check that when \(F\) satisfies the regularity assumptions (R), then \(F\circ\iota\) satisfies them too. This is easy, since \(\sup_{\Omega\Gamma}F\circ\iota=\sup_{\Omega\Gamma}F<\delta_{\Gamma,F\circ\iota}\) by Lemma 3.3 in [11]. Moreover, \(F\circ\iota\) is still Holder regular. Hence, one can apply our previous lemma with \(F\) replaced by \(F\circ\iota\), and conclude.
**Lemma 4.4**.: _Under the conditions of Theorem 1.2, let \(R\geq 1\) and let \(\chi:T^{1}\mathbb{H}^{2}\to\mathbb{R}\) be an \(\alpha\)-Holder map supported on some compact \(K\). There exists \(C\geq 1\) such that, for any \(C^{2}\) function \(\varphi:T^{1}\mathbb{H}^{2}\to\mathbb{R}\) satisfying \(\|\varphi\|_{C^{2}}+(\inf_{K}|\partial_{t}\varphi|)^{-1}\leq R\), we have:_
\[\forall\xi\in\mathbb{R}^{*},\ \left|\int_{T^{1}\mathbb{H}^{2}}e^{i\xi\varphi(v )}\chi(v)d\tilde{m}_{F}(v)\right|\leq\frac{C}{|\xi|^{\alpha}}\]
Proof.: The proof is classic. We have, for some compact \(\tilde{K}\subset\left(\partial_{\infty}\mathbb{H}^{2}\times\partial_{\infty} \mathbb{H}^{2}\right)\setminus\mathcal{D}\) and for some large enough \(a>0\) depending only on the support of \(\chi\):
\[c_{0}\int_{T^{1}\mathbb{H}^{2}}e^{i\xi\varphi(v)}\chi(v)d\tilde{m}_{F}(v)= \iint_{\tilde{K}}\left(\int_{-a}^{a}e^{i\xi\tilde{\varphi}(v^{+},v^{-},t)} \tilde{\chi}(v^{+},v^{-},t)dt\right)D_{F,o}(v^{+},v^{-})^{-2}d(\mu_{o}^{F} \otimes\mu_{o}^{F\circ\iota})(v^{+},v^{-}).\]
We then work on the inner integral. When \(\tilde{\chi}\) is \(C^{1}\), we can conclude by an integration by parts. So a way to conclude is to approximate \(\tilde{\chi}\) by a \(C^{1}\) map. Fix some smooth bump function \(\rho:\mathbb{R}\to\mathbb{R}^{+}\) such that \(\rho\) is zero outside \([-2,2]\), one inside \([-1,1]\), increasing on \([-2,-1]\) and decreasing on \([1,2]\). For any \(\varepsilon>0\), set
\[\tilde{\chi}_{\varepsilon}(\cdot,\cdot,t):=\frac{1}{\|\rho\|_{L^{1}}}\int_{\mathbb{R}}\tilde{\chi}( \cdot,\cdot,t-x)\rho(x/\varepsilon)dx/\varepsilon.\]
This function is smooth on the \(t\)-variable. Moreover, if we denote by \(\alpha\) a Holder exponent for \(\chi\), then a direct computation yields:
\[\|\tilde{\chi}_{\varepsilon}-\tilde{\chi}\|_{\infty}\lesssim\varepsilon^{ \alpha},\quad\|\partial_{t}\tilde{\chi}_{\varepsilon}\|_{\infty}\lesssim \varepsilon^{-(1-\alpha)}.\]
Hence:
\[\left|\int_{-a}^{a}e^{i\xi\tilde{\varphi}}\tilde{\chi}dt\right|\leq 2a\|\tilde{ \chi}-\tilde{\chi}_{\varepsilon}\|_{\infty}+\left|\int_{-a}^{a}e^{i\xi\tilde {\varphi}}\tilde{\chi}_{\varepsilon}dt\right|.\]
To control the integral on the right, we do our aforementioned integration by parts:
\[\int_{-a}^{a}e^{i\xi\tilde{\varphi}}\tilde{\chi}_{\varepsilon}dt=\int_{-a}^{ a}\frac{i\xi\partial_{t}\tilde{\varphi}}{i\xi\partial_{t}\tilde{\varphi}}e^{i \xi\tilde{\varphi}}\tilde{\chi}_{\varepsilon}dt\]
\[=\left[\frac{\tilde{\chi}_{\varepsilon}}{i\xi\partial_{t}\tilde{\varphi}}e^{ i\xi\tilde{\varphi}}\right]_{t=-a}^{t=a}-\frac{i}{\xi}\int_{-a}^{a}\partial_{t} \left(\frac{\tilde{\chi}_{\varepsilon}}{\partial_{t}\tilde{\varphi}}\right)e ^{i\xi\tilde{\varphi}}dt,\]
so that
\[\left|\int_{-a}^{a}e^{i\xi\tilde{\varphi}}\tilde{\chi}_{\varepsilon}dt\right| \lesssim|\xi|^{-1}\varepsilon^{-(1-\alpha)}.\]
Finally, choosing \(\varepsilon=1/|\xi|\) yields
\[\left|\int_{T^{1}\mathbb{H}^{2}}e^{i\xi\varphi(v)}\chi(v)d\tilde{m}_{F}(v) \right|\lesssim\varepsilon^{\alpha}+\varepsilon^{-(1-\alpha)}|\xi|^{-1} \lesssim|\xi|^{-\alpha},\]
which is the desired bound.
**Theorem 4.5**.: _Under the conditions of Theorem 1.2, there exists \(\varepsilon>0\) such that the following holds. Let \(R\geq 1\) and let \(\chi:T^{1}M\to\mathbb{R}\) be a Holder map supported on some compact \(K\). There exists \(C\geq 1\) such that, for any \(C^{2}\) function \(\varphi:T^{1}M\to\mathbb{R}\) satisfying \(\|\varphi\|_{C^{2}}+(\inf_{K}\|d\varphi\|)^{-1}\leq R\), we have:_
\[\forall\xi\in\mathbb{R}^{*},\ \left|\int_{T^{1}M}e^{i\xi\varphi}\chi dm_{F} \right|\leq\frac{C}{|\xi|^{\varepsilon}}\]
Proof.: First of all, choose \(\tilde{\chi}:T^{1}\mathbb{H}^{2}\to\mathbb{R}\) a lift of \(\chi\) supported on a fundamental domain of \(\Gamma\). Denote by \(\tilde{K}\subset T^{1}\mathbb{H}^{2}\) its (compact) support. Lift \(\varphi\) to a \(\Gamma\)-invariant map \(\tilde{\varphi}:T^{1}\mathbb{H}^{2}\to\mathbb{R}\). We then have:
\[\int_{T^{1}M}e^{i\xi\varphi}\chi dm_{F}=\int_{T^{1}\mathbb{H}^{2}}e^{i\xi \tilde{\varphi}}\tilde{\chi}d\tilde{m}_{F}.\]
Now, consider the map \(\mathcal{B}_{\varphi}:T^{1}\mathbb{H}^{2}\to\mathbb{R}^{3}\) defined by \(\mathcal{B}_{\varphi}(v):=\left((d\tilde{\varphi})_{v}(\partial_{v^{+}}),(d\tilde{\varphi})_{v}(\partial_{v^{-}}),(d\tilde{\varphi})_{v}(\partial_{t})\right)\). Since \(((\partial_{v^{+}})_{v},(\partial_{v^{-}})_{v},(\partial_{t})_{v})\) is a basis of \(T_{v}(T^{1}\mathbb{H}^{2})\) for any \(v\in T^{1}\mathbb{H}^{2}\), and since \(d\tilde{\varphi}\) doesn't vanish on \(\tilde{K}\), we see that \(\mathcal{B}_{\varphi}(\tilde{K})\subset\mathbb{R}^{3}\setminus\{0\}\). By uniform continuity of \(\mathcal{B}_{\varphi}\) on the compact \(\tilde{K}\), it follows that there exists \(c_{0}>0\) such that we can cover \(\tilde{K}\) by a finite union of compact balls \((B_{j})_{j\in J}\) satisfying:
\[\forall j\in J,\ \exists e\in\{v^{+},v^{-},t\},\ \forall v\in B_{j},\ | \partial_{e}\tilde{\varphi}(v)|>c_{0}.\]
To conclude, we consider a partition of unity \((\widehat{\chi}_{j})_{j}\) adapted to the cover \((B_{j})_{j}\), and we write:
\[\int_{T^{1}\mathbb{H}^{2}}e^{i\xi\tilde{\varphi}}\tilde{\chi}d\tilde{m}_{F}= \sum_{j\in J}\int_{B_{j}}e^{i\xi\tilde{\varphi}}\tilde{\chi}\widehat{\chi}_{j }d\tilde{m}_{F}.\]
Each of the inner integrals is then controlled by either Lemma 4.2, Lemma 4.3 or Lemma 4.4.
**Remark 4.6**.: We recover our main Theorem 1.4 as a particular case of Theorem 4.5. Indeed, if \(\varphi:K\subset T^{1}M\to\mathbb{R}^{3}\) is a \(C^{2}\) local chart, then for any \(\zeta\in\mathbb{R}^{3}\setminus\{0\}\), one may write:
\[\left|\int_{T^{1}M}e^{i\zeta\cdot\varphi(v)}\chi(v)dm_{F}(v)\right|=\left|\int_{T^{1}M}e^{i|\zeta|(\zeta/|\zeta|)\cdot\varphi(v)}\chi(v)dm_{F}(v)\right|\leq C|\zeta|^{-\varepsilon},\]
since the map \((u,v)\in\mathbb{S}^{2}\times K\mapsto u\cdot(d\varphi)_{v}\in T_{v}^{*}(T^{1}M)\) doesn't vanish (because the range of \((d\varphi)_{v}\) isn't contained in a plane). Notice that we used the uniformity of the constants \(C\geq 1\) given by the phases \(u\cdot(d\varphi)\).
## Appendix A On the Fourier dimension
### The upper and lower Fourier dimension
We naturally want to make sense of the Fourier dimension of the non-wandering set of the geodesic flow, so that we can write a sentence of the form: "\(\dim_{F}\operatorname{NW}(\phi)>0\)". But since \(NW(\phi)\) is a subset of an abstract manifold, the usual definition doesn't apply. In this appendix, we suggest some definitions that one could choose to talk about the Fourier dimension of a compact set lying in an abstract manifold.
First of all, recall that the Fourier dimension of a probability measure \(\mu\in\mathcal{P}(E)\), supported on some compact set \(E\subset\mathbb{R}^{d}\), can be defined as:
\[\dim_{F}(\mu):=\sup\{\alpha\geq 0\ |\ \exists C\geq 1,\forall\xi\in\mathbb{R}^ {d}\setminus\{0\},\ |\widehat{\mu}(\xi)|\leq C|\xi|^{-\alpha/2}\},\]
where the Fourier transform of \(\mu\) is defined by
\[\widehat{\mu}(\xi):=\int_{E}e^{-2i\pi\xi\cdot x}d\mu(x).\]
The Fourier dimension of a compact set \(E\subset\mathbb{R}^{d}\) is then defined as
\[\dim_{F}(E):=\sup\{\min(d,\dim_{F}\mu),\ \mu\in\mathcal{P}(E)\}\leq\dim_{H}E.\]
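Two classical examples may help calibrate this definition: the middle-thirds Cantor set \(C_{1/3}\subset\mathbb{R}\) and the sphere \(\mathbb{S}^{d-1}\subset\mathbb{R}^{d}\) satisfy
\[\dim_{F}(C_{1/3})=0<\frac{\ln 2}{\ln 3}=\dim_{H}(C_{1/3}),\qquad\dim_{F}(\mathbb{S}^{d-1})=d-1=\dim_{H}(\mathbb{S}^{d-1}),\]
so the inequality above can be strict, and it is an equality for sets carrying measures with optimal Fourier decay (such as the surface measure on the sphere, whose decay is recalled in Example A.6 below).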
To define the Fourier dimension of a measure lying in an abstract manifold, a natural idea is to look at our measure in local charts. But this supposes that we have a meaningful way to "localize" the usual definition of the Fourier dimension. This is the content of the next well-known lemma.
**Lemma A.1**.: _Let \(E\subset\mathbb{R}^{d}\) be a compact set. Let \(\mu\in\mathcal{P}(E)\). Let \(\varepsilon>0\). Denote by \(\text{Bump}(\varepsilon)\) the set of smooth functions with support of diameter at most \(\varepsilon\). Then:_
\[\dim_{F}\mu=\inf\{\dim_{F}(\chi d\mu)\ |\ \chi\in\text{Bump}(\varepsilon)\}.\]
Proof.: Let \(E\subset\mathbb{R}^{d}\) be a fixed compact set, and let \(\mu\in\mathcal{P}(E)\) be a fixed (borel) probability measure supported on \(E\). First of all, consider a finite covering of the compact set \(E\) by balls \((B_{i})_{i\in I}\) of radius \(\varepsilon\). Consider an associated partition of unity \((\chi_{i})_{i\in I}\). Then, for all \(\alpha<\inf_{\chi}\dim_{F}(\chi d\mu)\), there exists \(C\geq 1\) such that:
\[|\widehat{\mu}(\xi)|=\left|\sum_{i\in I}\widehat{\chi_{i}d\mu}(\xi)\right|\leq C |\xi|^{-\alpha/2}.\]
Hence \(\dim_{F}\mu\geq\alpha\). Since this hold for any \(\alpha<\inf\{\dim_{F}(\chi d\mu)\ |\ \chi\in\text{Bump}(\varepsilon)\}\), this yields \(\dim_{F}\mu\geq\inf\{\dim_{F}(\chi d\mu)\ |\ \chi\in\text{Bump}( \varepsilon)\}.\) Now we prove the other inequality.
Fix some smooth function with compact support \(\chi\). Its Fourier transform \(\widehat{\chi}\) is in the Schwartz class: in particular, for all \(N\geq d+1\), there exists \(C_{N}\) such that \(|\widehat{\chi}(\eta)|\leq C_{N}|\eta|^{-N}\) for all \(\eta\in\mathbb{R}^{d}\setminus\{0\}\). Let \(\alpha<\alpha^{\prime}<\dim_{F}\mu\). Then there exists \(C\geq 1\) such that \(|\widehat{\mu}(\xi)|\leq C|\xi|^{-\alpha^{\prime}/2}\) for all \(\xi\in\mathbb{R}^{d}\setminus\{0\}\). Now, notice that:
\[\widehat{\chi d\mu}(\xi)=\widehat{\chi}*\widehat{\mu}(\xi)=\int_{\mathbb{R}^{ d}}\widehat{\chi}(\eta)\widehat{\mu}(\xi-\eta)d\eta.\]
We cut the integral in two parts, depending on some radius \(r>0\) that we choose to be \(r:=|\xi|^{1-\varepsilon}\), where \(\varepsilon:=1-\alpha/\alpha^{\prime}\). We suppose that \(|\xi|\geq 2\). In this case, a direct computation shows that whenever \(\eta\in B(0,r)\), we have \(|\xi|^{1-\varepsilon}\leq C|\xi-\eta|\). We are finally ready to conclude our computation:
\[\left|\widehat{\chi d\mu}(\xi)\right| \leq\left|\int_{B(0,r)}\widehat{\chi}(\eta)\widehat{\mu}(\xi-\eta )d\eta\right|+\left|\int_{B(0,r)^{c}}\widehat{\chi}(\eta)\widehat{ \mu}(\xi-\eta)d\eta\right|\] \[\lesssim_{N}\int_{\mathbb{R}^{d}}|\widehat{\chi}(\eta)|d\eta \cdot\frac{C}{|\xi|^{(1-\varepsilon)\alpha^{\prime}/2}}+\int_{B(0,r)^{c}}\frac{1}{|\eta|^{N}}d\eta\] \[\lesssim_{N}\frac{1}{|\xi|^{\alpha/2}}+r^{d-N}\int_{B(0,1)^{c}}\frac{1}{|\zeta|^{N}}d\zeta\lesssim\frac{1}{|\xi|^{\alpha/2}}\]
if \(N\) is chosen large enough. It follows that \(\dim_{F}(\chi d\mu)\geq\alpha\), and this holds for any \(\alpha<\dim_{F}\mu\), so \(\dim_{F}(\chi d\mu)\geq\dim_{F}(\mu)\). Taking the infimum in \(\chi\) yields the desired inequality.
Now we understand how the Fourier dimension of a measure \(\mu\) can be computed by looking at the local behavior of \(\mu\). But another, much harder problem arises now: the Fourier dimension of a measure depends very much on the embedding of this measure in the ambient space. In concrete terms, the Fourier dimension is not going to be independent of the choice of local charts. A way to introduce an "intrinsic" quantity related to the Fourier dimension of a measure would be to take the supremum or the infimum over all those charts. We directly give our definition in the context of a manifold.
**Definition A.2**.: Let \(M\) be a smooth manifold of dimension \(d\). Let \(E\subset M\) be a compact set. Let \(\mu\in\mathcal{P}(E)\). Let \(k\in\mathbb{N}^{*}\). Let \(\text{Bump}(E)\) denote the set of all smooth functions \(\chi:M\to\mathbb{R}\) such that \(\text{supp}(\chi)\) is contained in a local chart. We denote by \(\text{Chart}(\chi,C^{k})\) the set of all \(C^{k}\) local charts \(\varphi:U\to\mathbb{R}^{d}\), where \(U\supset\text{supp}(\chi)\) is an open set containing the support of \(\chi\). Now, define the lower Fourier dimension of \(\mu\) by \(C^{k}\) charts of \(M\) by:
\[\underline{\dim}_{F,C^{k}}(\mu):=\inf_{\chi\in\text{Bump}(E)}\inf\{\dim_{F}( \varphi_{*}(\chi d\mu)),\ \varphi\in\text{Chart}(\chi,C^{k})\}.\]
Similarly, define the upper Fourier dimension of \(\mu\) by \(C^{k}\) charts of \(M\) by:
\[\overline{\dim}_{F,C^{k}}(\mu):=\inf_{\chi\in\text{Bump}(E)}\sup\{\dim_{F}( \varphi_{*}(\chi d\mu)),\ \varphi\in\text{Chart}(\chi,C^{k})\}.\]
**Definition A.3**.: Let \(M\) be a smooth manifold of dimension \(d\). Let \(E\subset M\) be a compact set. Let \(\mu\in\mathcal{P}(E)\). We define the lower Fourier dimension of \(\mu\) by:
\[\underline{\dim}_{F}(\mu)=\underline{\dim}_{F,C^{\infty}}(\mu).\]
**Remark A.4**.: The lower Fourier dimension tests whether, for any localization \(\chi d\mu\) of \(\mu\) and for any smooth local chart \(\varphi\), one has some decay of the Fourier transform of \(\varphi_{*}(\chi d\mu)\). We then take the infimum of all the best decay exponents. This quantity is \(C^{\infty}\)-intrinsic in the following sense: if \(\Phi:M\to M\) is a \(C^{\infty}\)-diffeomorphism, then \(\underline{\dim}_{F}(\Phi_{*}\mu)=\underline{\dim}_{F}(\mu)\). Symmetrically, the \(C^{k}\)-upper Fourier dimension tests whether, for any localization \(\chi d\mu\) of \(\mu\), there exists a \(C^{k}\)-chart \(\varphi\) such that one has some decay for the Fourier transform of \(\varphi_{*}(\chi d\mu)\). This quantity is also \(C^{\infty}\)-intrinsic. Still, beware that the upper and lower Fourier dimensions depend on the dimension of the ambient manifold.
**Remark A.5**.: Let \(E\subset M\) be a compact set lying in a manifold \(M\) of dimension \(d\). Fix a bump function \(\chi\) and a local chart \(\varphi\in\operatorname{Chart}(\chi,C^{k})\). For \(\mu\in\mathcal{P}(E)\) a measure supported in \(E\subset M\), we have the following bounds:
\[0\leq\underline{\dim}_{F,C^{k}}\mu\leq\dim_{F}\varphi_{*}(\chi d\mu)\leq \overline{\dim}_{F,C^{k}}\mu.\]
Moreover, if \(\dim_{H}E<d\), then:
\[\overline{\dim}_{F,C^{k}}\mu\leq\dim_{H}E.\]
**Example A.6**.: Let \(M\) be a manifold of dimension \(d\), and consider any smooth hypersurface \(N\subset M\). Let \(k\geq 1\). Let \(\mu\) be any smooth and compactly supported measure on \(N\). Then:
\[\underline{\dim}_{F,C^{k}}(\mu)=0,\quad\overline{\dim}_{F,C^{k}}(\mu)=d-1.\]
The first fact is easily proved by noticing that, locally, \(N\) is diffeomorphic to a linear subspace of \(\mathbb{R}^{d}\), which has zero Fourier dimension. The second fact is proved by noticing that, locally, \(N\) is diffeomorphic to a half-sphere, and any smooth measure supported on the half-sphere exhibits power decay of its Fourier transform with exponent \((d-1)/2\).
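The decay exponent \((d-1)/2\) quoted here is the classical stationary-phase estimate for sphere measures: if \(\sigma\) is a smooth compactly supported density times the surface measure of the unit sphere of \(\mathbb{R}^{d}\), then
\[|\widehat{\sigma}(\xi)|\leq C(1+|\xi|)^{-\frac{d-1}{2}},\]
which, with the convention \(\dim_{F}(\mu)=\sup\{\alpha\ |\ \exists C,\ |\widehat{\mu}(\xi)|\leq C|\xi|^{-\alpha/2}\}\) used above, gives a Fourier dimension of at least \(d-1\) for such measures.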
**Remark A.7**.: It seems that, for some well-behaved measures \(\mu\in\mathcal{P}(E)\) supported on compact sets \(E\) with \(\dim_{H}E<d\), one might expect the quantity \(\overline{\dim}_{F,C^{k}}\mu\) to be comparable to \(\dim_{H}E\). For some measures lying in a \(1\)-dimensional curve, this is the content of Theorem 2 in [1].
**Remark A.8**.: Using this language, the results of [1], [19], [20], [18] and [15] all imply positivity of the lower Fourier dimension by \(C^{2}\) charts of some measures (respectively: Patterson-Sullivan measures, Patterson-Sullivan measures, equilibrium states, equilibrium states, and stationary measures). This is a bit stronger than a related notion found in [19], namely the "\(C^{2}\)-stable positivity of the Fourier dimension". The results in our paper imply the following: under the conditions of Theorem 1.2, the equilibrium state \(m_{F}\in\mathcal{P}(NW(\phi))\) satisfies
\[\underline{\dim}_{F,C^{2}}(m_{F})>0,\]
where the non-wandering set \(NW(\phi)\) of the geodesic flow is seen in the unit tangent bundle \(T^{1}M\). In particular, its lower Fourier dimension is positive.
### A variation with real valued phases
For completeness, we suggest two variations for intrinsic notions of Fourier dimension for a measure in an abstract manifold. The first is discussed in this subsection, and the second in the following one. Inspired by the computations made in section 4, we may want to look at more general oscillatory integrals involving \(\mu\). A possibility is the following.
**Definition A.9**.: Let \(M\) be a smooth manifold of dimension \(d\). Let \(E\subset M\) be a compact set. Let \(\mu\in\mathcal{P}(E)\). Let \(k\in\mathbb{N}^{*}\). Let \(\operatorname{Bump}(E)\) denote the set of all smooth functions \(\chi:M\to\mathbb{R}\) such that \(\operatorname{supp}(\chi)\) is contained in a local chart. We denote by \(\operatorname{Phase}(\chi,C^{k})\) the set of all real valued and \(C^{k}\) maps \(\psi:U\to\mathbb{R}\) with nonvanishing differential, where \(U\supset\operatorname{supp}(\chi)\) is an open set containing the support of \(\chi\). Now, define the lower Fourier dimension of \(\mu\) by \(C^{k}\) phases of \(M\) by:
\[\underline{\dim}_{F,C^{k}}^{\operatorname{real}}(\mu):=\inf_{\chi\in \operatorname{Bump}(E)}\inf\{\dim_{F}(\psi_{*}(\chi d\mu)),\ \psi\in\operatorname{Phase}(\chi,C^{k})\}.\]
Similarly, define the upper Fourier dimension of \(\mu\) by \(C^{k}\) phases of \(M\) by:
\[\overline{\dim}_{F,C^{k}}^{\operatorname{real}}(\mu):=\inf_{\chi\in \operatorname{Bump}(E)}\sup\{\dim_{F}(\psi_{*}(\chi d\mu)),\ \psi\in\operatorname{Phase}(\chi,C^{k})\}.\]
As before, we also denote \(\underline{\dim}_{F}^{\operatorname{real}}(\mu):=\underline{\dim}_{F,C^{\infty }}^{\operatorname{real}}(\mu)\).
**Remark A.10**.: First of all, notice that \(\psi_{*}(\chi d\mu)\) is a measure supported in \(\mathbb{R}\), so its Fourier transform is a function from \(\mathbb{R}\) to \(\mathbb{C}\). More precisely:
\[\forall t\in\mathbb{R},\ \widehat{\psi_{*}(\chi d\mu)}(t):=\int_{E}e^{it\psi(x)} \chi(x)d\mu(x).\]
Like before, the lower/upper Fourier dimensions with real phases are \(C^{\infty}\)-intrinsic in the sense that for any \(C^{\infty}\)-diffeomorphism \(\Phi:M\to M\), we have \(\underline{\dim}_{F,C^{k}}^{\mathrm{real}}(\Phi_{*}\mu)=\underline{\dim}_{F,C ^{k}}^{\mathrm{real}}(\mu)\) and \(\overline{\dim}_{F,C^{k}}^{\mathrm{real}}(\Phi_{*}\mu)=\overline{\dim}_{F,C ^{k}}^{\mathrm{real}}(\mu)\).
**Example A.11**.: Let \(M\) be a smooth manifold, and let \(N\) be a smooth submanifold of \(M\). Let \(\mu\) be a smooth and compactly supported probability measure in \(N\). Then:
\[\underline{\dim}_{F,C^{k}}^{\mathrm{real}}(\mu)=0,\quad\overline{\dim}_{F,C^{ k}}^{\mathrm{real}}(\mu)=\infty.\]
These equalities can be proved as follows. Consider some smooth bump function \(\chi\) with small enough support. Now, there exists a phase \(\psi\), defined on a neighborhood \(U\) of \(\mathrm{supp}(\chi)\), with nonvanishing differential on \(U\) but which is constant on \(N\). The associated oscillatory integral \(\widehat{\psi_{*}(\chi d\mu)}\) doesn't decay, hence the computation on the lower Fourier dimension with real phases. There also exists a smooth phase \(\psi\) such that \((d\psi)_{|TN}\) doesn't vanish. By the non-stationary phase lemma, the associated oscillatory integral decays faster than \(t^{-N}\), for any \(N\geq 0\), hence the computation on the upper Fourier dimension with real phases.
Notice how, in particular, \(\min(\overline{\dim}_{F,C^{k}}^{real}(\mu),d)\) may be strictly larger than the Hausdorff dimension of the support of \(\mu\). This may be a sign that this variation of the upper dimension doesn't behave well as a "Fourier dimension".
**Lemma A.12**.: _We can compare this Fourier dimension with the previous one. We have:_
\[\underline{\dim}_{F,C^{k}}(\mu)\leq\underline{\dim}_{F,C^{k}}^{real}(\mu), \quad\overline{\dim}_{F,C^{k}}(\mu)\leq\overline{\dim}_{F,C^{k}}^{real}(\mu).\]
Proof.: Let \(\alpha<\underline{\dim}_{F,C^{k}}(\mu)\). Then, for any bump function \(\chi\) and for any associated local chart \(\varphi\), there exists some constant \(C\) such that, for all \(\xi\in\mathbb{R}^{d}\setminus\{0\}\), we have \(|\widehat{\varphi_{*}(\chi d\mu)}(\xi)|\leq C|\xi|^{-\alpha/2}\). Now fix \(\psi:U\to\mathbb{R}\) with nonvanishing differential, where \(\mathrm{supp}(\chi)\subset U\). By the submersion theorem, there exists a local chart \(\varphi:U\to\mathbb{R}^{d}\) such that \(\varphi(x)=\psi(x)e_{1}+\sum_{j=2}^{d}f_{j}(x)e_{j}\) (where \((e_{i})_{i}\) is the canonical basis of \(\mathbb{R}^{d}\), and where \(f_{j}\) are some real valued functions). Hence, one can write:
\[|\widehat{\psi_{*}(\chi d\mu)}(t)|=|\widehat{\varphi_{*}(\chi d\mu)}(te_{1})|\leq C|t|^{-\alpha/2}.\]
Hence \(\underline{\dim}_{F,C^{k}}^{real}(\mu)\geq\alpha\), and this for any \(\alpha<\underline{\dim}_{F,C^{k}}(\mu)\), hence the desired bound.
The second bound is proved as follows. Let \(\alpha<\overline{\dim}_{F,C^{k}}(\mu)\). Let \(\chi\) be a small bump function. Then there exists a local chart \(\varphi:U\to\mathbb{R}^{d}\), with \(U\supset\mathrm{supp}(\chi)\), such that \(|\widehat{\varphi_{*}(\chi d\mu)}(\xi)|\lesssim|\xi|^{-\alpha/2}\). Let \(u\in\mathbb{S}^{d-1}\) and consider \(\psi(x):=u\cdot\varphi(x)\). It is easy to check that \(\psi\) has nonvanishing differential, and since, for any \(t\in\mathbb{R}\setminus\{0\}\),
\[|\widehat{\psi_{*}(\chi d\mu)}(t)|=|\widehat{\varphi_{*}(\chi d\mu)}(ut)| \lesssim|t|^{-\alpha/2},\]
we get \(\overline{\dim}_{F,C^{k}}^{real}(\mu)\geq\alpha\). The bound follows.
In concrete cases, we expect the lower Fourier dimension and the lower Fourier dimension with real phases to be equal. Unfortunately, our choice of definitions doesn't clearly make that happen all the time. We have to add a very natural assumption for the equality to hold.
**Definition A.13**.: Let \(\mu\in\mathcal{P}(E)\), where \(E\subset M\) is a compact subset of a smooth manifold. We say that \(\mu\) admits reasonable constants for \(C^{k}\)-phases if, for any \(\alpha<\underline{\dim}_{F,C^{k}}^{\mathrm{real}}(\mu)\), and for any \(\chi\in\mathrm{Bump}(E)\), the following holds:
\[\forall R\geq 1,\ \exists C_{R}\geq 1,\ \forall\psi\in\mathrm{Phase}(\chi,C^{k}),\]
\[\left(\|\psi\|_{C^{k}}+\sup_{x\in U}\|(d\psi)_{x}\|^{-1}\leq R\right) \Longrightarrow\left(\forall t\in\mathbb{R}^{*},\ |\widehat{\psi_{*}(\chi d\mu)}(t)|\leq C_{R}|t|^{-\alpha/2}\right).\]
Under this natural assumption, we have equality of the lower Fourier dimensions.
**Lemma A.14**.: _Let \(\mu\in\mathcal{P}(E)\), where \(E\subset M\) is a compact subset of some smooth manifold \(M\). Suppose that \(\mu\) admits reasonable constants for \(C^{k}\)-phases. Then:_
\[\underline{\dim}_{F,C^{k}}(\mu)=\underline{\dim}_{F,C^{k}}^{real}(\mu)\]
Proof.: One inequality is already known, so we only have to prove the other one. The proof of the remaining inequality is the same argument as the one explained in Remark 4.6.
### A directional variation
A second natural and intrinsic idea would be to fix some (spatial) direction on which to look for Fourier decay. We quickly discuss these notions and then we will move on to discuss some notions of Fourier dimensions for sets.
**Definition A.15**.: Let \(E\subset M\) be a compact set in some smooth manifold. Let \(V\subset TM\) be a continuous vector bundle on an open neighborhood \(\tilde{E}\) of \(E\). Denote by \(\operatorname{Bump}^{V}(E)\) the set of all smooth bump functions with support included in \(\tilde{E}\), and included in some local chart. For some \(\chi\in\operatorname{Bump}^{V}(E)\), denote by \(\operatorname{Phase}^{V}(\chi,C^{k})\) the set of all \(C^{k}\) maps \(\psi:U\to\mathbb{R}\) such that \((d\psi)_{|V}\) doesn't vanish on \(U\), where \(\operatorname{supp}(\chi)\subset U\subset\tilde{E}\) is some open set.
For \(\mu\in\mathcal{P}(E)\), we define its lower Fourier dimension in the direction \(V\) for \(C^{k}\) phases by:
\[\underline{\dim}_{F,C^{k}}^{V}(\mu):=\inf_{\chi\in\operatorname{Bump}^{V}(E) }\inf\{\dim_{F}(\psi_{*}(\chi d\mu)),\ \psi\in\operatorname{Phase}^{V}(\chi,C^{k})\}.\]
Similarly, define its upper Fourier dimension in the direction \(V\) for \(C^{k}\) phases by:
\[\overline{\dim}_{F,C^{k}}^{V}(\mu):=\inf_{\chi\in\operatorname{Bump}^{V}(E)} \sup\{\dim_{F}(\psi_{*}(\chi d\mu)),\ \psi\in\operatorname{Phase}^{V}(\chi,C^{k})\}.\]
**Remark A.16**.: Again, these notions of Fourier dimensions are \(C^{\infty}\)-intrinsic, in the following sense: if \(\Phi:M\to M\) is a \(C^{k}\)-diffeomorphism of \(M\), then \(\underline{\dim}_{F,C^{k}}^{\Phi_{*}V}(\Phi_{*}\mu)=\underline{\dim}_{F,C^{k} }^{V}(\mu)\), and \(\overline{\dim}_{F,C^{k}}^{\Phi_{*}V}(\Phi_{*}\mu)=\overline{\dim}_{F,C^{k}}^ {V}(\mu)\).
**Remark A.17**.: With these notations, the results found in [10] imply that, for any "nonlinear" and sufficiently bunched solenoid \(S\), and for any equilibrium state \(\mu\), one has \(\underline{\dim}_{F,C^{1+\alpha}}^{E_{u}}(\mu)>0\), where \(E_{u}\) is the unstable line bundle associated to the dynamics on the solenoid.
**Lemma A.18**.: _Let \(V_{1},\ldots,V_{n}\subset TM\) be some continuous vector bundles defined on some open neighborhood \(\tilde{E}\) of \(E\). Suppose that \((V_{1})_{p}+\ldots+(V_{n})_{p}=T_{p}M\) for all \(p\in\tilde{E}\). Then:_
\[\min_{j}\underline{\dim}_{F,C^{k}}^{V_{j}}(\mu)=\underline{\dim}_{F,C^{k}}^{ real}(\mu),\quad\max_{j}\overline{\dim}_{F,C^{k}}^{V_{j}}(\mu)\leq\overline{ \dim}_{F,C^{k}}^{\text{real}}(\mu).\]
Proof.: Let \(\alpha<\underline{\dim}_{F,C^{k}}^{\text{real}}(\mu)\). Then, for any bump \(\chi\) and associated phase \(\psi\), one has \(\widehat{\psi_{*}(\chi d\mu)}(t)\lesssim|t|^{-\alpha/2}\). In particular, for any phase \(\phi_{j}\in\operatorname{Phase}^{V_{j}}(\chi,C^{k})\), the previous decay holds, and so \(\min_{j}\underline{\dim}_{F,C^{k}}^{V_{j}}(\mu)\geq\alpha\). Hence \(\min_{j}\underline{\dim}_{F,C^{k}}^{V_{j}}(\mu)\geq\underline{\dim}_{F,C^{k}} ^{\text{real}}(\mu)\).
Now let \(\alpha<\min_{j}\underline{\dim}_{F,C^{k}}^{V_{j}}(\mu)\). Then, for all \(j\), for any bump \(\chi\), and for any phase \(\psi_{j}\in\operatorname{Phase}^{V_{j}}(\chi,C^{k})\), the previous decay applies. Now, if we fix some \(\chi\) and some associated phase \(\psi\in\operatorname{Phase}(\chi,C^{k})\), we know that at each point \(p\), \((d\psi)_{p}\) is nonzero. In particular, there exists \(j(p)\) such that \((d\psi)_{|V_{p}^{j(p)}}\neq 0\). Following the proof of Theorem 4.5, we can show by using a partition of unity that this implies \(\widehat{\psi_{*}(\chi d\mu)}(t)\lesssim|t|^{-\alpha/2}\). Hence \(\underline{\dim}_{F,C^{k}}^{\text{real}}(\mu)\geq\alpha\), and we have proved the equality.
For our last bound, let \(\alpha<\max_{j}\overline{\dim}_{F,C^{k}}^{V_{j}}\mu\). Then there exists \(j\) such that, for every bump \(\chi\), there exists an associated phase \(\psi_{j}\in\operatorname{Phase}^{V_{j}}(\chi,C^{k})\) such that \(\widehat{(\psi_{j})_{*}(\chi d\mu)}(t)\lesssim|t|^{-\alpha/2}\). Since \(\psi_{j}\in\operatorname{Phase}(\chi,C^{k})\), we get \(\overline{\dim}_{F,C^{k}}^{\text{real}}\mu\geq\max_{j}\overline{\dim}_{F,C^{k} }^{V_{j}}\mu\).
**Remark A.19**.: The reverse bound for the upper dimensions is not clear: if for all bump functions \(\chi\), there exists a phase \(\psi\) with good Fourier decay properties for \(\mu\), then nothing allows us to think that \(\psi\) is going to have nonvanishing differential in some fixed \(V_{j}\) on all of \(E\).
### What about sets?
We finally define some intrinsic notions of Fourier dimensions for sets. First of all, recall that the usual definition for some \(E\subset\mathbb{R}^{d}\) is:
\[\dim_{F}(E):=\sup\{\dim_{F}(\mu)\leq d\,\ \mu\in\mathcal{P}(E)\}\leq\dim_{H}(E).\]
In particular, in view of the proof of Lemma A.1, we see that any measure \(\mu\) with some Fourier decay properties may be localized anywhere on its support to still yield a measure with large Fourier dimension. Hence we find the following "localized" formula:
\[\dim_{F}(E)=\sup_{\begin{subarray}{c}U\cap E\neq\emptyset\\ U\text{ open}\end{subarray}}\dim_{F}(E\cap U).\]
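As a classical illustration that the inequality \(\dim_{F}(E)\leq\dim_{H}(E)\) can be strict (we recall it only for context), the middle-thirds Cantor set \(C\subset\mathbb{R}\) satisfies

\[\dim_{F}(C)=0<\frac{\log 2}{\log 3}=\dim_{H}(C),\]

since no probability measure supported on \(C\) exhibits any power decay of its Fourier transform; this goes back to the classical theory of sets of uniqueness.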
Now, we have two main ways to define the up(per) and low(er) Fourier dimension of a compact set in a manifold: directly computing the Fourier dimension of parts of \(E\) in local charts, or taking the sup over the previously defined notions of Fourier dimension for measures.
**Definition A.20**.: Let \(E\subset M\) be a compact set included in some smooth manifold. We define its lower/upper Fourier dimension with \(C^{k}\)-charts by:
\[\underline{\dim}_{F,C^{k}}(E):=\sup\{\underline{\dim}_{F,C^{k}}(\mu)\leq d,\ \mu\in\mathcal{P}(E)\},\quad\overline{\dim}_{F,C^{k}}(E):=\sup\{\overline{ \dim}_{F,C^{k}}(\mu)\leq d,\ \mu\in\mathcal{P}(E)\},\]
We also define the \(C^{k}\)-low Fourier dimension and \(C^{k}\)-up Fourier dimensions of \(E\) by:
\[\underline{\widetilde{\dim}}_{F,C^{k}}(E):=\sup_{\begin{subarray}{c}U\cap E\neq\emptyset \\ U\text{ open chart}\end{subarray}}\inf\{\dim_{F}(\varphi(E\cap U)),\ \varphi:U\to\mathbb{R}^{d}\ C^{k}\text{ local chart}\},\]
\[\widetilde{\dim}_{F,C^{k}}(E):=\sup_{\begin{subarray}{c}U\cap E\neq\emptyset \\ U\text{ open chart}\end{subarray}}\sup\{\dim_{F}(\varphi(E\cap U))\,\ \varphi:U\to\mathbb{R}^{d}\ C^{k}\text{ local chart}\}.\]
**Remark A.21**.: The low and up Fourier dimensions are \(C^{k}\)-intrinsic in the natural sense. For example, if \(\Phi:M\to M\) is a \(C^{k}\)-diffeomorphism, then \(\underline{\widetilde{\dim}}_{F,C^{k}}(\Phi(E))=\underline{\widetilde{\dim}}_{F,C^{k}}(E)\). The lower and upper Fourier dimensions are \(C^{\infty}\)-intrinsic.
**Lemma A.22**.: _Let \(E\subset M\) be a compact set in some smooth manifold \(M\). Then:_
\[0\leq\underline{\dim}_{F,C^{k}}(E)\leq\underline{\widetilde{\dim}}_{F,C^{k}}(E)\leq \widetilde{\dim}_{F,C^{k}}(E)=\overline{\dim}_{F,C^{k}}(E)\leq\dim_{H}(E) \leq d.\]
Proof.: Let us prove all the inequalities in order, from left to right. \(0\leq\underline{\dim}_{F,C^{k}}(E)\) is trivial. Let us prove the second one.
Let \(\alpha<\underline{\dim}_{F,C^{k}}(E)\). By definition, there exists some probability measure \(\mu\in\mathcal{P}(E)\) such that \(\underline{\dim}_{F,C^{k}}(\mu)\geq\alpha\). Now, since the support of \(\mu\) is nonempty, there exists \(U\) some small open set and a bump function \(\chi\) supported in \(U\) such that \(\chi d\mu\) is a (localized) nonzero measure. Let \(\varphi:U\to\mathbb{R}^{d}\) be a local chart. Then, by hypothesis on \(\mu\), \(\dim_{F}\varphi_{*}(\chi d\mu)\geq\alpha\). In particular, since (up to normalization) \(\varphi_{*}(\chi d\mu)\in\mathcal{P}(\varphi(E\cap U))\), we have \(\dim_{F}\varphi(E\cap U)\geq\alpha\). This holds for any local chart \(\varphi\), and so \(\inf_{\varphi}\dim_{F}(\varphi(E\cap U))\geq\alpha\). This yields \(\underline{\widetilde{\dim}}_{F,C^{k}}(E)\geq\alpha\). Since this is true for any \(\alpha<\underline{\dim}_{F,C^{k}}(E)\), we get the desired inequality.
The inequality \(\underline{\widetilde{\dim}}_{F,C^{k}}(E)\leq\widetilde{\dim}_{F,C^{k}}(E)\) is trivial. Let us prove the equality between \(\widetilde{\dim}_{F,C^{k}}(E)\) and \(\overline{\dim}_{F,C^{k}}(E)\). Let \(\alpha<\widetilde{\dim}_{F,C^{k}}(E)\). Then, there exists some small open set \(U\) (such that \(U\cap E\neq\emptyset\)) and a local chart \(\varphi:U\to\mathbb{R}^{d}\) such that \(\dim_{F}(\varphi(U\cap E))\geq\alpha\). By definition, it means that there exists some measure \(\nu\in\mathcal{P}(\varphi(E\cap U))\) such that \(\dim_{F}\nu\geq\alpha\). Letting \(\mu:=\varphi_{*}^{-1}\nu\in\mathcal{P}(E\cap U)\) yields a measure supported in \(E\) that satisfies \(\overline{\dim}_{F,C^{k}}(\mu)\geq\alpha\) (in view of the proof of Lemma A.1). Hence, \(\overline{\dim}_{F,C^{k}}(E)\geq\alpha\). This, for any \(\alpha<\widetilde{\dim}_{F,C^{k}}(E)\), so that \(\overline{\dim}_{F,C^{k}}(E)\geq\widetilde{\dim}_{F,C^{k}}(E)\).
Let us prove the other inequality. Let \(\alpha<\overline{\dim}_{F,C^{k}}(E)\). By definition, there exists \(\mu\in\mathcal{P}(E)\) such that \(\overline{\dim}_{F,C^{k}}(\mu)>\alpha\). Now let \(U\) be some small open set with \(\mu_{|U}\neq 0\) and let \(\chi\) be some bump function supported in \(U\). Then, by hypothesis on \(\mu\), there exists a local chart \(\varphi:U\to\mathbb{R}^{d}\) such that \(\dim_{F}\varphi_{*}(\chi d\mu)\geq\alpha\). In particular, \(\dim_{F}(\varphi(E\cap U))\geq\alpha.\) Hence \(\widetilde{\dim}_{F,C^{k}}(E)\geq\alpha\). This proves the other inequality, and hence concludes the proof that \(\widetilde{\dim}_{F,C^{k}}(E)=\overline{\dim}_{F,C^{k}}(E)\).
Finally, the fact that the Hausdorff dimension is invariant under \(C^{1}\)-diffeomorphisms implies
\[\widetilde{\dim}_{F,C^{k}}(E)=\sup_{U}\sup_{\varphi}\dim_{F}(\varphi(U\cap E ))\leq\sup_{U}\sup_{\varphi}\dim_{H}(\varphi(E\cap U))=\sup_{U}\dim_{H}(E\cap U )=\dim_{H}(E)\leq d.\]
**Example A.23**.: Let \(N\subset M\) be a hypersurface in some smooth manifold \(M\). Then:
\[\underline{\dim}_{F,C^{k}}(N)=\underline{\widetilde{\dim}}_{F,C^{k}}(N)=0,\quad \widetilde{\dim}_{F,C^{k}}(N)=\overline{\dim}_{F,C^{k}}(N)=\dim_{H}(N)=d-1.\]
**Example A.24**.: We can finally state the result we were aiming for. Let \(M\) be a convex-cocompact hyperbolic surface. Let \(NW(\phi)\subset T^{1}M\) be the non-wandering set of the geodesic flow \(\phi\), seen as lying in the unit tangent bundle of \(M\). Then:
\[\underline{\dim}_{F,C^{2}}(NW(\phi))>0.\]
**Example A.25**.: Let \(L\) be a 1-dimensional manifold, and let \(E\subset L\) be **any compact subset**. Then:
\[\overline{\dim}_{F,C^{1}}E=\dim_{H}E.\]
This very striking result is proved in [1]. Also, Ekström proves that, for any \(k\geq 1\), we have \(\overline{\dim}_{F,C^{k}}E\geq(\dim_{H}E)/k\). This motivates the following question: do we have, for any compact set \(E\) in any manifold \(M\), the formula \(\overline{\dim}_{F,C^{1}}(E)=\dim_{H}(E)\)?
**Remark A.26**.: Other natural questions are the following. Can we find an example of a set \(E\subset\mathbb{R}^{d}\) such that \(\underline{\dim}_{F,C^{k}}(E)<\underline{\widetilde{\dim}}_{F,C^{k}}(E)\)? Or is it always an equality? Is the lower Fourier dimension \(C^{k}\)-intrinsic?
For completeness, we conclude by introducing the real variation for the lower Fourier dimension. We will not introduce this variation for the upper Fourier dimension, as we said earlier that it seems to behave quite badly with respect to the Hausdorff dimension. To keep it concise, we will not discuss the directional variations.
**Definition A.27**.: Let \(E\subset M\) be a compact subset of some smooth manifold \(M\). Define the lower Fourier dimension with \(C^{k}\)-phases by:
\[\underline{\dim}_{F,C^{k}}^{\mathrm{real}}(E):=\sup\{\underline{\dim}_{F,C^{k}}^{\text{real}} \mu\leq d,\ \mu\in\mathcal{P}(E)\}.\]
**Remark A.28**.: By Lemma A.12, we see that \(\underline{\dim}_{F,C^{k}}(E)\leq\underline{\dim}_{F,C^{k}}^{\text{real}}(E)\). Is this an equality, or are we able to produce an example where this inequality is strict? A related question is: if we denote by \(\mathcal{P}_{reas,C^{k}}(E)\) the set of probability measures that admit reasonable constants for \(C^{k}\)-phases (see Definition A.13), do we have
\[\underline{\dim}_{F,C^{k}}(E)=\sup\{\underline{\dim}_{F,C^{k}}\mu\leq d,\ \mu\in\mathcal{P}_{reas,C^{k}}(E)\}\quad?\]
|
2306.11762 | MultiEarth 2023 Deforestation Challenge -- Team FOREVER | It is an important problem to accurately estimate deforestation from satellite
imagery, since this approach can analyse extensive areas without direct human
access. However, it is not a simple problem because of the difficulty in observing
the clear ground surface under the extensive cloud cover of the long rainy season.
In this paper, we present a multi-view learning strategy to predict
deforestation status in the Amazon rainforest area with the latest deep neural
network models. A multi-modal dataset consisting of three types of satellite
imagery, Sentinel-1, Sentinel-2 and Landsat 8, is utilized to train models and
predict deforestation status. The MMSegmentation framework is selected to apply
comprehensive data augmentation and diverse networks. The proposed method
effectively and accurately predicts the deforestation status of new queries. | Seunghan Park, Dongoo Lee, Yeonju Choi, SungTae Moon | 2023-06-20T09:10:06Z | http://arxiv.org/abs/2306.11762v1 | # MultiEarth 2023 Deforestation Challenge - Team FOREVER
###### Abstract
It is an important problem to accurately estimate deforestation from satellite imagery, since this approach can analyse extensive areas without direct human access. However, it is not a simple problem because of the difficulty in observing the clear ground surface under the extensive cloud cover of the long rainy season. In this paper, we present a multi-view learning strategy to predict deforestation status in the Amazon rainforest area with the latest deep neural network models. A multi-modal dataset consisting of three types of satellite imagery, Sentinel-1, Sentinel-2 and Landsat 8, is utilized to train models and predict deforestation status. The MMSegmentation framework is selected to apply comprehensive data augmentation and diverse networks. The proposed method effectively and accurately predicts the deforestation status of new queries.
## 1 Introduction
The Amazon is the most important and priceless resource on Earth. It plays a key role in reducing the negative consequences of climate change by actively absorbing greenhouse gases and producing oxygen, acting as a crucial regulator. However, the problem of Amazon deforestation is becoming increasingly serious. Unregulated deforestation has negative effects, such as the eradication of ecosystems, loss of biodiversity, soil erosion, and accelerated global warming. A daily average of 2,300 hectares of forest was destroyed in 2020 [1], and as a result, reducing the rate of Amazonian deforestation through the creation of protected zones has gained international attention. Effective forest management, including strengthened forest monitoring, is essential to addressing this urgent issue [2].
Recently, the types of available satellite images have diversified, the spatial resolution of satellites is improving, and deep learning is showing remarkable performance in the image analysis area [7, 9]. In particular, multi-sensor data fusion strategies combining SAR (Synthetic Aperture Radar) and optical sensors such as Landsat have improved the accuracy of forest mapping [12]. MultiEarth 2023 is the second CVPR workshop utilizing multi-satellite imagery to help monitor and analyze the health of these Earth ecosystems [3, 4]. The MultiEarth 2023 Challenge's objective is to combine optical and SAR imagery to carry out continuous assessments of forest monitoring at any time and in all weather. This research extends our work from last year's deforestation challenge [10].
The main contributions of this research are summarized as follows.
* Post-processing with effective cloud removal is proposed to accurately predict deforestation from a multi-modal dataset.
* A time-series analysis method using adjacent-month data is proposed.
The remainder of this paper is organized as follows. Sec. 2 introduces the contents and generation method of the dataset in detail, Sec. 3 describes the proposed methodology for deforestation detection, and Sec. 4 presents the prediction results. The conclusion is presented in Sec. 5.
## 2 Dataset
The area of interest in this study is the area bounded by (latitude (LAT): -3.33\({}^{\circ}\)\(\sim\) -4.39\({}^{\circ}\), longitude (LON): -54.48\({}^{\circ}\)\(\sim\) -55.2\({}^{\circ}\)), which is one of the areas where deforestation occurs very frequently in the Amazon. The region comprises a portion of dense tropical Amazon rainforest in Para, Brazil, containing thousands of species of broad-leaved evergreen trees. Historically, this has been one of the areas with the highest tree loss rate in the Amazon region, with pastures in the region nearly doubling between 1997 and 2007. As shown in Fig. 1 (b), based on data from PRODES (the Brazilian Amazon Rainforest Monitoring Program by Satellite) and DETER (the Real-time Deforestation Detection System), it can be seen that deforestation in the area of interest has expanded before and after 2007 [8].
We used four datasets obtained from Sentinel-1 (Sen1), Sentinel-2 (Sen2), Landsat 5 (Land5), and Landsat 8 (Land8). The four types of satellites are equipped with different sensors, and have different temporal/spatial resolutions and acquisition cycles. Detailed specifications of the utilized datasets are described in Tab. 1.
The final size of all dataset images is 256 x 256, and in the labeling image, the deforestation pixels are set to 1 and the forest or background pixels are set to 0.
## 3 Methodology
The deforestation estimation challenge of MultiEarth is to determine whether a region is deforested or not. To solve the problem, we adopted the Masked-attention Mask Transformer (Mask2Former) [5], a new architecture capable of addressing any image segmentation task, as depicted in Fig. 2. On top of the Mask2Former deep neural network, we focused on data pre-processing and post-processing to improve performance. The overall procedure for the deforestation estimation method is as follows.
### Pre-processing
To use diverse data augmentation libraries, which mostly support only 3-channel images, we select only the RGB bands of the Sen2 and Land8 imagery. To obtain a 3-channel image for Sen1, a mock band filled with zeros is inserted into its VV/VH data. The pre-processing procedure for the training dataset is summarized as follows (a minimal sketch is given after the list).
1. RGB bands are selected for Sen2 and Land8, and VV/VH bands are selected for Sen1.
2. Generate a training input image with the selected bands. A mock band is inserted into Sen1 to make it a 3-channel image. The shape of the processed images is (3, 256, 256) for all satellite types.
3. Remove the lowest 2% and highest 2% of values in each image and normalize to [0, 1].
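The following is a minimal sketch (ours, not the team's released code) of the pre-processing steps above; the function names and the exact per-band handling are illustrative assumptions.

```python
import numpy as np

def clip_and_normalize(bands: np.ndarray) -> np.ndarray:
    """Step 3: clip the lowest/highest 2% of values per band and rescale to [0, 1].
    `bands` is a (C, 256, 256) array: RGB for Sen2/Land8, VV/VH for Sen1."""
    out = np.empty_like(bands, dtype=np.float32)
    for c in range(bands.shape[0]):
        lo, hi = np.percentile(bands[c], [2, 98])
        out[c] = (np.clip(bands[c], lo, hi) - lo) / (hi - lo + 1e-8)
    return out

def sen1_to_three_channels(vv_vh: np.ndarray) -> np.ndarray:
    """Step 2 for Sentinel-1: append a mock band of zeros so the (2, H, W)
    VV/VH stack becomes a (3, H, W) image."""
    return np.concatenate([vv_vh, np.zeros_like(vv_vh[:1])], axis=0)
```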
### Training Network
To detect the deforested area in this challenge, we tried several networks using MMSegmentation [6], which is an open-source semantic segmentation toolbox based on PyTorch. Through multiple experiments, we discovered that Mask2Former [5] is the most suitable network for this challenge. In addition, there were differences in pixel accuracy depending on the backbone of the Mask2Former network, as shown in Tab. 2. Therefore, we selected the best backbone network for each satellite.
In this challenge, there are two detection classes. The deforested area, which is the target, is labeled as class 1, while the forested/other area is labeled as class 0. To compensate for the class imbalance, we adopted the combination of
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Satellite** & **Time Range** & **Bands** & **Resolution** \\ \hline
**Landsat 5** & 1984-2012 & 8 & 30 m \\
**Landsat 8** & 2013-2021 & 9 & 30 m \\
**Sentinel-1** & 2014-2021 & 2 & 10 m \\
**Sentinel-2** & 2018-2021 & 12 & 10 m \\
**Labeling** & 2016-2021 & 1 & 10 m \\ \hline \hline \end{tabular}
\end{table}
Table 1: Satellites and data specification.
Figure 1: (a) Deforestation map of Brazil, (b) deforestation in test region until 2007 highlighted in yellow, and (c) deforestation in test region since 2007 highlighted in red.
Binary Cross-Entropy loss [13] and the Dice loss [11]. In addition, the model is trained with the AdamW optimizer, and the learning rate is adjusted by checking the validation loss every two epochs.
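A minimal PyTorch sketch of the combined objective described above is given below (our illustration; the relative weighting and the smoothing constant are assumptions, as the paper does not report them).

```python
import torch
import torch.nn.functional as F

def bce_dice_loss(logits: torch.Tensor, target: torch.Tensor,
                  bce_weight: float = 0.5, eps: float = 1.0) -> torch.Tensor:
    """Combined Binary Cross-Entropy + Dice loss for binary segmentation.
    `logits` and `target` have shape (N, 1, H, W); `target` holds 0/1 labels."""
    bce = F.binary_cross_entropy_with_logits(logits, target.float())
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    union = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()
    return bce_weight * bce + (1.0 - bce_weight) * dice
```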
### Post-processing
The deforestation estimation test queries consist of satellite images captured at specific times and locations, expressed in latitude and longitude coordinates. However, there are many cases where no satellite image is available for the given query, and in some cases only one or two images are available, as shown in Fig. 3. In addition, test image data of the same region at different times are provided. Therefore, to address the absence of image data or insufficient information at the given query, we leveraged time-series data that incorporates images captured at the same location during similar periods. For this post-processing, the deforestation estimation outputs for images from the months adjacent to a specific query are generated using a different network for each of Sen1, Sen2, and Land8. To emphasize the images captured in the current month, they are assigned higher weights than those of the preceding and succeeding months.
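A possible implementation of this weighted temporal aggregation is sketched below; the specific weights are our assumption, since the paper only states that the current month receives a higher weight.

```python
import numpy as np

def aggregate_adjacent_months(preds_by_month, weights=(0.25, 0.5, 0.25)):
    """Weighted average of per-pixel deforestation probabilities.
    `preds_by_month` is [previous, current, next]; each entry is a (possibly
    empty) list of (H, W) probability maps. Months without imagery are skipped."""
    num, den = 0.0, 0.0
    for month_preds, w in zip(preds_by_month, weights):
        for p in month_preds:
            num = num + w * p
            den += w
    return num / den if den > 0 else None
```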
On the other hand, clouds in the image can lead to misleading results when estimating deforestation areas. To enhance the deforestation estimation performance, we exclude images that contain a significant amount of clouds. In particular, it was observed that the accuracy of the estimation decreased for Land8 and Sen2 images with a significant presence of clouds, as shown in Fig. 4. This was attributed to the selection of RGB bands, unlike Sen1. Therefore, for Land8 and Sen2, which are affected by clouds, images with a significant cloud presence were removed. The removal criterion was a simple rule: pixels in which all RGB values exceeded 160 were classified as cloud, and an image was discarded when such pixels covered more than 50% of it. In the case of Sen1 images, no cloud removal was performed because they were acquired using SAR.
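The cloud-rejection rule can be written compactly as follows (a sketch under the stated thresholds of 160 and 50%; the exact band scaling used by the team is an assumption):

```python
import numpy as np

def is_cloud_covered(rgb: np.ndarray, value_thresh: int = 160,
                     ratio_thresh: float = 0.5) -> bool:
    """Return True if the optical image should be discarded.
    A pixel is counted as cloud when all three RGB values exceed `value_thresh`;
    the image is rejected when such pixels cover more than `ratio_thresh` of it.
    `rgb` has shape (3, H, W) with raw (un-normalized) pixel values."""
    cloud_mask = np.all(rgb > value_thresh, axis=0)
    return float(cloud_mask.mean()) > ratio_thresh
```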
Despite being produced by the same network, some of the prediction results were completely black (no deforestation), while others presented plausible deforestation results. Therefore, two-step filtering was applied to the output results to ensure the overall detection performance, as shown in Fig. 5. Clear outliers outside the three-sigma range (\(\mu\pm 3\sigma\)) were removed in the first filtering step. The second filtering step used the one-sigma range (\(\mu\pm\sigma\)) of the predicted deforestation percentage in the images. Here, \(\mu\) is the mean and \(\sigma\) is the standard deviation of the deforestation ratio over the predicted outputs.
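The two-step filtering can be sketched as below (our illustration; the fallback behaviour when every mask is rejected is an assumption):

```python
import numpy as np

def sigma_filter(masks, n_sigma: float):
    """Keep masks whose deforestation ratio lies within mean +/- n_sigma * std
    of the ratios computed over all candidate masks."""
    ratios = np.array([float(m.mean()) for m in masks])
    mu, sigma = ratios.mean(), ratios.std()
    kept = [m for m, r in zip(masks, ratios) if abs(r - mu) <= n_sigma * sigma]
    return kept or masks  # assumed fallback: keep everything if all are rejected

# First remove clear outliers (3-sigma), then refine (1-sigma):
# filtered = sigma_filter(sigma_filter(masks, 3.0), 1.0)
```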
Using the filtered output images from the previous step, all images were averaged to create a single binary image. In this image, each pixel was assigned a value of 1 if the probability of deforestation in that pixel exceeded a specific threshold (we set the threshold at 40%). Conversely, if the probability was below the threshold, the pixel value was assigned as 0.
Finally, the noise in the binary image is removed with the morphological opening operation \(\circ\) described in Eq. (1), which is a dilation of the erosion result.
\[I\circ M=(I\ominus M)\oplus M, \tag{1}\]
where \(I,M\) are the original image and a structuring element, and \(\ominus\), \(\oplus\) are the erosion and dilation operations, respectively. The erosion operation eliminates small objects, and the dilation restores the size and shape of the remaining objects in the image.
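The final fusion and denoising steps may be sketched as follows (the 40% threshold comes from the text; the 3x3 structuring element is an assumption):

```python
import numpy as np
from scipy import ndimage

def fuse_and_clean(masks, prob_thresh: float = 0.4, struct_size: int = 3) -> np.ndarray:
    """Average the filtered masks, binarize at `prob_thresh`, and remove small
    noisy components with a morphological opening (erosion followed by dilation)."""
    prob = np.mean(np.stack(masks, axis=0), axis=0)
    binary = prob > prob_thresh
    structure = np.ones((struct_size, struct_size), dtype=bool)
    return ndimage.binary_opening(binary, structure=structure)
```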
## 4 Results
The test set comprises 1,000 queries spanning from August 2016 to August 2021, covering 135 regions. The amount of image information provided for each query was inconsistent. Therefore, we focused on the post-processing to improve the pixel accuracy. To evaluate the post-processing performance, each method was assessed via the evaluation website. In particular, with the cloud removal post-processing, the pixel accuracy increased up to 90.546. In addition, when only a few satellite images are available at the given query time, considering time-series data increases pixel accuracy.
With cloud removal and temporally adjacent image data in the post-processing procedure, as described in Sec. 3.3, the final detection performance has been improved over the initial results. As a result, the proposed method finally achieves
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Method** & **ResNet-50** & **Swin-L** \\ \hline Landsat 8 & 79.445 & 80.84 \\ Sentinel-1 & 82.13 & 79.42 \\ Sentinel-2 & 78.145 & 77.41 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Pixel accuracy comparison according to backbone network
Figure 2: Mask2Former network architecture [5].
91.13 pixel accuracy, 0.88 F1-score and 0.81 IoU for the test set as represented in Tab. 3.
## 5 Conclusion
In the MultiEarth 2023 deforestation estimation challenge, three types of satellite imagery (Sentinel-1, Sentinel-2, and Landsat 8) are provided as a multi-modal dataset. The MMSegmentation framework is used to adopt the latest deep neural networks and diverse augmentation libraries. Since data augmentation libraries mostly support three-channel images, we select the RGB bands of the satellite images. Two different backbones for the Mask2Former method are compared, and the better backbone is selected for each satellite's data. Finally, post-processing is performed to remove cloud-covered images, filter outlier prediction results, average them with previous- and next-month data, and apply denoising operations. The proposed method achieves the
Figure 4: Post processing for cloud removal.
Figure 3: Post processing for the adjacent month.
best scores in all evaluation metrics (pixel accuracy, F1, and IoU).
|
2302.03691 | $\mathscr Q$-Sets and Friends: Regarding Singleton and Gluing
Completeness | This work is largely focused on extending D. Higgs' $\Omega$-sets to the
context of quantales, following the broad program of U. H\"ohle, we explore the
rich category of $\mathscr Q$-sets for strong, integral and commutative
quantales, or other similar axioms. The focus of this work is to study the
different notion of 'completeness' a $\mathscr Q$-set may enjoy and their
relations, completion functors, resulting reflective subcategories, their
relations to relational morphisms. We establish the general equivalence of
singleton complete $\mathscr Q$-sets with functional morphisms and the category
of $\mathscr Q$-sets with relational morphisms; we provide two
characterizations of singleton completeness in categorical terms; we show that
the singleton complete categorical inclusion creates limits. | José Goudet Alvim, Caio de Andrade Mendes, Hugo Luiz Mariano | 2023-02-06T21:26:08Z | http://arxiv.org/abs/2302.03691v1 | # \(\mathscr{Q}\)-Set \(\mathscr{E}\) Friends -
###### Abstract
This work is largely focused on extending D. Higgs' \(\Omega\)-sets to the context of quantales. Following the broad program of [7], we explore the rich category of \(\mathscr{Q}\)-sets for strong, integral and commutative quantales, or other similar axioms. The focus of this work is to study the different notions of "completeness" a \(\mathscr{Q}\)-set may enjoy and their relations, completion functors, the resulting reflective subcategories, and their relations to relational morphisms.
We establish the general equivalence of singleton complete \(\mathscr{Q}\)-sets with functional morphisms and the category of \(\mathscr{Q}\)-sets with relational morphisms; we provide two characterizations of singleton completeness in categorical terms; we show that the singleton complete categorical inclusion creates limits.
###### Contents
* 1 Introduction
* 2 Preliminaries: Quantales
* 3 Preliminaries on \(\mathscr{Q}\)-Sets
* 3.1 Some Examples
* 3.2 The Underlying Graph Functor
* 4 Gluing Completeness
* 4.1 Some Results about Gluings and Compatible Families
* 4.2 Gluing-Completion
* 5 Singleton/Scott-Completeness
* 5.1 Basics about Singletons
* 5.2 Scott Completion
* 6 Connection between Completeness Conditions
* 6.1 Scott-Completeness and Relational Morphisms
## 1 Introduction
### History _&_ Motivation
The notion of a sheaf on a topological space depends only on the (complete) lattice of the open sets of the space; thus it is straightforward to define sheaves for "spaces without points", that is, over a locale \((H,\leq)\) (see [3]).
In the 1970s, the topos of sheaves over a locale (= complete Heyting algebra) \(\mathbb{H}\) was described, alternatively, as a category of \(\mathbb{H}\)-sets [5]. More precisely, in [3], there were three categories whose objects were locale valued sets that are equivalent to the category of sheaves over a locale \(\mathbb{H}\). Two different notions of separability and completeness have been proposed. On the one hand, the traditional notions of these properties in \(\mathbf{Sh}(\mathbb{H})\) can be translated to appropriate definitions in \(\mathbb{H}\)-sets. In addition, Scott's notion of singletons, a definition that is inspired by the ordinary singleton set, leads to alternative notions of completeness and separability. Despite some folkloric misconceptions about those notions, a simple counter-example (in a finite Boolean algebra) shows that these two definitions of completeness are not logically equivalent - and do not give rise to equivalent full subcategories either.
This wealth of definitions and notions, however, has had the unfortunate effect of muddying any discussion concerning those kinds of objects. A veritably deep folklore has taken root in the field which hinders careful thought and frightens - almost to the point of panic - anyone who is paying attention.
Nevertheless, there is a non-commutative and non-idempotent generalization of locales called "quantales", introduced by C.J. Mulvey [10]. Quantales show up in logic [17], and in the study of \(C^{*}\)-algebras [14].
Many notions of sheaves over a quantale and of quantale-valued sets are studied in many works ([16], [2], [**mulveyquantale**], [9], [8], [4], [7], [6], [12], [13], [15]). In many cases, the base quantales are _right-sided and idempotent_. Herein we continue our study of quantale-valued sets on _commutative and semicartesian_ quantales, initiated in [1]. Our approach is similar to the last one, but since every idempotent semicartesian quantale is a locale (Proposition 2.1), our axioms and theirs are orthogonal in some sense.
The goal of the present work is to examine these two notions of completeness (and separability): (i) via (unique) gluing of compatible families (gluing completeness of \(\mathscr{Q}\)-sets); (ii) via (unique) representation of (_strict_) singletons (Scott completeness); together with their relations and the properties of the full subcategories they determine.
### Main results and the paper's structure
We have shown that:
1. Scott-completeness implies gluing-completeness;
2. Both full subcategories of gluing-complete and Scott-complete \(\mathscr{Q}\)-sets with functional morphisms are reflective;
3. For "strong" [7] quantales, it makes sense to speak of "the Scott completion" of a given \(\mathscr{Q}\)-set;
4. Every \(\mathscr{Q}\)-set is relationally isomorphic to its own Scott completion, and the completion is invariant under functional isomorphisms;
5. For strong quantales, the category of Scott-complete \(\mathscr{Q}\)-sets with relational morphisms and the one with functional morphisms are isomorphic.
6. For strong quantales, \(X\) being Scott-complete is equivalent to its functional representable functor "being the same" as its relational representable functor.
7. For strong quantales, \(X\) being Scott-complete is equivalent to its representable functor being "invariant" under the completion endofunctor.
8. For strong quantales, the full inclusion of Scott-complete \(\mathscr{Q}\)-sets not only preserves limits (due to it being a right adjoint) but also _creates_ them. And specifically, limits of complete \(\mathscr{Q}\)-sets can be computed "pointwise".
## 2 Preliminaries: Quantales
**Definition 2.1**::
A _quantale_ is a type of structure \(\mathscr{Q}=(|\mathscr{Q}|,\leq,\otimes)\) for which \((|\mathscr{Q}|,\leq)\) is a complete lattice; \((|\mathscr{Q}|,\otimes)\) is a semigroup1; and, moreover, \(\mathscr{Q}\) is required to satisfy the following distributive laws: for all \(a\in\mathscr{Q}\) and \(B\subseteq\mathscr{Q}\),
Footnote 1: _i.e._ the binary operation \(\otimes:\mathscr{Q}\times\mathscr{Q}\to\mathscr{Q}\) (called multiplication) is associative.
\[a\otimes\left(\bigvee_{b\in B}b\right) =\bigvee_{b\in B}\left(a\otimes b\right)\] \[\left(\bigvee_{b\in B}b\right)\otimes a =\bigvee_{b\in B}\left(b\otimes a\right)\]
We denote by \(\mathrm{E}\,\mathscr{Q}\) the subset of \(\mathscr{Q}\) comprised of its idempotent elements.
**Remark 2.1**::
1. In any quantale \(\mathscr{Q}\) the multiplication is increasing in both entries;
2. Since \(\bot\) is also the supremum of \(\emptyset\), for any \(a\), \(a\otimes\bot=\bot=\bot\otimes a\)
3. Since \(\top\) is \(\sup\mathscr{Q}\), then \(\top\otimes\top=\sup_{a,b}a\otimes b=\sup\mathbf{img}\otimes\)
**Remark 2.2**::
If \((\mathscr{Q},\leq)\) is a complete lattice for which the binary infimum satisfies the above distributive laws, the resulting quantale has \(\top\) as its unit and is - in fact - a locale. Conversely, every locale is a unital quantale in such a manner.
**Definition 2.2**::
A quantale \(\mathscr{Q}\) is said to be
* _bidivisible_ when \[a\leq b\implies\exists\lambda,\rho:a\otimes\rho=b=\lambda\otimes a\] left (right) divisibility means to drop the \(\rho\) (\(\lambda\)) portion of the axiom.
* _integral_ when \(\top\otimes a=a=a\otimes\top\). We say it is right-sided when the right equality holds, and left-sided when the left equality holds.
* _unital_ when \(\otimes\) has a unit;
* _semicartesian_ when \(a\otimes b\leq a\wedge b\)
* _commutative_ when \(\otimes\) is;
* _idempotent_ when \(a\otimes a=a\);
* _linear and strict_ when \(\leq\) is a linear order and, for all \(a,b,c\), \[a\neq\bot\ \text{ and }\ \left(a\otimes b=a\otimes c\ \text{ or }\ b\otimes a=c\otimes a\right)\implies b=c\]
* _strong_ when for any \(e\) and \(A\) [7, cf. p. 30], \[e=e\otimes e\implies\left(e\leq\bigvee_{a\in A}a\implies e\leq\bigvee_{a\in A}a\otimes a\right)\]
We offer the following diagram to explain some of the relations between those definitions:
[Diagram of implications between the properties above, involving \((R|L)\)-sidedness and \((L|R)\)-divisibility among others.]
**Example 2.1:**
Locales are - perhaps - the best example of quantales that are commutative, idempotent, integral (and hence both semicartesian and right-sided), divisible and strong (both trivially). Among which, and of special significance to Sheaf Theory, are the locales of open subsets of a topological space \(X\), where the order relation is given by the inclusion, the supremum is the union, and the finitary infimum is the intersection.
**Example 2.2:**
We list below some examples of unital quantales that are not locales:
1. The extended half-line \([0,\infty]\) with order the inverse order - \(\geq\) -, and the usual sum of real numbers as the multiplication. Since the order relation is \(\geq\), the top element is \(0\) and the bottom elements is \(\infty\). We call this the Lawvere quantale due to its relation to Lawvere spaces (related to metric spaces);
2. The extended natural numbers \(\mathbb{N}\cup\{\infty\}\), as a restriction of the the Lawvere quantale (related to distance on graphs);
3. The set \(\mathcal{I}(R)\) of ideals of a commutative and unital ring \(R\) with order \(\subseteq\), and the multiplication as the multiplication of ideals. The supremum is the sum of ideals, the top element is \(R\) and the trivial ideal is the bottom;
4. The set \(\mathcal{R}\mathcal{I}(R)\) of right (or left) ideals of an unital ring \(R\) with the same order and multiplication of the above example. Then the supremum and the top and the bottom elements are also the same of \(\mathcal{I}(R)\);
5. The set of closed right (or left) ideals of a unital \(C^{*}\)-algebra, the order is the inclusion of closed right (or left) ideals, and the multiplication is the topological closure of the multiplication of the ideals.
For more details and examples we recommend [14].
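To make the first of these examples concrete, here is a minimal Python sketch (ours, purely illustrative; the function names are not from the paper) of the Lawvere quantale, with a numerical spot-check of the distributive law:

```python
import math
import random

# The Lawvere quantale ([0, inf], >=, +): the order is reversed, so the "join"
# (supremum w.r.t. >=) is the usual infimum, the top element is 0 and the
# bottom element is infinity.
TOP, BOT = 0.0, math.inf

def tensor(a: float, b: float) -> float:
    """The multiplication: ordinary addition of extended reals."""
    return a + b

def join(values) -> float:
    """Supremum w.r.t. >=, i.e. the usual infimum; the empty join is bottom."""
    return min(values, default=BOT)

# Spot-check the distributive law: a (x) \/B = \/{a (x) b : b in B}.
a = random.uniform(0, 10)
B = [random.uniform(0, 10) for _ in range(5)]
assert math.isclose(tensor(a, join(B)), join(tensor(a, b) for b in B))
```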
**Remark 2.3:**
The first three examples we introduced in 2.2 are commutative and integral quantales. The last two examples are neither commutative nor semicartesian. The fourth is not idempotent but the fifth is, and both are right-sided (resp. left-sided) quantales (cf. [14]).
The main examples of strong quantales are Heyting algebras, \(([0,1],\leq,\cdot)\), and strict linear quantales. Some MV-algebras, like Chang's \(([0,1],\wedge,\vee,\oplus,\otimes,0,1)\), are not strong.
**From now on, we assume all quantales to be commutative.**
**Definition 2.3:**
Let \(\mathscr{Q}\) be a a quantale, we define an alternative partial order \(\preceq\) given by
\[a\preceq b\iff a=a\otimes b\]
**Remark 2.4**::
1. Let \(\mathscr{Q}\) be a unital quantale, then it is integral iff it is semicartesian;
2. \(a\preceq b\leq c\) implies \(a\preceq c\);
3. If \(e\in\operatorname{E}\mathscr{Q}\), \((e\preceq a)\iff(e\leq a)\).
**Proposition 2.1**::
If \(\mathscr{Q}\) is semicartesian and idempotent, it is in fact a complete distributive lattice and \(\otimes=\wedge\). In other words, it is a locale.
Proof.: Suppose \(\mathscr{Q}\) is in fact idempotent, we have - because \(\otimes\) is increasing in both arguments - that
\[a\leq b\implies a\leq c\implies a\leq b\otimes c\]
Hence, if \(a\) is less than both \(b\) and \(c\), then it must be smaller than \(b\otimes c\); but since \(\mathscr{Q}\) is semicartesian, \(\otimes\leq\wedge\). This means that \(b\otimes c\) is a lower bound for \(b\) and \(c\), but what we had gotten from idempotency means it is the greatest such lower bound.
Thus the multiplication satisfies the universal property of infima. The above is just a particular case of [11, Proposition 2.1].
## 3 Preliminaries on \(\mathscr{Q}\)-Sets
**Remark 3.1**::
Hereon we are working exclusively with commutative semicartesian quantales, as opposed to ones without those properties.
Given a quantale \(\mathscr{Q}\), one may form - roughly speaking - a \(\mathscr{Q}\)-space, wherein distances between points are measured by elements of \(\mathscr{Q}\) as opposed to - say - \([0,\infty)\) as we often do. This definition is made precise in the notion of a \(\mathscr{Q}\)-set.
**Definition 3.1**::
A \(\mathscr{Q}\)-set is a set endowed with a \(\mathscr{Q}\)-distance operation usually denoted by \(\delta\); _i.e._, a \(\mathscr{Q}\)-set is a set \(X\) with a map \(\delta:X^{2}\to\mathscr{Q}\) satisfying:
1. \(\delta(x,y)=\delta(y,x)\);
2. \(\delta(x,y)\otimes\delta(y,z)\leq\delta(x,z)\);
3. \(\delta(x,x)\otimes\delta(x,y)=\delta(x,y)\).
and it is usual to denote \(\delta(x,x)\) by simply the "extent of \(x\)" written as \(\operatorname{E}x\).
A couple of things might jump out to the reader in the above definition. (i) \(\delta\) is symmetric, even though we have thrown out all but the vaguest notions
tying ourselves to metric spaces; (ii) Why is the triangle inequality upside down? (iii) \(\operatorname{E}x\otimes\delta(x,y)=\delta(x,y)\), why not just ask that \(\operatorname{E}x=\top\)?
Those questions are all valid - and answering the first and last ones differently has been done in the past and is the main difference between \(\mathscr{Q}\)-sets and \(\mathscr{Q}\)-enriched categories from a definitional perspective. The question of order being inverse is more one of sanity: since we treat a \(\mathscr{Q}\)-set as a set with \(\mathscr{Q}\)-valued equality, it makes sense to think that \(\operatorname{E}x\) is the maximally valid equality to \(x\) and hence the triangular inequality needs to be turned upside down - and turned into the transitivity of equality.
**Remark 3.2**::
When we speak of properties of the type \(P(\vec{x})\leq Q(\vec{x})\) in \(\mathscr{Q}\)-sets, it is often more insightful to think of the logically equivalent (but notationally less helpful) statement
\[P(\vec{x})\to Q(\vec{x})\ (=\top)\]
There are two main category structures that one can canonically endow the collection of all \(\mathscr{Q}\)-sets with. One is taking maps to be co-contractions (_i.e._ they make \(\delta\) bigger) - the other is to consider well behaved \(\mathscr{Q}\)-valued relations between the underlying sets.
**Definition 3.2**::
A functional morphism \(f:X\to Y\) is a function \(f\) between the underlying sets of \(X\) and \(Y\) such that \(f\) increases \(\delta\) and preserves \(\operatorname{E}\); that is to say
\[\delta_{X}\leq\delta_{Y}\circ(f\times f)\]
\[\operatorname{E}_{X}=\operatorname{E}_{Y}\circ f\]
A relational morphism \(\varphi:X\to Y\) is a function \(\varphi:|X|\times|Y|\to\mathscr{Q}\) satisfying
\[\delta(x,x^{\prime})\otimes\varphi(x,y) \leq\varphi(x^{\prime},y)\] \[\varphi(x,y)\otimes\delta(y,y^{\prime}) \leq\varphi(x,y^{\prime})\] \[\varphi(x,y)\otimes\varphi(x,y^{\prime}) \leq\delta(y,y^{\prime})\] \[\varphi(x,y)\otimes\operatorname{E}y =\varphi(x,y)\] \[\operatorname{E}x\otimes\varphi(x,y) =\varphi(x,y)\] \[\bigvee_{y\in Y}\varphi(x,y) =Ex\]
The reader should beware we don't often distinguish between \(\delta_{X}\) and \(\delta_{Y}\) and instead rely on suggestively named variables so as to indicate their type and hence the \(\delta\) they refer to. In other words, the reader is expected to be familiar with Koenig lookup2.
Footnote 2: Which, to quote a great website – cppreference.com –
We denote by \(\mathscr{Q}\)-\(\mathbf{Set}_{r}\) the category of \(\mathscr{Q}\)-sets and relational morphisms between them and by \(\mathscr{Q}\)-\(\mathbf{Set}_{f}\) the category with the same objects but functional morphisms between them instead.
**Proposition 3.1**::
It should be observed that both notions of morphism actually do form a category. For functional morphisms, we take composition to be the usual function composition - since functional morphisms are obviously closed under it - and the identity function as the identity. For relational morphisms, the identity on \(X\) becomes \(\delta_{X}\) and composition is the (perhaps obvious) relational composition:
\[[\psi\circ\varphi](x,z)=\bigvee_{y\in Y}\varphi(x,y)\otimes\psi(y,z)\]
Since proving that functional morphisms do indeed form a category would be trivial, we shall instead prove a stronger result afterwards.
Proof.: Firstly we should observe that \(\delta\) is indeed a relational morphism. This is easy enough, as the first three axioms are direct applications of the triangular inequality, the fourth and fifth are direct applications of the extension axiom of \(\mathscr{Q}\)-sets, and the last one is trivially true once we realize that
\[\delta(x,y)\leq\operatorname{E}x=\delta(x,x)\]
Once we have that \(\delta\) is indeed a morphism, we can wonder about the action of (pre-)composing with it. In which case we obtain
\[[\delta\circ\varphi](x,y) =\bigvee_{y^{\prime}\in Y}\varphi(x,y^{\prime})\otimes\delta(y^{ \prime},y)\] \[\geq\varphi(x,y)\otimes\delta(y,y)\] \[=\varphi(x,y)\]
on the other hand, we could have applied the appropriate relational morphism axiom inside the \(\bigvee\) thus getting that the composite was smaller than \(\varphi(x,y)\) instead. This proves they were in fact equal all along. The same goes for the other composite:
\[[\varphi\circ\delta](x,y) =\bigvee_{x^{\prime}\in X}\delta(x,x^{\prime})\otimes\varphi(x^{ \prime},y)\] \[\geq\delta(x,x)\otimes\varphi(x,y)\] \[=\varphi(x,y)\]
Hence it only remains to see that \(\circ\) as defined for relational morphisms is indeed a composition, in that it is associative and that the composite of two morphisms is a morphism. Firstly suppose that \(\varphi\) and \(\psi\) are composable
morphisms - we ought to show that their composite is also a morphism:
\[[\psi\circ\varphi](x,z)\otimes\delta(z,z^{\prime}) =\bigvee_{y}\varphi(x,y)\otimes\psi(y,z)\otimes\delta(z,z^{\prime})\] \[\leq\bigvee_{y}\varphi(x,y)\otimes\psi(y,z^{\prime})\] \[=[\psi\circ\varphi](x,z^{\prime})\]
And likewise, _mutatis mutandis_ one can show that axioms 1-2 and 4-5 hold. Axiom 6 can be seen to hold easily as well:
\[\bigvee_{z}[\psi\circ\varphi](x,z) =\bigvee_{y}\varphi(x,y)\otimes\bigvee_{z}\psi(y,z)\] \[=\bigvee_{y}\varphi(x,y)\otimes\operatorname{E}y\] \[=\bigvee_{y}\varphi(x,y)\] \[=\operatorname{E}x\]
Axiom 3 actually requires commutativity, which is unfortunately the first of many times it is very necessary to make use of it;
\[[\psi\circ\varphi](x,z)\otimes[\psi\circ\varphi](x,z^{\prime}) =\bigvee_{y}\bigvee_{y^{\prime}}\varphi(x,y)\otimes\psi(y,z) \otimes\varphi(x,y^{\prime})\otimes\psi(y^{\prime},z^{\prime})\] \[=\bigvee_{y}\bigvee_{y^{\prime}}\psi(y,z)\otimes\varphi(x,y) \otimes\varphi(x,y^{\prime})\otimes\psi(y^{\prime},z^{\prime})\] \[\leq\bigvee_{y}\bigvee_{y^{\prime}}\psi(y,z)\otimes\delta(y,y^{ \prime})\otimes\psi(y^{\prime},z^{\prime})\] \[\leq\bigvee_{y}\bigvee_{y^{\prime}}\psi(y,z)\otimes\psi(y,z^{\prime})\] \[\leq\bigvee_{y}\bigvee_{y^{\prime}}\delta(z,z^{\prime})=\delta(z, z^{\prime})\]
Associativity can obviously be seen to hold when we realize that one can rearrange the terms of \((\psi\circ\varphi)\circ\chi\) and \(\psi\circ(\varphi\circ\chi)\) to be in the form of
\[\bigvee_{x,y}\chi(w,x)\otimes\varphi(x,y)\otimes\psi(y,z)\]
**Definition 3.3**::
Instead of proving the category axioms for functional morphisms we promised to prove a stronger result - which is incidentally useful for another paper of ours
in the works - which is to prove that \(e\)-morphisms form a category (given a generic commutative unital quantale) and that functional morphisms form a wide subcategory of \(I\)-morphisms.
So, let us define \(e\)-morphisms: given an idempotent element \(e\) of \(\mathscr{Q}\), an \(e\)-morphism is a functional morphism "up to error \(e\)":
\[e\otimes\delta(x,x^{\prime})\leq\delta(f(x),f(x^{\prime}))\]
\[\operatorname{E}f(x)=e\otimes\operatorname{E}f(x)\]
**Proposition 3.2**::
We claim that the collection of \(\langle e,\varphi\rangle\) where \(\varphi\) is an \(e\)-morphism constitutes a category under the obvious composition laws. Furthermore, the identity function is a \(I\)-morphism where \(I\) is the unit of the quantale, and further still: \(I\)-morphisms are closed under composition and form a subcategory which is definitionally equal to \(\mathscr{Q}\)-\(\operatorname{\mathbf{Set}}_{f}\).
Proof.: Firstly, the obvious composition takes an \(e\)-morphism \(f\) and an \(e^{\prime}\)-morphism \(g\) to an \((e\otimes e^{\prime})\)-morphism \(g\circ f\). Associativity is due to functional (in \(\operatorname{\mathbf{Set}}\), that is) \(\circ\) associativity and the fact that \(\otimes\) makes \(\mathscr{Q}\) a semigroup. The fact that \(g\circ f\) is an \((e\otimes e^{\prime})\)-morphism is rather obvious and the proof is omitted.
The identity is evidently an \(I\)-morphism - and of course composing \(I\)-morphisms gives an \(I\otimes I=I\)-morphism.
### Some Examples
**Example 3.1** (Initial Object)::
The empty set is - vacuously - a \(\mathscr{Q}\)-set, and since morphisms are functions, it also happens to be the initial object.
**Example 3.2** (Terminal Object)::
The set of idempotent elements of \(\mathscr{Q}\), denoted \(\operatorname{E}\mathscr{Q}\) naturally has a structure of a \(\mathscr{Q}\)-set - given by \(\delta(a,b)=a\otimes b=a\wedge b\)3. It is trivial to see that \(\otimes\) satisfies all \(\mathscr{Q}\)-set laws. More interestingly, however, \(\operatorname{E}\mathscr{Q}\) must be the terminal object because \(\operatorname{E}e=e\): since functional morphisms preserve extents, one has that \(\operatorname{E}f(x)=\operatorname{E}x\) - however, \(\operatorname{E}f(x)=f(x)\) and thus \(f(x)=\operatorname{E}x\). This proves that there is at most one morphism \(X\to\operatorname{E}\mathscr{Q}\).
Footnote 3: This is the infimum in the subposet \(\operatorname{E}\mathscr{Q}\) that turns out to be a locale.
On the other hand, \(\delta(x,y)\leq\operatorname{E}x\wedge\operatorname{E}y\), which just happens to make \(x\mapsto\operatorname{E}x\) a functional morphism. And thus, \(\operatorname{E}\mathscr{Q}\) is the terminal \(\mathscr{Q}\)-set.
**Remark 3.3**::
One cannot use the whole of \(\mathscr{Q}\) in \(\operatorname{E}\mathscr{Q}\)'s stead, as the axiom \(\operatorname{E}x\otimes\delta(x,y)=\delta(x,y)\) would not hold. The only reason it holds in the above example is because for idempotents \(\otimes=\wedge\). However, one can obtain a \(\mathscr{Q}\)-set where the underlying set is \(\mathscr{Q}\) itself
**Example 3.3:**
Much akin to how Lawvere's quantale is a Lawvere space, a (integral and commutative) quantale \(\mathscr{Q}\) is a \(\mathscr{Q}\)-set. This is achieved with the following:
\[\delta(x,y)=(x\to y)\wedge(y\to x)\]
which is roughly equivalent to \(|x-y|\) for real numbers. This isn't necessarily the best \(\mathscr{Q}\)-set structure we can give them, as \(\operatorname{E}x=\top\) for any \(x\).
Ways to mitigate this phenomenon, which is especially strange for \(\bot\), involve taking into account idempotents above \(x\). An important quantalic property is the existence of an operation \((\_)^{-}\) taking an element \(x\) to the value \(\sup\left\{e\in\operatorname{E}\mathscr{Q}\ |\ e\preceq x\right\}\). Multiplying \(\delta(x,y)\) by \(x^{-}\otimes y^{-}\) guarantees - for instance - that the above construction coincides with the terminal object when \(\mathscr{Q}\) is a locale.
Another way to correct this, is to incorporate \(\operatorname{E}\mathscr{Q}\) more directly, considering the space with underlying set \(\mathscr{Q}\times\operatorname{E}\mathscr{Q}\) and \(\delta\) given by
\[\delta((x,e),(y,a))=a\otimes e\otimes\left[(x\to y)\wedge(y\to x)\right]\]
We write this \(\mathscr{Q}\)-set as \(\mathscr{Q}_{\operatorname{E}}\).
**Example 3.4:**
A construction that is explored in this work's sister-article [1] but deserves to be mentioned here in passing is \(X\boxtimes X\), given by the underlying set \(|X|\times|X|\) and with \(\delta\) given by the product of the \(\delta\)'s of the coordinates. The reason this construction is relevant here is that \(\delta\) defines a map \(X\boxtimes X\to\mathscr{Q}_{\operatorname{E}}\) in a natural way:
\[(x,y)\mapsto(\delta(x,y),\operatorname{E}x\otimes\operatorname{E}y)\]
And this happens to be a functional morphism.
**Remark 3.4:**
Monomorphisms are always injective functions, and epimorphisms are always surjective.
**Example 3.5** (Regular Subobjects):
A monomorphism is regular when it is an equalizer of a pair of parallel arrows. Suppose \(f,g:A\to B\); one way to conceive of an equalizer is as the maximal subobject of \(A\) making the diagram commute. It is quite trivial to see that subobjects in general _must_ be subsets with a point-wise smaller \(\delta\). Hence, the largest subobject of \(A\) that equalizes the pair is simply the subset of \(A\) on which the functions agree, with the largest possible \(\delta\): that being \(A\)'s.
Hence, we see a pattern where equalizers - in general - are simply subsets with \(\delta\) coming from a restriction of the original object. Importantly, though, we refer to "regular subobjects" as monomorphisms that preserve \(\delta\) - as they have been equivalently characterized.
The skeptical reader might object that we have merely shown that regular monos preserve \(\delta\), as opposed to showing this to be a sufficient condition.
In that case, given that monos are injective functions, \(\delta\)-preserving monos are simply subsets carrying their superset's \(\delta\). Consider one such mono \(m:A\hookrightarrow X\), with \(A\subseteq X\), and take \((X\amalg X)/\sim\), with \(\sim\) defined so as to identify both copies of \(A\), equipped with
\[\delta(\left[\!\left[(x,i)\right]\!\right],\left[\!\left[(y,j)\right]\!\right] )=\begin{cases}\delta(x,y),&i=j\\ \bigvee_{a\in A}\delta(x,a)\otimes\delta(a,y),&i\neq j\end{cases}\]
There are two obvious inclusions of \(X\) into this set, namely the upper branch and the lower branch, and they coincide exactly on the section corresponding to \(A\). Hence the equalizer of those two arrows must be the subset inclusion of \(A\) into \(X\), and its \(\delta\) must be the biggest possible, namely the restriction of \(X\)'s, so as to remain a morphism.
**Example 3.6**::
Suppose \((X,d)\) is a pseudo-metric space; then \((X,d)\) is a \([0,\infty]\)-set (with \(\delta=d\)) where \(\delta(x,y)\neq\bot=\infty\) for all \(x,y\in X\).
**Example 3.7**::
Given a nonempty index set \(I\), we have a \(\mathscr{Q}\)-set \(\coprod_{i\in I}\top\), given by
\[\delta((e,i),(e^{\prime},i^{\prime}))=\begin{cases}e\wedge e^{\prime},&\text{ if }i=i^{\prime};\\ \bot,&\text{otherwise}.\end{cases}\]
**Example 3.8**::
Given a commutative ring \(A\), let the set of its (left) ideals be denoted \(\mathscr{I}_{A}\). \(\mathscr{I}_{A}\) is a quantale. Given a left \(A\)-module \(M\), we can endow it with the structure of a \(\mathscr{I}_{A}\)-set:
\[\delta(x,y)=\bigvee\left\{I\in\mathscr{I}_{A}\ |\ I\cdot x=I\cdot y\right\}\]
In fact, that supremum is attained by a particular ideal. Moreover, \(\operatorname{E}x=A=\max\mathscr{I}_{A}\).
**Example 3.9**::
Suppose that \(\mathscr{Q}\) is a quantale with "idempotent upper approximations"4:
Footnote 4: In [15] are described sufficient conditions for \(\mathscr{Q}\) to have such a property.
\[\forall q\in\mathscr{Q}:\exists q^{+}\in\mathrm{E}\,\mathscr{Q}:q^{+}=\min \left\{e\in\mathrm{E}\,\mathscr{Q}\ |\ q\preceq e\right\}\]
Then
\[\delta(x,y)=\begin{cases}x\otimes y,&x\neq y;\\ x^{+},&x=y.\end{cases}\]
defines a \(\mathscr{Q}\)-set structure on \(\mathscr{Q}\) itself.
### The Underlying Graph Functor
**Definition 3.4**::
There is a functor \(\mathcal{G}_{\mathbb{R}}:\mathscr{Q}\text{-}\mathbf{Set}_{f}\to\mathscr{Q}\text{-} \mathbf{Set}_{r}\) which is the underlying _graph_ functor, because it takes functional morphisms to their graph relation. More precisely, \(\mathcal{G}_{\mathbb{R}}(X)=X\) and given \(f:X\to Y\)
\[(\mathcal{G}_{\mathbb{R}}\,f)(x,y)=\delta(f(x),y)\]
**Proposition 3.3**::
As defined above, \(\mathcal{G}_{\mathbb{R}}\,f\) is indeed a relational morphism and \(\mathcal{G}_{\mathbb{R}}\) is indeed a functor.
Proof.: It is clear, from \(f\) being a functional morphism, that \(\mathcal{G}_{\mathbb{R}}\,f\) satisfies at least the \(\delta\) and \(\operatorname{E}\) axioms for relational morphisms; the \((\Sigma)\) axiom holds because of the triangular inequality; the strictness axiom holds since taking \(b=f(a)\) gives \(\operatorname{E}f(a)\), which is \(\operatorname{E}a\).
Regarding functoriality, \(\mathcal{G}_{\mathbb{R}}\,\mathbf{id}=\delta\) and hence the identity in the relational category; moreover,
\[(\mathcal{G}_{\mathbb{R}}\,g)\circ(\mathcal{G}_{\mathbb{R}}\,f)(x,z) =\bigvee_{y}(\mathcal{G}_{\mathbb{R}}\,f)(x,y)\otimes(\mathcal{G}_{\mathbb{R}}\,g)(y,z)\] \[=\bigvee_{y}\delta(f(x),y)\otimes\delta(g(y),z)\] \[=\delta(g\circ f(x),z)\] \[=\mathcal{G}_{\mathbb{R}}(g\circ f)(x,z)\]
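As a concrete sanity check on the composite computed above, the following Python sketch (the quantale \(\mathscr{P}(\{1,2\})\), the two small \(\mathscr{Q}\)-sets, and the maps are our illustrative choices, not the text's) verifies numerically that the relational composite \(\bigvee_{y}\delta(f(x),y)\otimes\delta(g(y),z)\) coincides with \(\delta(g\circ f(x),z)\).

```python
# Illustrative check (names and objects are ours) of the composition computation,
# over the locale Q = P({1,2}) with ⊗ = ∧.  X is the coproduct of two copies of the
# terminal Q-set (cf. Example 3.7 below); Y = E Q; f and g preserve extents.
from itertools import combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

Q = powerset({1, 2})
bot = frozenset()
tensor = lambda a, b: a & b
sup = lambda vals: frozenset().union(*vals) if vals else bot   # joins are unions

Y = Q
dY = lambda a, b: a & b                                        # delta on E Q
X = [(e, i) for e in Q for i in (0, 1)]
dX = lambda p, q: (p[0] & q[0]) if p[1] == q[1] else bot       # delta on the coproduct

f = lambda p: p[0]        # f : X -> Y, (e, i) |-> e
g = lambda e: (e, 0)      # g : Y -> X, e |-> (e, 0)

graph_f = lambda x, y: dY(f(x), y)       # (G_R f)(x, y) = delta(f(x), y)
graph_g = lambda y, z: dX(g(y), z)

for x in X:
    for z in X:
        composite = sup([tensor(graph_f(x, y), graph_g(y, z)) for y in Y])
        assert composite == dX(g(f(x)), z)   # relational composite = graph of g ∘ f
print("graph of the composite recovered from the relational composite")
```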
## Interlude
In the general course of studying \(\Omega\text{-}\mathbf{Set}\) for a frame/locale \(\Omega\) one is bound to eventually come across a certain notion called "completeness". Completeness is, largely speaking, a property necessary to transition from the \(\Omega\text{-}\mathbf{Set}\) world to the \(\mathbf{Sh}\Omega\) world [3, prop. 2.7.10, cf. def. 2.9.1]. This notion of completeness we call Scott-completeness, and it refers to objects called singletons.
Sheaves, almost by definition, are about gluing partial data - in that morally speaking, compatible elements over a covering admit exactly one gluing which lies over the covered open. This gives a second natural notion of completeness - which we have come to call Gluing-completeness.
In the next sections, therefore, we shall give the precise definitions pertaining to both notions and make explicit some of the relations between them.
## 4 Gluing Completeness
**Definition 4.1** (Compatible Family):
Given a \(\mathscr{Q}\)-set \(X\), a family \(A\) of its elements is said to be _compatible_ when \(\forall a,b\in A\),
\[\delta(a,b)=\operatorname{E}a\otimes\operatorname{E}b\]
Naturally, since extents are idempotent, and for idempotents \(\otimes=\wedge\), this is the same as saying \(\delta(a,b)=\operatorname{E}a\wedge\operatorname{E}b\).
Note that the empty family of a \(\mathscr{Q}\)-set is vacuously compatible.
In light of our usual interpretation of \(\delta\) as 'the place where two fragments of data agree', this says that \(a\) and \(b\) agree in the intersection of their extents - their domains of definition and degrees of certainty, so to speak. Hence, they are compatible.
**Definition 4.2** (Gluing Element):
Given a \(\mathscr{Q}\)-set \(X\) and a family of its elements \(A\), we say that \(x_{A}\in X\) is a _gluing of \(A\)_ or that _it glues_\(A\) when \(\forall a\in A\),
\[\delta(a,x_{A})=\operatorname{E}a\] \[\operatorname{E}x_{A}=\bigvee_{a\in A}\operatorname{E}a\]
This definition then captures the idea that an amalgamation of local data agrees integrally with the data it aggregates, and - importantly - does not make choices or provide information not present in the family \(A\) it glues.
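Both definitions are easy to test mechanically on a finite example. The sketch below (Python; the quantale and all names are illustrative assumptions of ours) works inside \(\operatorname{E}\mathscr{Q}\) over the locale \(\mathscr{P}(\{1,2\})\), decides whether a family is compatible, and searches for its gluings - finding, in accordance with Example 4.1 below, that the gluing of a compatible family there is its supremum.

```python
# A small checker (illustrative names, quantale P({1,2}) with ⊗ = ∧) for
# compatibility (Definition 4.1) and gluings (Definition 4.2) inside E Q.
from itertools import combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

Q = powerset({1, 2})
tensor = lambda a, b: a & b
sup = lambda vals: frozenset().union(*vals) if vals else frozenset()

X = Q                                # the Q-set E Q: delta = ∧, E a = a
delta = tensor
extent = lambda a: a

def is_compatible(family):
    # Definition 4.1: delta(a, b) = E a ⊗ E b for all members a, b
    return all(delta(a, b) == tensor(extent(a), extent(b)) for a in family for b in family)

def gluings(family):
    # Definition 4.2: delta(a, x) = E a for all a, and E x = ⋁_a E a
    target = sup([extent(a) for a in family])
    return [x for x in X
            if extent(x) == target and all(delta(a, x) == extent(a) for a in family)]

A = [frozenset({1}), frozenset({2})]
print(is_compatible(A))   # True: delta({1},{2}) = {} = E{1} ⊗ E{2}
print(gluings(A))         # [frozenset({1, 2})] -- the supremum of the family
```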
**Proposition 4.1:**
If a family \(A\) admits a gluing element, it is compatible. Moreover, any gluing elements are "morally unique" - that is to say, they are \(\delta\)-equivalent.
Proof.: Suppose \(x_{A}\) glues \(A\); then let \(a\) and \(b\) be members of the family \(A\); observe that
\[\delta(x_{A},a) =\operatorname{E}a\] \[\delta(x_{A},b) =\operatorname{E}b\]
hence
\[\operatorname{E}a\otimes\operatorname{E}b=\delta(x_{A},a)\otimes\delta(x_{A},b)\]
but then,
\[\operatorname{E}a\otimes\operatorname{E}b\leq\delta(a,b)\]
but we know the following always holds
\[\delta(a,b)\leq\operatorname{E}a\otimes\operatorname{E}b\]
therefore they are the same.
This suffices for compatibility, now for the second claim: consider \(y\in X\) and let us compare two possible gluings - say \(x\) and \(x^{\prime}\).
\[\delta(x,y) =\delta(x,y)\otimes\operatorname{E}x\] \[=\bigvee_{a\in A}\delta(x,y)\otimes\operatorname{E}a\] \[=\bigvee_{a\in A}\delta(x,y)\otimes\operatorname{E}a\otimes \operatorname{E}a\] \[=\bigvee_{a\in A}\delta(x,y)\otimes\delta(x^{\prime},a)\otimes \delta(x,a)\] \[\leq\bigvee_{a\in A}\delta(x,y)\otimes\delta(x^{\prime},x)\] \[\leq\delta(x^{\prime},y)\]
since the same holds swapping \(x\) for \(x^{\prime}\) and vice versa, it follows that \(\delta(x,y)=\delta(x^{\prime},y)\) for _any_\(y\); hence, they are essentially the same.
**Definition 4.3** (\(\delta\)-equivalence)::
The above property is somewhat important, as - as often is the case - equality is _evil_ and equivalence is _good_. Two elements \(x,y\) of a \(\mathscr{Q}\)-set \(X\) are said to be _\(\delta\)-equivalent_ whenever either of the following equivalent conditions hold:
\[\forall z\in X:\delta(x,z)=\delta(y,z)\]
\[\delta(x,y)=\operatorname{E}x=\operatorname{E}y\]
That the former implies the latter is rather obvious: simply substitute \(y\) and then \(x\) for \(z\). That the latter yields the former is also simple to show:
\[\delta(x,z) =\delta(x,z)\otimes\operatorname{E}x\] \[=\delta(x,z)\otimes\delta(x,y)\] \[\leq\delta(y,z)\]
and _mutatis mutandis_ one obtains the other required inequality.
**Remark 4.1**::
It is evident that this is an equivalence relation.
Two \(\delta\)-equivalent points cannot - for instance - be taken to non-equivalent points via a functional morphism; and relational morphisms can't distinguish them. This is codified in the following lemma
**Lemma 4.1:**
The relation of \(\delta\)-equivalence is 'congruential' for either type of morphism. In more precise terms: provided with equivalent elements \(x\) and \(x^{\prime}\);
1. For functional morphisms, congruential means \(f(x)\sim f(x^{\prime})\);
2. For relational morphisms, it means that both \(\varphi(x,y)=\varphi(x^{\prime},y)\) and \(\psi(w,x)=\psi(w,x^{\prime})\).
Proof.: \[\operatorname{E}x=\operatorname{E}x^{\prime}=\delta(x,x^{\prime}) \leq\delta(f(x),f(x^{\prime}))\] \[\leq\operatorname{E}f(x)=\operatorname{E}x\] \[=\operatorname{E}x^{\prime}=\operatorname{E}f(x^{\prime})\]
As for relational morphisms,
\[\varphi(x,y) =\varphi(x,y)\otimes\operatorname{E}x\] \[=\varphi(x,y)\otimes\delta(x,x^{\prime})\] \[\leq\varphi(x^{\prime},y)\]
The dual inequality holds for the same reasons; and similarly, the same equality can be obtained for \(\psi\).
We can state the gluing completeness as follows
**Definition 4.4** (Gluing Completeness): A \(\mathscr{Q}\)-set \(X\) is said to be _gluing complete_ when every compatible family has exactly one gluing. This is equivalent to asking that all compatible families admit _a_ gluing and that \(\delta\)-equivalence be "extensionally equal to equality" - which sounds very pretentious.
The equivalence can be seen to be true by realizing that \(\{x,x^{\prime}\}\) is glued by both \(x\) and \(x^{\prime}\) if and only if they are \(\delta\)-equivalent.
**Definition 4.5** (Extensionality): _Extensionality_ is the name of the property of \(\delta\)-equivalence being extensionally equal to equality. That is to say:
\[\delta(x,y)=\operatorname{E}x=\operatorname{E}y\implies x=y\]
or equivalently
\[\delta(\_,x)=\delta(\_,y)\implies x=y\]
**Example 4.1:**
It is quite easy to see that the terminal \(\mathscr{Q}\)-set \(\operatorname{E}\mathscr{Q}\) is gluing complete. Given that \(\operatorname{E}a\otimes\operatorname{E}b=\delta(a,b)\) always holds there, any set of idempotent elements is compatible. Moreover, the gluing of such a set is its supremum, as can easily be seen:
\[x_{A}=\operatorname{E}x_{A}=\bigvee_{a\in A}\operatorname{E}a=\bigvee_{a\in A}a\]
**Example 4.2:**
The initial object, the empty set, is trivially extensional, but it fails to satisfy the gluing condition: the empty family is also trivially _compatible_, and there is no element in the empty set that actually glues it. Hence, completeness doesn't hold, whereas extensionality does.
**Example 4.3:**
In an extended pseudometric space \((X,d)\) seen as a \([0,\infty]\)-set, points at infinity (with minimal extent) are always compatible with every other point, and ordinary points are compatible if and only if they are morally the same (i.e. their distance is \(0\)).
Hence, such an extended pseudometric space satisfies the gluing condition if and only if it has a point at infinity, and it is extensional if and only if it is morally a metric space (i.e. it is a metric space if considered sans points at infinity) and it has exactly one point at infinity.
**Example 4.4:**
For nontrivial index sets \(I\) (namely: with more than one element) and \(\mathscr{Q}\)-sets \(X_{i}\) with elements \(\bot_{i}\) at infinity (of null extent), \(\coprod_{i\in I}X_{i}\) is never extensional, as \(\delta((\bot_{i},i),(\bot_{j},j))=\bot\) for multiple different \(\bot_{i}\). In general, any \(\mathscr{Q}\)-set with more than one element of null extent cannot be extensional.
**Example 4.5:**
1. If \(A\) is a commutative ring and \(M\) is a left \(A\)-module, then the set over \(\mathscr{Q}=Ideals(A)\), \(X_{M}=(M,\delta)\), is such that for each \(x,y\in M\), \(x\sim_{\delta}y\) iff \(A.x=A.y\), thus it is not an extensional \(\mathscr{Q}\)-set, in general: any member of a non-empty compatible family in \((M,\delta)\) is a gluing for that family. Moreover, \(M\) is a divisible \(A\)-module iff \(\sim_{\delta}=\{(0,0)\}\cup(M\setminus\{0\})\times(M\setminus\{0\})\).
2. If \(card(I)>1\), then the \(\mathscr{Q}\)-set \(\coprod_{i\in I}\top\) of Example 3.7 is not extensional, since \(\delta((\bot,i),(\bot,j))=\bot\) and there are \(i,j\in I\) such that \(i\neq j\). If \(I^{\prime}\subseteq I\), then the family \(\{(\bot,i):i\in I^{\prime}\}\) is compatible and, for any \(j\in I\), \((\bot,j)\) is a gluing for that family.
3. If \(\mathscr{Q}\) is an integral quantale with upper idempotent approximation, then the associated \(\mathscr{Q}\)-set on \(\mathscr{Q}\) (Example 3.9) is extensional. A family \(S\) on that \(\mathscr{Q}\)-set is compatible iff for each \(x,y\in S\), \(x\otimes y=x^{+}\otimes y^{+}\).
**Proposition 4.2:**
The functor \(\mathcal{G}_{\mathbb{R}}\) is faithful exactly on those pairs of objects where the codomain is extensional. This is to say: if \(Y\) is extensional, then the following map is injective for all \(X\)
\[\mathscr{Q}\text{-}\textbf{Set}_{f}(X,Y)\xrightarrow{\mathcal{G}_{\mathbb{R}} }\mathscr{Q}\text{-}\textbf{Set}_{r}(X,Y)\]
Moreover, the extensional \(\mathscr{Q}\)-sets form the largest class of codomains for which this injectivity holds for _every_ \(X\).
Proof.: First let's prove the latter claim: let \(Y\) be any \(\mathscr{Q}\)-set; and suppose \(y\) and \(y^{\prime}\) are \(\delta\)-equivalent. Now let \(X=\{*\}\), setting \(\operatorname{E}*=\operatorname{E}y\).
There are two obvious maps \(X\to Y\), namely \(*\mapsto y\) and \(*\mapsto y^{\prime}\). Call them \(f\) and \(g\) respectively. By the equivalent definition of \(\delta\)-equivalence, \(y\) and \(y^{\prime}\) have the same \(\delta\)s, so
\[\mathcal{G}_{\!\operatorname{R}}(*\mapsto y)=\delta(\_,y)=\delta(\_,y^{\prime })=\mathcal{G}_{\!\operatorname{R}}(*\mapsto y^{\prime})\]
And hence, if \(y\neq y^{\prime}\), then \(\mathcal{G}_{\!\operatorname{R}}\) must send different functional morphisms to the same relational one and therefore fail injectivity.
Now, that being said we still must prove that the functor is injective on **homs** when the codomain is extensional. But this is simple. Suppose \(\mathcal{G}_{\!\operatorname{R}}(f)=\mathcal{G}_{\!\operatorname{R}}(g)\) - and thus that for all \(x\) and \(y\),
\[\delta(f(x),y)=\delta(g(x),y)\]
by taking \(y\) to be \(f(x)\) we get that \(\delta(g(x),f(x))=\operatorname{E}f(x)\), and _mutatis mutandis_ the same can be done to show that \(f(x)\) is always \(\delta\)-equivalent to \(g(x)\). Because \(Y\) is extensional we have that \(g(x)=f(x)\) for all \(x\), and therefore they are one and the same functional morphism.
**Remark 4.2**:: Completeness, be it gluing-wise or singleton-wise, is a rather pointless notion in the category of relational morphisms; this is because \(\mathscr{Q}\)-\(\operatorname{\mathbf{Set}}_{\!\operatorname{r}}\) is equivalent to its full subcategories whose objects are complete in either notion.
Hence, any categorical property we _could ever prove_ about those subcategories would also be true of the larger relational category.
In fact, completeness can - in a way - be understood as the property one can impose on objects so as to make the functor they represent via functional morphisms "the same" as the functor they represent via relational morphisms. This will be elaborated in a subsequent section.
In the light of the remark above, we shall focus ourselves on the functional morphisms category, since talking about complete \(\mathscr{Q}\)-sets under relational morphisms is just a roundabout way of talking about general \(\mathscr{Q}\)-sets under those same morphisms.
### Some Results about Gluings and Compatible Families
**Lemma 4.2**::
Family compatibility is preserved by direct images of functional morphisms.
Proof.: Given \(A\) compatible for \(X\) and \(f:X\to Y\); take any \(a,b\in A\); it suffices to show that \(\delta(f(a),f(b))=\operatorname{E}f(a)\otimes\operatorname{E}f(b)\).
\[\delta(a,b) \leq\delta(f(a),f(b))\] \[=\delta(f(a),f(b))\otimes\operatorname{E}f(a)\] \[=\delta(f(a),f(b))\otimes\operatorname{E}f(a)\otimes \operatorname{E}f(b)\] \[=\delta(f(a),f(b))\otimes\delta(a,b)\] \[\leq\delta(a,b)\] (semi-cartesian)
**Lemma 4.3**::
Gluing elements are preserved by direct images of functional morphisms
Proof.: Should \(x\) glue \(A\), we already know that \(f[A]\) is compatible - in light of the previous lemma and the fact that a family that admits a gluing must be compatible - so it merely suffices to show that \(f(x)\) glues \(f[A]\). Take \(a\in A\),
\[\operatorname{E}a =\delta(x,a)\] \[\leq\delta(f(x),f(a))\] \[\leq\operatorname{E}f(a)=\operatorname{E}a\]
Thus establishing the first condition.
\[\operatorname{E}f(x) =\operatorname{E}x\] \[=\bigvee_{a\in A}\operatorname{E}a\] \[=\bigvee_{a\in A}\operatorname{E}f(a)\] \[=\bigvee_{\alpha\in f[A]}\operatorname{E}\alpha\]
Thus completing the proof.
We shall see that there is a natural way of 'making' a \(\mathscr{Q}\)-set gluing complete. This is what we shall focus on in the next subsection.
### Gluing-Completion
Completion problems often come in the shape of "all \(X\) induce a \(Y\), but not all \(Y\) come from an \(X\) - can we make it so \(X\) and \(Y\) are in correspondence?" and the solution to those kind of problems can often be found in looking at the collection of all \(Y\)s or quotients of it.
In that spirit, we shall give the collection of all compatible families the structure of a \(\mathscr{Q}\)-set; we shall embed the original \(\mathscr{Q}\)-set in it; and then we "throw away" redundant points in an appropriate quotient.
**Definition 4.6**::
Given a \(\mathscr{Q}\)-set \(X\), we say \(\mathfrak{G}(X)\) is the \(\mathscr{Q}\)-set given by
\[|\,\mathfrak{G}(X)| =\{A\subseteq|X|\ :\ A\text{ is compatible}\}\] \[\delta(A,B) =\bigvee_{\begin{subarray}{c}a\in A\\ b\in B\end{subarray}}\delta(a,b)\]
Obviously, we claim that the above does indeed define a \(\mathscr{Q}\)-set.
Proof.: \[\delta(A,B)\otimes\delta(B,C) =\bigvee_{a\in A}\bigvee_{b\in B}\bigvee_{\beta\in B}\bigvee_{c \in C}\delta(a,b)\otimes\delta(\beta,c)\] \[=\bigvee_{a\in A}\bigvee_{b\in B}\bigvee_{\beta\in B}\bigvee_{c \in C}\delta(a,b)\otimes\operatorname{E}b\otimes\operatorname{E}\beta\otimes \delta(\beta,c)\] \[=\bigvee_{a\in A}\bigvee_{b\in B}\bigvee_{\beta\in B}\bigvee_{c \in C}\delta(a,b)\otimes\delta(b,\beta)\otimes\delta(\beta,c)\] \[\leq\bigvee_{a\in A}\bigvee_{c\in C}\delta(a,c)\] \[=\delta(A,C)\]
Which suffices for transitivity/triangular inequality. It is obvious that the definition is symmetric, so it only remains to show that extents behave as they should.
\[\delta(A,A)\otimes\delta(A,B) =\bigvee_{a,a^{\prime}\in A}\bigvee_{\alpha\in A,b\in B}\delta(a, a^{\prime})\otimes\delta(\alpha,b)\] \[\geq\bigvee_{\alpha\in A,b\in B}\delta(\alpha,\alpha)\otimes \delta(\alpha,b)\] \[=\delta(A,B)\]
and thus
\[\operatorname{E}A\otimes\delta(A,B) =\delta(A,B)\] (semi-cartesian)
Thus proving it is actually a \(\mathscr{Q}\)-set as desired.
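Continuing the finite running example (Python; the particular \(\mathscr{Q}\)-set and all names are our illustrative choices), one can enumerate the compatible families of a small \(\mathscr{Q}\)-set, compute the \(\delta\) of Definition 4.6 directly, and re-check the three laws verified in the proof above.

```python
# Illustrative sketch: enumerate the compatible families of the small Q-set E Q
# over P({1,2}) and re-check the laws of Definition 4.6 for the resulting G(X).
from itertools import combinations

def subsets(xs):
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

Q = subsets({1, 2})
tensor = lambda a, b: a & b
sup = lambda vals: frozenset().union(*vals) if vals else frozenset()

X = Q                                           # E Q again: delta = ∧, E a = a
delta = tensor
extent = lambda a: a
compatible = lambda A: all(delta(a, b) == tensor(extent(a), extent(b)) for a in A for b in A)

GX = [A for A in subsets(X) if compatible(A)]   # |G(X)|
dG = lambda A, B: sup([delta(a, b) for a in A for b in B])
EG = lambda A: dG(A, A)

for A in GX:
    for B in GX:
        assert dG(A, B) == dG(B, A)                          # symmetry
        assert tensor(EG(A), dG(A, B)) == dG(A, B)           # extent law
        for C in GX:
            assert tensor(dG(A, B), dG(B, C)) <= dG(A, C)    # triangle
print(f"G(X) has {len(GX)} compatible families and is itself a Q-set")
```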
**Definition 4.7** (The \(\mathscr{Q}\)-set of Compatible Families)::
We extend \(\mathfrak{G}^{5}\) from an object-map to a dignified functor, by taking functional morphisms to their direct image; this has the surreptitious implicit premise that direct images are indeed functional morphisms between the compatible families \(\mathscr{Q}\)-sets.
Proof.: \[\delta(A,B) =\bigvee_{a,b}\delta(a,b)\] \[\leq\bigvee_{a,b}\delta(f(a),f(b))\] \[=\delta(f[A],f[B])\]
For extents it's even more immediate
\[\operatorname{E}A=\bigvee_{a\in A}\operatorname{E}a=\bigvee_{a\in A} \operatorname{E}f(a)=\operatorname{E}f[A]\]
As mentioned in the previous footnote, \(\mathfrak{G}\) doesn't take objects to gluing-complete \(\mathscr{Q}\)-sets; think of it in the same vein as the set of Cauchy sequences before taking the quotient. We first show that every compatible family over \(\mathfrak{G}(X)\) has a gluing.
**Lemma 4.4**::
If \(\mathcal{A}\) is a compatible family over \(\mathfrak{G}(X)\), then \(\bigcup\mathcal{A}\) is compatible over \(X\).
Proof.: Take \(A,B\in\mathcal{A}\) and \(a\in A\) and \(b\in B\); it suffices to show that \(\delta(a,b)\geq\operatorname{E}a\otimes\operatorname{E}b\) since we know the dual inequality always holds.
The following is always true:
\[\delta(a,b)\geq\bigvee_{a^{\prime}\in A}\bigvee_{b^{\prime}\in B}\delta(a,a^{ \prime})\otimes\delta(b,b^{\prime})\otimes\delta(a^{\prime},b^{\prime})\]
and, since \(A\) and \(B\) are each compatible (and, at the last step, \(\mathcal{A}\) is too), the right-hand side can be rewritten as
\[=\bigvee_{a^{\prime}\in A}\bigvee_{b^{\prime}\in B}\operatorname{E }a\otimes\operatorname{E}a^{\prime}\otimes\operatorname{E}b\otimes \operatorname{E}b^{\prime}\otimes\delta(a^{\prime},b^{\prime})\] \[=\operatorname{E}a\otimes\operatorname{E}b\otimes\bigvee_{a^{ \prime},b^{\prime}}\delta(a^{\prime},b^{\prime})\] \[=\operatorname{E}a\otimes\operatorname{E}b\otimes\delta(A,B)\] \[=\operatorname{E}a\otimes\operatorname{E}b\otimes\operatorname{E }A\otimes\operatorname{E}B\]
since for extents \(\otimes=\wedge\), and since \(\operatorname{E}a\leq\operatorname{E}A\) and likewise for \(b\) and \(B\),
\[=\operatorname{E}a\otimes\operatorname{E}b\]
**Proposition 4.3** (Compatible Families of \(\mathfrak{G}\) have Gluings)::
Every compatible family over \(\mathfrak{G}(X)\) admits at least _one_ gluing - in general it will have many, a defect we shall correct by taking an appropriate quotient later.
Proof.: Given \(\mathcal{A}\), we claim that \(\bigcup\mathcal{A}\) is a canonical candidate for its gluing. We shall now verify that it is indeed up for the task. First, let \(A\in\mathcal{A}\) and denote by \(U\) the compatible family given by \(\bigcup\mathcal{A}\) (avoiding the letter \(X\), which already names the ambient \(\mathscr{Q}\)-set); let us proceed.
\[\delta(U,A) =\bigvee_{u\in U,a\in A}\delta(u,a)\] \[\geq\bigvee_{u\in A,a\in A}\delta(u,a)\] \[=\delta(A,A)=\operatorname{E}A\] \[\geq\delta(U,A)\]
It remains only to show that the extent is as required.
\[\operatorname{E}U =\bigvee_{u\in U}\operatorname{E}u\] \[=\bigvee_{A\in\mathcal{A}}\bigvee_{u\in A}\operatorname{E}u\] \[=\bigvee_{A\in\mathcal{A}}\operatorname{E}A\]
The last piece of the puzzle of gluing completion is ensuring extensionality; and this is done by another functor, the \(\delta\)-quotient functor.
**Definition 4.8** (\(\delta\)-quotient)::
We define the functor \(\bigtriangledown_{\delta}\) naturally, by quotienting together \(\delta\)-equivalent elements. It does indeed take \(\mathscr{Q}\)-sets to \(\mathscr{Q}\)-sets if we take \(\delta(\llbracket x\rrbracket,\llbracket y\rrbracket)=\delta(x,y)\), and it does indeed take morphisms to morphisms, due to \(\delta\)-equivalence being congruential (Lemma 4.1).
**Lemma 4.5**::
A compatible family of \(\bigtriangledown_{\delta}X\) has a gluing if and only if any section of that family has a gluing in \(X\). That is to say, if \(A\) is our compatible family and \(\bar{A}\subseteq X\) is such that \(\natural[\bar{A}]=A\) (where \(\natural\) is the quotient map), then \(A\) has a gluing if and only if \(\bar{A}\) does. Moreover, if any section has a gluing, then every section is also glued by it.
Proof.: Suppose that \(A\) has a gluing, call it \(\llbracket x\rrbracket\); by definition, \(\delta(\llbracket x\rrbracket,\llbracket a\rrbracket)=\operatorname{E}\llbracket a\rrbracket\) for every \(\llbracket a\rrbracket\in A\), and \(\operatorname{E}\llbracket x\rrbracket=\bigvee_{\llbracket a\rrbracket\in A}\operatorname{E}\llbracket a\rrbracket\). So, if \(\bar{A}\) is a section of \(A\), then
\[\operatorname{E}x=\bigvee_{a\in\bar{A}}\operatorname{E}a\] \[\delta(x,a)=\delta(\llbracket x\rrbracket,\llbracket a\rrbracket)= \operatorname{E}\llbracket a\rrbracket=\operatorname{E}a\]
Thus, \(x\) glues any section of \(A\). Now suppose instead that \(x\) glues some section \(\bar{A}\) of \(A\). Of course \(\llbracket x\rrbracket\) must now glue \(A\): for every \(a\in\bar{A}\) we have \(\delta(\llbracket x\rrbracket,\llbracket a\rrbracket)=\delta(x,a)=\operatorname{E}a=\operatorname{E}\llbracket a\rrbracket\), and
\[\operatorname{E}\,\llbracket x\rrbracket=\operatorname{E}x=\bigvee_{a\in\bar{A}}\operatorname{E}a=\bigvee_{a\in\bar{A}}\operatorname{E}\,\llbracket a\rrbracket=\bigvee_{\llbracket a\rrbracket\in A}\operatorname{E}\,\llbracket a\rrbracket=\operatorname{E}A\]
**Remark 4.3**::
Evidently, every compatible family over \(\bigtriangledown_{\delta}X\) has a section that is a compatible family over \(X\).
**Lemma 4.6**::
The functor \(\bigtriangledown_{\delta}\) makes \(\mathscr{Q}\)-sets extensional (in fact it's "the best" extensional approximation).
Proof.: Suppose
\[\delta(\llbracket x\rrbracket\,,\llbracket y\rrbracket)=\operatorname{E} \,\llbracket x\rrbracket=\operatorname{E}\,\llbracket y\rrbracket\]
Then \(\delta(x,y)=\operatorname{E}x=\operatorname{E}y\) and immediately we have \(x\sim y\) and as such, \(\llbracket x\rrbracket=\llbracket y\rrbracket\).
The claim that it is "the best" at doing this is harder to state and prove, and it could/should be a theorem.
**Theorem 4.1** (it became a theorem)::
The inclusion functor \(\mathscr{Q}\)-\(\operatorname{\mathbf{Set}}_{f\operatorname{Ext}}\hookrightarrow\mathscr{Q}\)-\(\operatorname{\mathbf{Set}}_{f}\) has a left adjoint, and it is \(\bigtriangledown_{\delta}\).
Proof.: As will become a trend throughout this article, we go about looking for units and counits and proving the triangle/zig-zag identities for those.
The counit is obvious, as applying \(\bigtriangledown_{\delta}\) to an already extensional object yields an isomorphic object whose elements are singleton sets containing exactly the elements of the original \(\mathscr{Q}\)-set. So we simply unwrap them and that's the counit.
The unit is also simple, it's just the quotient map \(\natural\) - taking elements to their \(\delta\)-equivalence class.
In this case, there is no real point to showing the zig-zag identities hold; but the reader may if they feel so inclined.
**Definition 4.9** (Gluing Completion)::
The gluing completion functor \(\mathcal{G}\) is simply \((\bigtriangledown_{\delta})\circ\mathfrak{G}\).
**Proposition 4.4**::
The Gluing-completion of a \(\mathscr{Q}\)-set is indeed gluing-complete.
Proof.: We know that every compatible family over \(\mathcal{G}\,X\) has at least one gluing (Proposition 4.3 together with Lemma 4.5), and we also know that \(\mathcal{G}\,X\) is extensional (Lemma 4.6); thus we have that it is gluing complete.
**Definition 4.10** (\(\mathcal{G}\mathcal{Q}\)-_Set_):
Here we give the name \(\mathcal{G}\mathcal{Q}\text{-}\mathbf{Set}\) to the full subcategory of \(\mathcal{Q}\)-\(\mathbf{Set}_{f}\) whose objects are gluing complete. We drop the \(f\) subscript as this notion isn't particularly useful for relational morphisms anyway, so it is a pointless datum. In a later subsection the \(f\) subscript will reappear, as will an \(r\) subscript, so that we may prove that
\[\mathcal{G}\mathcal{Q}\text{-}\mathbf{Set}_{r}\simeq\mathcal{Q}\text{-} \mathbf{Set}_{r}\]
In the light of the previous proposition, we are entitled to say that the signature of \(\mathcal{G}\) is actually
\[\mathcal{G}:\mathcal{Q}\text{-}\mathbf{Set}_{f}\to\mathcal{G}\mathcal{Q}\text{ -}\mathbf{Set}\]
**Lemma 4.7**::
Equivalent compatible families (in the sense of the canonical \(\delta\) we have defined for the set of such families) have exactly the same gluings - and a family is equivalent to a singleton set if and only if that singleton's element is one of its gluings.
Proof.: It suffices to show that \(x\) glues \(A\) iff \(\{x\}\sim A\); the fact that \(\sim\) is an equivalence relation then takes care of the rest of the claim for us.
So, let's suppose that \(x\) does indeed glue \(A\); it follows that
\[\delta(\{x\},A)=\bigvee_{a\in A}\delta(x,a)=\bigvee_{a\in A}\operatorname{E} a=\operatorname{E}A=\operatorname{E}\{x\}\]
Now instead suppose that \(\{x\}\sim A\); In particular \(\operatorname{E}x=\operatorname{E}\{x\}\) must be \(\operatorname{E}A\) and thus \(\bigvee_{a\in A}\operatorname{E}a\). So that takes care of the last condition; the first condition for gluing still remains. But not for long:
\[\operatorname{E}a =\operatorname{E}a\otimes\operatorname{E}A\] \[=\operatorname{E}a\otimes\delta(\{x\},A)\] \[=\bigvee_{a^{\prime}\in A}\operatorname{E}a\otimes\delta(x,a^{ \prime})\] \[=\bigvee_{a^{\prime}\in A}\operatorname{E}a\otimes\operatorname{ E}a^{\prime}\otimes\delta(x,a^{\prime})\] \[=\bigvee_{a^{\prime}\in A}\delta(a,a^{\prime})\otimes\delta(x,a^ {\prime})\] \[\leq\delta(x,a)\]
**Theorem 4.2** (The Gluing Completion Adjunction):
It's not particularly useful to have a functor that makes \(\mathcal{Q}\)-sets gluing-complete if that completion isn't actually "universal"; fortunately, \(\mathcal{G}\) is left adjoint to the full inclusion \(\mathcal{G}\mathcal{Q}\text{-}\mathbf{Set}\hookrightarrow\mathcal{Q}\text{-}\mathbf{Set}_{f}\), i.e. it is the reflector of the subcategory we are interested in.
Proof.: Again, we proceed by showing a unit-counit pair satisfying the zig-zag identities. For (my) sanity, let us name \(L=\mathcal{G}\) and \(R\) the fully faithful inclusion \(\mathcal{G}\mathscr{Q}\text{-}\mathbf{Set}\hookrightarrow\mathscr{Q}\text{-}\mathbf{Set}_{f}\).
Let's find \(\eta:\mathbf{id}\to R\circ L\); what it must do is take an element \(x\) of \(X\) to something in the completion of \(X\); the completion of \(X\) is comprised of equivalence classes of compatible families, so it is enough that we find a compatible family to assign \(x\) to. \(\{x\}\) is the natural candidate as
\[\delta(x,y)=\delta(\{x\},\{y\})=\delta(\llbracket\{x\}\rrbracket\,,\llbracket \{y\}\rrbracket)\]
Of course, we have only defined components, and don't know if they form a natural transformation together; naturality holds quite trivially though:
We know that the action of \(\mathcal{G}\) on a morphism \(f\) is the direct image followed by the \(\delta\)-quotient, and the direct image of \(\{x\}\) is \(\{f(x)\}\); hence \(\mathcal{G}(f)(\llbracket\{x\}\rrbracket)=\llbracket\{f(x)\}\rrbracket\) and the naturality square for \(\eta\) commutes.
It now remains only to show that \(\varepsilon\) is a natural transformation \(L\circ R\to\mathbf{id}\), whose component at a gluing-complete object sends (the class of) a compatible family to its unique gluing;
since gluing elements are preserved by functional morphisms, the naturality square does indeed commute; hence, the family of maps is actually a natural transformation and we can proceed to showing that the selected transformations satisfy the zig-zag identities. Recapping,
\[x\xmapsto{\eta_{X}}[\![\{x\}]\!]\] \[[\![A]\!]\xmapsto{\varepsilon_{X^{\prime}}}x_{A}\]
and we aim to show that
\[(\varepsilon L)\circ(L\eta) =\mathbf{id}\] \[(R\varepsilon)\circ(\eta R) =\mathbf{id}\]
Given the below, it should then become obvious that we do indeed have witnesses to the adjunction we have set out to show to hold.
\[L\eta:L\xrightarrow{}L\circ R\circ L\] \[(X\in\mathscr{Q}\text{-}\mathbf{Set})\mapsto(\llbracket A\rrbracket\xmapsto{(L\eta)_{X}}\llbracket\{\llbracket A\rrbracket\}\rrbracket)\]
\[\varepsilon L:L\circ R\circ L\xrightarrow{}L\] \[(X\in\mathscr{Q}\text{-}\mathbf{Set})\mapsto(\llbracket\{\llbracket A\rrbracket\}\rrbracket\xmapsto{(\varepsilon L)_{X}}\llbracket A\rrbracket)\]
\[R\varepsilon:R\circ L\circ R\xrightarrow{\ \ }R\] \[(K\in\mathcal{G}\mathscr{Q}\text{-}\mathbf{Set})\mapsto(\llbracket A\rrbracket\xmapsto{(R\varepsilon)_{K}}x_{A})\]
\[\eta R:R\xrightarrow{\ \ }R\circ L\circ R\] \[(K\in\mathcal{G}\mathscr{Q}\text{-}\mathbf{Set})\mapsto(x\xmapsto{(\eta R)_{K}}\llbracket\{x\}\rrbracket)\]
**Remark 4.4**::
The adjunction counit is very easily seen to be a natural _isomorphism_ (because \(R\) was fully faithful), however the unit isn't: Consider the \(\mathscr{Q}\)-set given naturally by
\[X:=\{\bot\}\amalg\{\bot\}\]
It obviously won't be extensional; because of this, \(\mathcal{G}(X)\cong\{\bot\}\), and there is no possible _functional_ isomorphism between the two, because functional isomorphisms are - in particular - also bijections. So the adjunction above is not an adjoint equivalence. For relational morphisms there are such isomorphisms, and so there the adjunction unit will be a natural isomorphism too.
## 5 Singleton/Scott-Completeness
As previously mentioned, there is another notion of completeness; it relates to a concept called the singleton, which we will talk about now.
A singleton on a \(\mathscr{Q}\)-set \(X\), morally speaking, is a \(\mathscr{Q}\)-valued characteristic function. Although it isn't _per se_ a construction in the category \(\mathscr{Q}\)-**Set** - it does lead to categorically relevant results.
**Definition 5.1** (Singleton)::
A _singleton_ on \(X\) is a map between the underlying set and \(\mathscr{Q}\): \(\sigma:|X|\to\mathscr{Q}\). This map is expected to satisfy the following axioms
1. \(\sigma(x)\otimes\operatorname{E}x=\sigma(x)\);
2. \(\sigma(x)\otimes\delta(x,y)\leq\sigma(y)\);
3. \(\sigma(x)\otimes\sigma(y)\leq\delta(x,y)\);
4. \(\sigma(x)\otimes\bigvee_{y}\sigma(y)=\sigma(x)\);
the first two being called the "subset axioms", the third the "singleton condition", and the last the "strictness condition".
Intuitively, singletons represent a membership relation to a set - this is governed by the subset axioms; the singleton condition then states that the simultaneous membership of two different points implies their similitude; the last axiom, strictness, is more technical in nature.
This last axiom has been known in the literature for quite a few years [7, p. 30] - but its meaning is somewhat obscure, in that it is a necessary condition for a singleton to be representable (to be defined next) yet doesn't have a neat interpretation. The best we have arrived at is that strictness gives us that the supremum \(\sup_{y}\sigma(y)\) "behaves like an extent", in that it is idempotent.
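The four axioms are straightforward to check mechanically. The sketch below (Python; the quantale and all names are chosen by us purely for illustration) enumerates every map \(\sigma:|X|\to\mathscr{Q}\) on a small \(\mathscr{Q}\)-set and keeps those satisfying the subset, singleton and strictness axioms; on this particular example every surviving \(\sigma\) happens to be of the form \(\delta(\_,c)\) for some \(c\), i.e. representable in the sense defined next.

```python
# Illustrative sketch: enumerate all maps sigma : |X| -> Q over the Q-set E Q
# (quantale P({1,2}), ⊗ = ∧) and keep those satisfying the four singleton axioms.
from itertools import combinations, product

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

Q = powerset({1, 2})
tensor = lambda a, b: a & b
sup = lambda vals: frozenset().union(*vals) if vals else frozenset()

X = Q                                  # E Q: delta = ∧, E x = x
delta = tensor
extent = lambda x: x

def is_singleton(sigma):
    s = sup([sigma[x] for x in X])
    return (all(tensor(sigma[x], extent(x)) == sigma[x] for x in X)                # subset 1
        and all(tensor(sigma[x], delta(x, y)) <= sigma[y] for x in X for y in X)   # subset 2
        and all(tensor(sigma[x], sigma[y]) <= delta(x, y) for x in X for y in X)   # singleton
        and all(tensor(sigma[x], s) == sigma[x] for x in X))                       # strictness

singletons = [dict(zip(X, vals)) for vals in product(Q, repeat=len(X))
              if is_singleton(dict(zip(X, vals)))]

# every singleton found here is delta(_, c) for some c
assert all(any(all(s[x] == delta(x, c) for x in X) for c in X) for s in singletons)
print(f"{len(singletons)} singletons over E Q, all of the form delta(_, c)")
```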
### Basics about Singletons
**Definition 5.2** (Representable Singleton)::
A singleton \(\sigma\) is said to be representable (or represented by an element \(x_{\sigma}\)) when there is an element \(x_{\sigma}\in X\) such that
\[\sigma(y)=\delta(y,x_{\sigma})\]
**Proposition 5.1:**
If \(A\) is a compatible family, then \(\delta(\{\_\},A)\) is a singleton. In particular, \(\delta(\_,x)\) is always a singleton (by taking \(A=\{x\}\)).
Proof.: The subset axioms are immediately satisfied by the fact that \(\mathfrak{G}(X)\) is always a \(\mathscr{Q}\)-set. The Singleton condition is trivial too:
\[\sigma_{A}(x)\otimes\sigma_{A}(y)=\delta(\{x\},A)\otimes\delta(\{y\},A)\leq \delta(\{x\},\{y\})=\delta(x,y)\]
obviously, it is also strict considering that
\[\bigvee_{y\in X}\sigma_{A}(y)\geq\bigvee_{a\in A}\sigma_{A}(a)=\bigvee_{a\in A }\mathrm{E}\,a=\mathrm{E}\,A\geq\bigvee_{y\in X}\sigma_{A}(y)\]
**Remark 5.1:**
Quite obviously, if the singleton above is representable, then the representing element glues \(A\). And if an element glues \(A\), then it is bound to represent the singleton.
**Proposition 5.2:**
If \(A\) is a compatible family, and \(\sigma\) is some singleton over \(A\)'s \(\mathscr{Q}\)-set, we have that if \(x_{A}\) glues \(A\), then
\[\sigma(x_{A})=\bigvee_{a\in A}\sigma(a)\]
Proof.: \[\sigma(a)= \sigma(a)\otimes\mathrm{E}\,a\] \[= \sigma(a)\otimes\delta(a,x_{A})\] \[\leq \sigma(x_{A})\]
\[\sigma(x_{A})= \sigma(x_{A})\otimes\mathrm{E}\,x_{A}\] \[= \sigma(x_{A})\otimes\bigvee_{a\in A}\mathrm{E}\,a\] \[= \sigma(x_{A})\otimes\bigvee_{a\in A}\delta(a,x_{A})\] \[= \bigvee_{a\in A}\sigma(x_{A})\otimes\delta(a,x_{A})\] \[\leq \bigvee_{a\in A}\sigma(a)\]
**Definition 5.3** (Scott-Completeness):
We say that \(X\) is Scott-complete, or singleton-complete, when every one of its singletons has exactly one representing element.
**Example 5.1:**
The terminal object is Scott-complete. An indirect way of seeing this is to skip through the paper to where we prove that the inclusion of Scott-complete \(\mathscr{Q}\)-sets creates limits. A more direct answer comes from the fact that since \(\top\in\operatorname{E}\mathscr{Q}\) is the gluing of the compatible family \(\operatorname{E}\mathscr{Q}\), thanks to prop. 5.2, \(\sigma(\top)=\bigvee_{e}\sigma(e)\). And hence:
\[\sigma(e)=\sigma(e)\otimes e\leq\sigma(\top)\otimes e=\sigma(\top)\otimes \delta(e,\top)\leq\sigma(e)\]
And therefore, \(\sigma(\top)\) represents \(\sigma\). Uniqueness is quite obvious: if for some \(x\), \(\forall e:\sigma(e)=e\otimes x\), in particular \(\sigma(\top)=\top\otimes x=x\) by integrality.
**Example 5.2:**:
1. If \(A\) is a commutative ring and \(M\) is a left \(A\)-module, then the set over \(\mathscr{Q}=Ideals(A)\), \(X_{M}=(M,\delta)\), is such that for each \(x,y\in M\), \(x\sim_{\delta}y\) iff \(A.x=A.y\); so in general it is not an extensional \(\mathscr{Q}\)-set and, since Scott-complete \(\mathscr{Q}\)-sets are necessarily extensional, in general it is not Scott-complete either.
**Remark 5.2:**
Scott-completeness is invariant under isomorphisms in \(\mathscr{Q}\)-\(\mathbf{Set}_{f}\) - but not in \(\mathscr{Q}\)-\(\mathbf{Set}_{r}\); this will be explained in a later subsection.
The first claim is easy to show, as functional isomorphisms are simply bijections preserving \(\delta\).
**Theorem 5.1:**
Scott-Completeness implies Gluing-Completeness.
Proof.: Take a compatible family \(A\), consider the singleton defined in the previous proposition; suppose \(x_{A}\) represents \(\sigma_{A}\); thus,
\[\bigvee_{a\in A}\delta(x_{A},a)=\sigma_{A}(x_{A})=\delta(x_{A},x_{A})= \operatorname{E}x_{A}\]
\[\delta(a,x_{A})=\sigma_{A}(a)=\bigvee_{a^{\prime}\in A}\delta(a,a^{\prime})= \operatorname{E}a\]
and hence \(x_{A}\) glues \(A\). Now suppose that \(x_{A}\) glues \(A\), then
\[\delta(y,x_{A}) \geq\delta(x_{A},a)\otimes\delta(a,y)\] \[\geq\operatorname{E}a\otimes\delta(a,y)=\delta(a,y)\]
\[\delta(y,x_{A}) \geq\bigvee_{a\in A}\delta(y,a)\] \[\geq\bigvee_{a\in A}\delta(a,x_{A})\otimes\delta(y,x_{A})\] \[=\delta(y,x_{A})\otimes\bigvee_{a\in A}\delta(a,x_{A})\] \[=\delta(y,x_{A})\otimes\bigvee_{a\in A}\operatorname{E}a\] \[=\delta(y,x_{A})\otimes\operatorname{E}x_{A}\] \[=\delta(y,x_{A})\]
hence \(x_{A}\) represents \(\sigma_{A}\). Thus, if all singletons have exactly one representing element, then all compatible families have exactly one gluing.
**Theorem 5.2**::
There are gluing complete \(\mathscr{Q}\)-sets over \(\mathscr{Q}=\mathscr{P}(2)\) which _are not_ Scott-complete - so those two concepts _are not_ logically equivalent.
Proof.: Let \(\mathscr{Q}=\mathscr{P}(2)\) be the four-element Boolean algebra \(\{\bot,a,\neg a,\top\}\), with \(a\) and \(\neg a\) incomparable and \(\otimes=\wedge\).
First, let \(S\subseteq\mathscr{Q}\) and let \(\delta=\wedge\); \(S\) is gluing-complete if and only if it is closed under the suprema of \(\mathscr{Q}\):
If \(A\) is a subset of \(S\), it must be compatible (!) since \(\operatorname{E}x=x\) and \(\delta(x,y)=x\wedge y\) and thus \(\delta(x,y)=\operatorname{E}x\wedge\operatorname{E}y\). Moreover, if \(x_{A}\) glues \(A\), then
\[\operatorname{E}x_{A}=x_{A}=\bigvee_{a\in A}\operatorname{E}a=\bigvee_{a\in A}a\]
and hence \(x_{A}\) is \(\sup A\). It's easy to verify that the supremum is - if it exists in \(S\) - the gluing of \(A\).
So, the set \(S=\{\bot,a,\top\}\) is gluing complete, obviously. But the singleton \(\sigma(x)=\neg a\wedge x\) (which is the restriction to \(S\) of the singleton represented by \(\neg a\) in \(\mathscr{Q}\)) has no representing element.
\[\sigma(\bot)=\bot\qquad\sigma(a)=\bot\qquad\sigma(\top)=\neg a\]
And so \(S\) is gluing-complete but not Scott-complete.
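The counterexample is small enough to verify by brute force. The following sketch (Python; illustrative code of ours, not the paper's construction) realises \(\mathscr{P}(2)\) as \(\{\bot,a,\neg a,\top\}\), confirms that every family in \(S=\{\bot,a,\top\}\) has its supremum - hence a unique gluing - in \(S\), and that \(\sigma(x)=\neg a\wedge x\) satisfies the singleton axioms while being represented by no element of \(S\).

```python
# Brute-force verification (illustrative code) of the counterexample:
# Q = P(2) = {⊥, a, ¬a, ⊤}, S = {⊥, a, ⊤} with delta = ∧.
from itertools import combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

Q = powerset({1, 2})
bot, a, not_a, top = frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})
tensor = lambda x, y: x & y
sup = lambda vals: frozenset().union(*vals) if vals else bot

S = [bot, a, top]
delta = tensor
extent = lambda x: x

# every family in S is compatible, and its supremum is its unique gluing in S
for r in range(len(S) + 1):
    for fam in combinations(S, r):
        glue = [x for x in S if extent(x) == sup(fam)
                and all(delta(y, x) == extent(y) for y in fam)]
        assert glue == [sup(fam)]

sigma = {x: tensor(not_a, x) for x in S}      # σ(x) = ¬a ∧ x, restricted to S
s_sup = sup(sigma.values())

# σ satisfies the singleton axioms on S ...
assert all(tensor(sigma[x], extent(x)) == sigma[x] for x in S)
assert all(tensor(sigma[x], delta(x, y)) <= sigma[y] for x in S for y in S)
assert all(tensor(sigma[x], sigma[y]) <= delta(x, y) for x in S for y in S)
assert all(tensor(sigma[x], s_sup) == sigma[x] for x in S)
# ... but no element of S represents it
assert not any(all(sigma[x] == delta(x, c) for x in S) for c in S)
print("S is gluing complete, yet σ = ¬a ∧ (_) has no representing element")
```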
**Definition 5.4** (The \(\mathscr{Q}\)-set of Singletons):
It is, in general, not possible to form a \(\mathscr{Q}\)-set of singletons over a \(\mathscr{Q}\)-set \(X\). There is some trouble in defining \(\delta\) for singletons in a way akin to [3, thm. 2.9.5]. This is due to \(\bigvee_{x}\sigma(x)\otimes\sigma(x)\) not necessarily being "extent-like".
We must - seemingly - add the axiom of Strength to our quantale \(\mathscr{Q}\) to guarantee we can form a \(\mathscr{Q}\)-set of singletons. This enables us to define the following:
\[|\,\mathbf{\Sigma}(X)|=\{\sigma\ |\ \sigma\text{ is a singleton over }X\}\] \[\delta(\sigma,\xi)=\bigvee_{x\in X}\sigma(x)\otimes\xi(x)\]
**Proposition 5.3:**
Obviously, we must prove this is indeed a \(\mathscr{Q}\)-set.
Proof.: Firstly, by the strictness axiom we know that \(\bigvee_{x}\sigma(x)\) is idempotent:
\[\Big(\bigvee_{x}\sigma(x)\Big)\otimes\Big(\bigvee_{x}\sigma(x)\Big)=\bigvee_{x}\sigma(x)\]
since \(\mathscr{Q}\) is hypothesized to be strong, we have
\[\bigvee_{x}\sigma(x)\leq\bigvee_{x}\sigma(x)\otimes\sigma(x)\]
and since \(\mathscr{Q}\) is semi-cartesian
\[\bigvee_{x}\sigma(x)\otimes\sigma(x)\leq\bigvee_{x}\sigma(x)\] (semi-cartesian)
and therefore \(\operatorname{E}\sigma=\bigvee_{x}\sigma(x)\). Now we move on to the \(\mathscr{Q}\)-set axioms. \(\delta\) as defined is obviously symmetric; let us show the extent axiom:
\[\delta(\sigma,\xi)\otimes\operatorname{E}\sigma=\bigvee_{x}\sigma(x)\otimes \xi(x)\otimes\operatorname{E}\sigma\]
now recall \(\sigma\)'s strictness condition
\[=\bigvee_{x}\sigma(x)\otimes\xi(x)\]
Now we need only to show triangle inequality/transitivity
\[\delta(\sigma,\xi)\otimes\delta(\xi,\psi) =\bigvee_{x}\bigvee_{y}\sigma(x)\otimes\xi(x)\otimes\xi(y) \otimes\psi(y)\] \[\leq\bigvee_{x}\bigvee_{y}\sigma(x)\otimes\delta(x,y)\otimes\psi(y)\] \[\leq\bigvee_{y}\sigma(y)\otimes\psi(y)\] \[=\delta(\sigma,\psi)\]
**Lemma 5.1:**
A relational morphism \(\varphi:X\to Y\) induces a functional map \(\check{\varphi}:X\to\mathbf{\Sigma}(Y)\) given by
\[\check{\varphi}(x)=\varphi(x,\_)\]
Proof.: We ought to show that the map does indeed take every \(x\) to a singleton \(\check{\varphi}(x)\) and that the map is indeed a morphism.
\[\delta(x,x^{\prime})\otimes\check{\varphi}(x) =\delta(x,x^{\prime})\otimes\varphi(x,\_)\] \[\leq\varphi(x^{\prime},\_)\]
\[\operatorname{E}x\otimes\check{\varphi}(x) =\operatorname{E}x\otimes\varphi(x,\_)\] \[=\varphi(x,\_)\]
\[[\check{\varphi}(x)](y)\otimes[\check{\varphi}(x)](y^{\prime}) =\varphi(x,y)\otimes\varphi(x,y^{\prime})\] \[\leq\delta(y,y^{\prime})\]
\[[\check{\varphi}(x)](y)\otimes\bigvee_{y^{\prime}}[\check{\varphi}(x)](y^{\prime}) =\varphi(x,y)\otimes\bigvee_{y^{\prime}}\varphi(x,y^{\prime})\] \[=\varphi(x,y)\otimes\operatorname{E}x\] \[=\varphi(x,y)\] \[=[\check{\varphi}(x)](y)\]
Establishing that \(\check{\varphi}(x)\) is always a singleton over \(Y\). Now, with regards to it being a morphism, just above we have also shown that \(\operatorname{E}\check{\varphi}(x)=\operatorname{E}x\), so only one axiom remains:
\[\delta(\check{\varphi}(x),\check{\varphi}(x^{\prime})) =\bigvee_{y}\varphi(x,y)\otimes\varphi(x^{\prime},y)\] \[\geq\bigvee_{y}\varphi(x,y)\otimes\varphi(x,y)\otimes\delta(x,x^ {\prime})\] \[\geq\delta(x,x^{\prime})\otimes\bigvee_{y}\varphi(x,y)\otimes \varphi(x,y)\] \[=\delta(x,x^{\prime})\otimes\operatorname{E}x\] \[=\delta(x,x^{\prime})\]
**Remark 5.3:**
It is not _necessary_ for \(\mathscr{Q}\) to be strong for the question of Scott completeness to make sense; but it is not apparent how one may perform Scott-_completion_ without that added strength. We will talk about completion now.
**Definition 5.5:**
We denote by \(\mathbf{\Sigma}\mathscr{Q}\)-**Set** the full subcategory of \(\mathscr{Q}\)-**Set\({}_{f}\)** whose objects are Scott-complete.
**Definition 5.6** (Separability)::
A \(\mathscr{Q}\)-set is said to be _separable_ when the mapping \(x\mapsto\delta(\_,x)\) is _injective_; i.e. it is the Scott-completeness condition sans surjectivity.
**Proposition 5.4:**
Separability is equivalent to Extensionality.
Proof.: Immediate from what was proven in the definition of \(\delta\)-equivalence.
### Scott Completion
Scott-completion is, unsurprisingly, going to be a reflector functor from \(\mathscr{Q}\)-**Set** to the full subcategory \(\mathbf{\Sigma}\mathscr{Q}\)-**Set**; in this subsection we will therefore show that \(\mathbf{\Sigma}(X)\) is indeed Scott-complete and that the object assignment can be made functorial - in such a way as to obtain a left adjoint to the fully faithful subcategory inclusion.
**Lemma 5.2** ([3] cf. p. 158)::
If \(\sigma_{x}\) denotes \(\delta(\_,x)\), then
\[\delta(\sigma_{x},\xi)=\xi(x)\]
a result akin to the Yoneda lemma, wherein
\[\mathbf{hom}(\mathrm{y}(x),F)\cong F(x)\]
Proof.: \[\delta(\sigma_{x},\xi) =\bigvee_{y}\delta(y,x)\otimes\xi(y)\] \[\leq\xi(x)\] \[=\operatorname{E}x\otimes\xi(x)\] \[=\delta(x,x)\otimes\xi(x)\] \[\leq\bigvee_{z}\delta(z,x)\otimes\xi(z)\] \[=\delta(\sigma_{x},\xi)\]
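The Yoneda-like identity can also be confirmed by brute force on the finite running example (Python; the same illustrative conventions as before): enumerate all singletons \(\xi\) over \(\operatorname{E}\mathscr{Q}\) and check \(\delta(\sigma_{x},\xi)=\xi(x)\) for every \(x\).

```python
# Brute-force check (illustrative) of the Yoneda-like identity delta(σ_x, ξ) = ξ(x)
# for every singleton ξ over E Q (quantale P({1,2}), ⊗ = ∧).
from itertools import combinations, product

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

Q = powerset({1, 2})
tensor = lambda a, b: a & b
sup = lambda vals: frozenset().union(*vals) if vals else frozenset()

X = Q                                   # E Q: delta = ∧, E x = x
delta = tensor
extent = lambda x: x

def is_singleton(sigma):
    s = sup([sigma[x] for x in X])
    return (all(tensor(sigma[x], extent(x)) == sigma[x] for x in X)
        and all(tensor(sigma[x], delta(x, y)) <= sigma[y] for x in X for y in X)
        and all(tensor(sigma[x], sigma[y]) <= delta(x, y) for x in X for y in X)
        and all(tensor(sigma[x], s) == sigma[x] for x in X))

all_singletons = [dict(zip(X, v)) for v in product(Q, repeat=len(X))
                  if is_singleton(dict(zip(X, v)))]

d_sing = lambda s, t: sup([tensor(s[x], t[x]) for x in X])      # delta on Σ(X)

for x in X:
    sigma_x = {y: delta(y, x) for y in X}                       # the representable σ_x
    for xi in all_singletons:
        assert d_sing(sigma_x, xi) == xi[x]                     # Lemma 5.2
print("delta(σ_x, ξ) = ξ(x) for every x and every singleton ξ")
```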
**Remark 5.4:**
Notice that this means \(\delta(\sigma_{x},\sigma_{y})=\delta(x,y)\) and thus the mapping \(x\mapsto\sigma_{x}\) preserves \(\delta\) and hence is a morphism - it is in fact a regular monomorphism.
**Proposition 5.5**::
\(\mathbf{\Sigma}(X)\) is, indeed, Scott-complete.
Proof.: In this proof, we shall refer to members of \(\mathbf{\Sigma}(X)\) as \(1\)-singletons and use lowercase \(\xi,\psi,\varphi\) and refer to members of \(\mathbf{\Sigma}\circ\mathbf{\Sigma}(X)\) as \(2\)-singletons and use uppercase \(\Sigma\)_etc._
To go about doing this we must show two things: that two \(1\)-singletons cannot represent the same \(2\)-singleton; and that every \(2\)-singleton comes from a \(1\)-singleton.
For the purpose of showing that \(\xi\mapsto\sigma_{\xi}\) is injective, suppose that \(\xi\) and \(\psi\) are such that \(\sigma_{\xi}=\sigma_{\psi}\). Let \(x\in X\) and realize
\[\xi(x)=\delta(\sigma_{x},\xi)=\sigma_{\xi}(\sigma_{x})=\sigma_{\psi}(\sigma_{ x})=\delta(\sigma_{x},\psi)=\psi(x)\]
thence they are extensionally equal.
Now take a \(2\)-singleton \(\Sigma\), we shall define a \(1\)-singleton that we hope represents it; namely:
\[\varphi(x)=\Sigma(\sigma_{x})\]
one way to become convinced that \(\varphi\) is indeed a singleton over \(X\) is that the map \(\sigma_{(\_)}:x\mapsto\sigma_{x}\) preserves \(\delta\) and extents (Remark 5.4), so the singleton axioms for \(\Sigma\) transfer to \(\varphi\); moreover, its extent is the right one:
\[\begin{split}\operatorname{E}\Sigma&=\bigvee_{\xi}\Sigma( \xi)\otimes\Sigma(\xi)\\ &=\bigvee_{\xi}\Sigma(\xi)\\ &=\bigvee_{\xi}\Sigma(\xi)\otimes\operatorname{E}\xi\\ &=\bigvee_{x}\bigvee_{\xi}\Sigma(\xi)\otimes\xi(x)\\ &=\bigvee_{x}\bigvee_{\xi}\Sigma(\xi)\otimes\delta(\xi,\sigma_{x })\\ &\leq\bigvee_{x}\Sigma(\sigma_{x})\\ &=\bigvee_{x}\Sigma(\sigma_{x})\otimes\Sigma(\sigma_{x})\\ &=\operatorname{E}\varphi\end{split}\] (Strength)
Having thus proved that \(\boldsymbol{\Sigma}(X)\) is indeed Scott-complete, we now move on to the associated completion functor.
**Definition 5.7**::
We extend the action of \(\boldsymbol{\Sigma}\) by acting on functional morphisms as follows:
\[\begin{split} f:X&\to Y\\ \boldsymbol{\Sigma}\,f:\boldsymbol{\Sigma}(X)&\to \boldsymbol{\Sigma}(Y)\\ \xi&\mapsto\bigvee_{x\in X}\delta(f(x),\underline{ \ })\otimes\xi(x)\end{split}\]
Morally speaking, if \(\xi(x)\) measures how much \(x\) is akin to some abstract point, then \((\boldsymbol{\Sigma}\,f)(\xi)\) measures how close a point of \(Y\) is to the "image" of that abstract point under \(f\).
**Proposition 5.6**::
Associated to the previous definition, we must prove that \(\boldsymbol{\Sigma}\,f(\xi)\) is indeed a singleton; that this assignment is a morphism; and that the action on morphisms is functorial.
Proof.: First of all, for notational sanity denote \(\boldsymbol{\Sigma}\,f(\xi)\) by \(\xi_{f}\).
\[\xi_{f}(y)\otimes\operatorname{E}y =\bigvee_{x}\delta(f(x),y)\otimes\xi(x)\otimes\operatorname{E}y\] \[=\bigvee_{x}\delta(f(x),y)\otimes\xi(x)\] \[=\xi_{f}(y)\]
\[\xi_{f}(y)\otimes\delta(y,y^{\prime}) =\bigvee_{x}\delta(f(x),y)\otimes\delta(y,y^{\prime})\otimes\xi(x)\] \[\leq\bigvee_{x}\delta(f(x),y^{\prime})\otimes\xi(x)\] \[=\xi_{f}(y^{\prime})\]
\[\xi_{f}(y)\otimes\xi_{f}(y^{\prime}) =\bigvee_{x,x^{\prime}}\delta(f(x),y)\otimes\delta(f(x^{\prime}),y^{\prime})\otimes\xi(x)\otimes\xi(x^{\prime})\] \[\leq\bigvee_{x,x^{\prime}}\delta(f(x),y)\otimes\delta(f(x^{ \prime}),y^{\prime})\otimes\delta(x,x^{\prime})\] \[\leq\bigvee_{x,x^{\prime}}\delta(f(x),y)\otimes\delta(f(x^{ \prime}),y^{\prime})\otimes\delta(f(x),f(x^{\prime}))\] \[\leq\bigvee_{x^{\prime}}\delta(f(x^{\prime}),y)\otimes\delta(f(x ^{\prime}),y^{\prime})\] \[\leq\delta(y,y^{\prime})\]
\[\xi_{f}(y)\otimes\bigvee_{y^{\prime}}\xi_{f}(y^{\prime}) =\bigvee_{y^{\prime}}\bigvee_{x^{\prime}}\xi_{f}(y)\otimes\delta(f(x^{\prime}),y^{\prime})\otimes\xi(x^{\prime})\] \[=\bigvee_{x^{\prime}}\xi_{f}(y)\otimes\delta(f(x^{\prime}),f(x^{\prime}))\otimes\xi(x^{\prime})\] \[=\bigvee_{x^{\prime}}\xi_{f}(y)\otimes\operatorname{E}x^{\prime}\otimes\xi(x^{\prime})\] \[=\bigvee_{x^{\prime}}\xi_{f}(y)\otimes\xi(x^{\prime})\] \[=\bigvee_{x}\delta(f(x),y)\otimes\xi(x)\otimes\bigvee_{x^{\prime}}\xi(x^{\prime})\] \[=\bigvee_{x}\delta(f(x),y)\otimes\xi(x)\] \[=\xi_{f}(y)\]
The second thing we ought to do is to show that the map \(\boldsymbol{\Sigma}\,f\) is a morphism; that is rather easy actually. Let \(\psi_{f}\) denote the image of \(\psi\) under \(\boldsymbol{\Sigma}\,f\), as we had done with \(\xi\).
\[\delta(\xi_{f},\psi_{f}) =\bigvee_{y}\xi_{f}(y)\otimes\psi_{f}(y)\] \[=\bigvee_{y}\bigvee_{x,x^{\prime}}\delta(f(x),y)\otimes\delta(f(x^{\prime}),y)\otimes\xi(x)\otimes\psi(x^{\prime})\] \[\geq\bigvee_{y}\bigvee_{x}\delta(f(x),y)\otimes\delta(f(x),y)\otimes\xi(x)\otimes\psi(x)\] (by letting \(x=x^{\prime}\)) \[\geq\bigvee_{x}\operatorname{E}f(x)\otimes\operatorname{E}f(x)\otimes\xi(x)\otimes\psi(x)\] (by letting \(y=f(x)\)) \[=\bigvee_{x}\xi(x)\otimes\psi(x)\] \[=\delta(\xi,\psi)\]
\[\xi_{f}(y) \geq\xi_{f}(y)\otimes\bigvee_{y^{\prime}}\xi_{f}(y^{\prime})\] (semi-cartesian) \[=\bigvee_{y^{\prime}}\bigvee_{x^{\prime}}\xi_{f}(y)\otimes \delta(f(x^{\prime}),y^{\prime})\otimes\xi(x^{\prime})\] \[\geq\bigvee_{x^{\prime}}\xi_{f}(y)\otimes\delta(f(x^{\prime}),f( x^{\prime}))\otimes\xi(x^{\prime})\] (by letting \[y^{\prime}=f(x^{\prime})\] ) \[=\bigvee_{x^{\prime}}\xi_{f}(y)\otimes\operatorname{E}f(x^{ \prime})\otimes\xi(x^{\prime})\] \[=\xi_{f}(y)\otimes\bigvee_{x^{\prime}}\xi(x^{\prime})\] \[=\bigvee_{x}\delta(f(x),y)\otimes\xi(x)\otimes\bigvee_{x^{\prime }}\xi(x^{\prime})\] \[=\bigvee_{x}\delta(f(x),y)\otimes\xi(x)\] \[=\xi_{f}(y)\]
The final problem to be unravelled is functoriality. We must show that \(\operatorname{\mathbf{id}}\) is sent to \(\operatorname{\mathbf{id}}\) and composition is preserved. \(\operatorname{\boldsymbol{\Sigma}}\operatorname{\mathbf{id}}\) is defined as
\[[\operatorname{\boldsymbol{\Sigma}}\operatorname{\mathbf{id}}(\xi)](x) =\bigvee_{x^{\prime}}\delta(x,x^{\prime})\otimes\xi(x^{\prime})\] \[\geq\operatorname{E}x\otimes\xi(x)=\xi(x)\] \[\geq\delta(x,x^{\prime})\otimes\xi(x^{\prime}),\ \forall x^{\prime}\]
and hence by universality of the supremum, we have the desired equality:
\[(\operatorname{\boldsymbol{\Sigma}}\operatorname{\mathbf{id}})(\xi)=\xi\]
Now, take \(X\xrightarrow{f}Y\xrightarrow{g}Z\)
\[[\boldsymbol{\Sigma}(g\circ f)(\xi)](z) =\bigvee_{x}\delta(g\circ f(x),z)\otimes\xi(x)\] \[=\bigvee_{x}\bigvee_{y}\delta(f(x),y)\otimes\xi(x)\otimes\delta(g(y),z)\] \[=\bigvee_{y}\delta(g(y),z)\otimes\left(\bigvee_{x}\delta(f(x),y)\otimes\xi(x)\right)\] \[=\bigvee_{y}\delta(g(y),z)\otimes[(\boldsymbol{\Sigma}\,f)(\xi)](y)\] \[=[(\boldsymbol{\Sigma}\,g)((\boldsymbol{\Sigma}\,f)(\xi))](z)\] \[=[(\boldsymbol{\Sigma}\,g)\circ(\boldsymbol{\Sigma}\,f)(\xi)](z)\]
Thus, we have shown that \(\boldsymbol{\Sigma}\) is a functor from \(\mathscr{Q}\)-\(\mathbf{Set}_{f}\) to \(\boldsymbol{\Sigma}\mathscr{Q}\)-\(\mathbf{Set}\).
**Theorem 5.3**::
We aim to show that \(\boldsymbol{\Sigma}\) is indeed a completion, in the sense that it is the left adjoint to the fully faithful inclusion of \(\boldsymbol{\Sigma}\mathscr{Q}\)-\(\mathbf{Set}\); yielding that if \(K\) is Scott-complete,
\[\mathscr{Q}\text{-}\mathbf{Set}(X,K)\cong\mathscr{Q}\text{-}\mathbf{Set}( \boldsymbol{\Sigma}\,X,K)\]
Proof.: Again, we shall proceed by finding units and counits. Once more let us rename \(\boldsymbol{\Sigma}\) to \(L\) and dub the inclusion functor simply \(R\), after their roles in the adjunction. We ought to find \(\eta\) and \(\varepsilon\) as we had in the gluing-completion proof.
The unit, \(\eta\) is straightforward, it is the associated singleton morphism \(x\mapsto\delta(\underline{\phantom{x}},x)\) - which we know to be a morphism already, and hence we need only to show naturality from \(\mathbf{id}\) to \(R\circ L\).
The naturality square for \(\eta\) commutes because \((\boldsymbol{\Sigma}\,f)(\sigma_{x})=\sigma_{f(x)}\): indeed, \(\bigvee_{x^{\prime}}\delta(f(x^{\prime}),\_)\otimes\delta(x^{\prime},x)=\delta(f(x),\_)\).
The counit is also rather straightforward; we start with a Scott complete \(X\), we forget all about completeness and perform a completion. Since we know that \(x\mapsto\sigma_{x}\) is bijective (by definition of completeness) and that this mapping preserves \(\delta\), we know that \(X\) is isomorphic to \(\mathbf{\Sigma}(X)\). So we need only show that this map is a _natural_ isomorphism. Here \(\varepsilon\) takes a singleton (which is always representable) to its unique representing element.
The corresponding naturality square is obviously commutative. Recapping the definitions,
\[\eta_{X}:X \rightarrow\mathbf{\Sigma}(X)\] \[x \mapsto\delta(\underline{\phantom{x}},x)\]
\[L\eta:L \xrightarrow{}L\circ R\circ L\] \[(X\in\mathscr{Q}\text{-}\mathbf{Set})\mapsto(\xi\xrightarrow{ (L\eta)_{X}}\bigvee_{x}\delta(\underline{\phantom{x}},\eta(x))\otimes\xi(x))\]
As we know from the Yoneda-esque lemma, the above can be simplified into the following:
\[L\eta:L \xrightarrow{}L\circ R\circ L\] \[(X\in\mathscr{Q}\text{-}\mathbf{Set})\mapsto(\xi\xmapsto{(L\eta)_{X}}\bigvee_{x}(\underline{\ })(x)\otimes\xi(x))\]
which we know to be simply \(\delta(\underline{\_},\xi)\). As for \(\varepsilon L\), we first recall what is the representing element of a 2-singleton \(\Sigma\). It is what we had called \(\varphi\) - which was the action of \(\Sigma\) on representable singletons, or more helpfully, the composite \(\Sigma(\sigma\underline{\_})\) or even better: \(\Sigma\circ\eta\).
\[\varepsilon L:L\circ R\circ L\xrightarrow{\ \ }L\] \[(X\in\mathscr{Q}\text{-}\mathbf{Set})\mapsto(\Sigma\xmapsto{(\varepsilon L)_{X}}\Sigma\circ\eta)\] \[R\varepsilon:R\circ L\circ R\xrightarrow{\ \ }R\] \[(X\in\boldsymbol{\Sigma}\mathscr{Q}\text{-}\mathbf{Set})\mapsto(\delta(\underline{\_},x)\xmapsto{(R\varepsilon)_{X}}x)\] \[\eta R:R\xrightarrow{\ \ }R\circ L\circ R\] \[(X\in\boldsymbol{\Sigma}\mathscr{Q}\text{-}\mathbf{Set})\mapsto(x\xmapsto{(\eta R)_{X}}\delta(\underline{\_},x))\]
Tracing \(x\) along \((R\varepsilon)\circ(\eta R)\) we find
\[x\xmapsto{(\eta R)_{X}}\delta(\underline{\_},x)\xmapsto{(R\varepsilon)_{X}}x\]
and now, tracing \(\xi\) along \((\varepsilon L)\circ(L\eta)\), we get
\[\xi\xmapsto{(L\eta)_{X}}\delta(\underline{\_},\xi)\xmapsto{(\varepsilon L)_{ X}}\delta(\sigma\underline{\_},\xi)\]
which, again, by the Yoneda-esque lemma is \(\xi(\underline{\_})\), which is extensionally equal to \(\xi\), of course. Thus our two composites were equal to the appropriate identities, establishing the adjunction.
We already knew the counit was an isomorphism, but since it is the counit of an adjunction with a fully faithful right adjoint that would have proved it as well.
## 6 Connection between Completeness Conditions
**Theorem 6.1**::
Let \((X,\delta)\) be an extensional \(\mathscr{Q}\)-set. The following conditions are equivalent:
1. \((X,\delta)\) is Scott-complete.
2. \((X,\delta)\) is gluing-complete and, for each singleton \(\sigma\) over \((X,\delta)\), the condition below holds: \((*_{\sigma})\) For each \(x\in X\), there is \(y\in X\) such that \(\sigma(x)\leq\sigma(y)=\operatorname{E}y\).
We have already mentioned that Scott-completeness implies Gluing-completeness; and hence there is another fully faithful subcategory inclusion at play, forming the triangle of inclusions \(\boldsymbol{\Sigma}\mathscr{Q}\text{-}\mathbf{Set}\hookrightarrow\mathcal{G}\mathscr{Q}\text{-}\mathbf{Set}\hookrightarrow\mathscr{Q}\text{-}\mathbf{Set}_{f}\).
The question is whether there is a functor left adjoint to that lonely inclusion \(\boldsymbol{\Sigma}\mathscr{Q}\text{-}\mathbf{Set}\hookrightarrow\mathcal{G}\mathscr{Q}\text{-}\mathbf{Set}\). The answer, not surprisingly, is yes, and it is given by the only composite going in that direction. Now, this isn't very spectacular, and it comes directly from the **hom** isomorphism and the categories involved being full _etc._
The more interesting categorical property is the characterization of Scott-completeness; which we shall work towards in this next subsection.
### Scott-Completeness and Relational Morphisms
Scott-completeness is deeply tied to relational morphisms - which might be surprising since we have deliberately not dealt with \(\mathscr{Q}\text{-}\mathbf{Set}_{r}\) in the context of completeness thus far. The reason is that singleton completeness is _too_ deeply tied to relational morphisms.
**Theorem 6.2:**
\[\mathbf{\Sigma}\mathscr{Q}\text{-}\mathbf{Set}_{r}\simeq\mathscr{Q}\text{-} \mathbf{Set}_{r}\]
Proof.: We proceed by providing a fully faithful functor that is essentially surjective; in this case, it is easier to show that the full subcategory inclusion is essentially surjective, since it is by necessity fully faithful.
This then amounts to providing an isomorphism between any object and a Scott-complete one. In this case we have an obvious candidate, that being \(\mathbf{\Sigma}(X)\). Define
\[\varphi:X \to\mathbf{\Sigma}(X)\] \[(x,\xi) \mapsto\xi(x)\]
\[\varphi^{-1}:\mathbf{\Sigma}(X) \to X\] \[(\xi,x) \mapsto\xi(x)\]
We ought to prove that both of them are morphisms first, but should the reader grant us a moratorium on that we could argue that
\[\varphi\circ\varphi^{-1}(\xi,\psi) =\bigvee_{x}\xi(x)\otimes\psi(x)\] \[=\delta(\xi,\psi)\]
\[\varphi^{-1}\circ\varphi(x,x^{\prime}) =\bigvee_{\xi}\xi(x)\otimes\xi(x^{\prime})\] \[\geq\delta(x,x)\otimes\delta(x,x^{\prime})\] \[=\delta(x,x^{\prime})\]
hence, their composites are pointwise greater than or equal to the identities; but we know that under strong quantales, the pointwise order on **hom**s is in fact discrete. So they are actually equal.
It remains to be seen that as defined, both \(\varphi\) and its supposed inverse are actually morphisms. To it, then. It should be obvious why the extensionality axioms should hold:
\[\xi(x)\otimes\operatorname{E}\xi =\xi(x)\otimes\bigvee_{x^{\prime}}\xi(x^{\prime})\] \[=\xi(x)\] \[=\xi(x)\otimes\operatorname{E}x\]
The \(\delta\) laws also come for free:
\[\delta(x,x^{\prime})\otimes\xi(x) \leq\xi(x^{\prime})\] (subset axiom) \[\delta(\xi,\psi)\otimes\xi(x) =\delta(\xi,\psi)\otimes\delta(\sigma_{x},\xi)\] \[\leq\delta(\sigma_{x},\psi)\] \[=\psi(x)\]
The singleton condition holds trivially as well - due to how \(\delta(\xi,\psi)\) is defined.
\[\varphi(x,\xi)\otimes\varphi(x,\psi) =\xi(x)\otimes\psi(x)\] \[\leq\bigvee_{x}\xi(x)\otimes\psi(x)\] \[=\delta(\xi,\psi)\]
\[\varphi^{-1}(\xi,x)\otimes\varphi^{-1}(\xi,x^{\prime}) =\xi(x)\otimes\xi(x^{\prime})\] ( \[\xi\] is a singleton) \[\leq\delta(x,x^{\prime})\]
remaining only to show strictness
\[\bigvee_{\xi}\varphi(x,\xi)=\bigvee_{\xi}\xi(x)\geq\delta(x,x)=\operatorname{E}x\geq\xi(x)\ \ \forall\xi,\]
where the first inequality follows by taking \(\xi=\sigma_{x}\),
and hence \(\varphi\) is indeed strict. Now for \(\varphi^{-1}\):
\[\bigvee_{x}\varphi^{-1}(\xi,x) =\bigvee_{x}\xi(x)\] \[=\operatorname{E}\xi\]
**Theorem 6.3**:: \[\boldsymbol{\Sigma}\mathscr{Q}\text{-}\mathbf{Set}_{f}\cong\boldsymbol{ \Sigma}\mathscr{Q}\text{-}\mathbf{Set}_{r}\]
Recall the underlying graph functor \(\mathcal{G}_{\mathbb{R}}\); we propose that its restriction to \(\boldsymbol{\Sigma}\mathscr{Q}\text{-}\mathbf{Set}_{f}\) has an inverse, and we give it explicitly now:
\[\mathcal{F}_{\mathbb{N}}:\boldsymbol{\Sigma}\mathscr{Q}\text{-}\mathbf{Set}_ {r}\to\boldsymbol{\Sigma}\mathscr{Q}\text{-}\mathbf{Set}_{f}\]
is a functor that "remembers" that relational morphisms between Scott-complete \(\mathscr{Q}\)-sets are always functional. Its action on objects is the identity. As we had mentioned in the previous subsection, relational morphisms \(\varphi:X\to Y\) induce a functional morphism \(\tilde{\varphi}:X\to\boldsymbol{\Sigma}(Y)\) given by \(x\mapsto\varphi(x,\underline{\phantom{x}})\).
Since \(Y\) is Scott-complete, it is naturally isomorphic to \(\boldsymbol{\Sigma}(Y)\) and hence each \(\tilde{\varphi}(x)\) corresponds to some \(y_{\tilde{\varphi}(x)}\), which is the unique point in \(Y\) representing \(\tilde{\varphi}(x)\). We claim that this mapping is a functional morphism \(X\to Y\) (easy) and that this action on morphisms is functorial, and assign it to \(\mathcal{F}_{\mathbb{N}}\); we further claim that \(\mathcal{F}_{\mathbb{N}}\) is the inverse of \(\mathcal{G}_{\mathbb{R}}\mid_{\boldsymbol{\Sigma}\mathscr{Q}\text{-}\mathbf{Set}_{f}}^{\boldsymbol{\Sigma}\mathscr{Q}\text{-}\mathbf{Set}_{r}}\) - which is a nasty thing to typeset, so we'll just say \(\mathcal{G}_{\mathbb{R}}\).
Proof.: There is quite a lot to unpack; formally, we say that if we define \(\mathcal{F}_{\mathbb{N}}(\varphi)=\varepsilon\circ\tilde{\varphi}\), then this action is both functorial and makes \(\mathcal{F}_{\mathbb{N}}\) and \(\mathcal{G}_{\mathbb{R}}\) mutual inverses -- where \(\varepsilon\) is the counit of the adjunction, which takes singletons over Scott-complete \(\mathscr{Q}\)-sets to their representing elements.
Let's write, for a given \(\varphi\), \(\mathcal{F}_{\mathbb{N}}(\varphi)\) as \(f_{\varphi}\) for simplicity sake. First, note that \(\varphi(x,f_{\varphi}(x))=\operatorname{E}x\) - as \(\varphi(x,\underline{\phantom{x}})=\delta(\underline{\phantom{x}},f_{\varphi}(x))\) and \(\operatorname{E}f_{\varphi}(x)=\operatorname{E}x\). Obviously, \(\mathcal{F}_{\mathbb{N}}\) preserves identities, as
\[f_{\mathbf{id}}=\varepsilon\circ(x\mapsto\delta(\underline{\phantom{x}},x))=(x\mapsto x)=\mathbf{id}\]
functoriality can now be seen to hold: given \(x\), consider the singleton \([\psi\circ\varphi](x,\_)\):
\[\bigvee_{y}\varphi(x,y)\otimes\psi(y,\_) \geq\varphi(x,f_{\varphi}(x))\otimes\psi(f_{\varphi}(x),\_)\] \[=\operatorname{E}f_{\varphi}(x)\otimes\psi(f_{\varphi}(x),\_)\] \[=\psi(f_{\varphi}(x),\_)\] \[\geq\psi(y,\_)\otimes\delta(f_{\varphi}(x),y) (\text{for any }y)\] \[=\psi(y,\_)\otimes\varphi(x,y)\]
and hence \([\psi\circ\varphi](x,\_)=\psi(f_{\varphi}(x),\_)\), from which we can conclude that
\[f_{\psi\circ\varphi}=f_{\psi}\circ f_{\varphi}\]
We ought to show that \(\mathcal{G}_{\mathbb{R}}\circ\mathcal{F}_{\mathbb{N}}(\varphi)=\varphi\) and \(\mathcal{F}_{\mathbb{N}}\circ\mathcal{G}_{\mathbb{R}}(f)=f\). Let's go about doing it in the order we've written. Take some relational morphism \(\varphi:X\to Y\) and let's consider the following:
\[[\mathcal{G}_{\mathbb{R}}\circ\mathcal{F}_{\mathbb{N}}(\varphi)] (x,y) =[\mathcal{G}_{\mathbb{R}}(\varepsilon\circ\hat{\varphi})](x,y)\] \[=\delta(\varepsilon\circ\hat{\varphi}(x),y)\] \[=\delta(\hat{\varphi}(x),\sigma_{y})\] \[=\hat{\varphi}(x)(y)\] \[=\varphi(x,y)\]
Now take some functional morphism \(f:X\to Y\)
\[[\mathcal{F}_{\mathbb{N}}\circ\mathcal{G}_{\mathbb{R}}(f)](x)=[\mathcal{F}_{\mathbb{N}}(\delta(f(\underline{\phantom{x}}_{1}),\underline{\phantom{x}}_{2}))](x)=\varepsilon(\delta(f(x),\underline{\phantom{x}}))=f(x)\]
And therefore, we have shown that the composites are indeed the identities and \(\mathcal{F}_{\mathbb{N}}\) witnesses the fact that \(\mathbf{\Sigma}\mathscr{Q}\text{-}\mathbf{Set}_{r}\cong\mathbf{\Sigma}\mathscr{Q}\text{-}\mathbf{Set}_{f}\).
**Lemma 6.1**:: \[\mathcal{G}_{\mathbb{R}}\circ\mathbf{\Sigma}\cong\mathcal{G}_{\mathbb{R}}\]
Where \(\mathcal{G}_{\mathbb{R}}\circ\mathbf{\Sigma}\) really ought to be the composite of \(\mathbf{\Sigma}\) with the appropriate restriction of \(\mathcal{G}_{\mathbb{R}}\) to the full subcategory that is the image of \(\mathbf{\Sigma}\). Or, alternatively, \(\mathcal{G}_{\mathbb{R}}\circ i\circ\mathbf{\Sigma}\).
Proof.: We must provide an \(\alpha_{X}\) for every \(X\) such that
which simplifies to finding \(\alpha_{X}\) such that the following commutes, thus justifying our lack of precision in the "clean" statement of the lemma:
if we then let \(\alpha\) be the relational isomorphism we already know exists between \(X\) and \(\boldsymbol{\Sigma}(X)\) there is a good chance naturality holds by magic. We need to verify that
\[\alpha\circ[\mathcal{G}_{\mathbb{R}}(f)]=[\mathcal{G}_{\mathbb{R}}\circ \boldsymbol{\Sigma}(f)]\circ\alpha\]
We do so by _cheating_, and proving instead that
\[\alpha\circ[\mathcal{G}_{\mathbb{R}}(f)]\geq[\mathcal{G}_{\mathbb{R}}\circ \boldsymbol{\Sigma}(f)]\circ\alpha\]
and then we remember that for relational morphisms \(\leq\)_is_\(=\) because \(\mathscr{Q}\) is **strong**.
\[[\mathcal{G}_{\mathbb{R}}\circ\boldsymbol{\Sigma}(f)]\circ \alpha(x,\psi) =\bigvee_{\xi}\xi(x)\otimes\delta(\boldsymbol{\Sigma}\,f(\xi),\psi)\] \[=\bigvee_{\xi}\xi(x)\otimes\bigvee_{y}\bigvee_{x^{\prime}}\delta(f(x^{\prime}),y)\otimes\xi(x^{\prime})\otimes\psi(y)\] \[=\bigvee_{y}\bigvee_{x^{\prime}}\bigvee_{\xi}\xi(x)\otimes\delta(f(x^{\prime}),y)\otimes\xi(x^{\prime})\otimes\psi(y)\] \[\leq\bigvee_{y}\bigvee_{x^{\prime}}\delta(x,x^{\prime})\otimes\delta(f(x^{\prime}),y)\otimes\psi(y)\] \[\leq\bigvee_{y}\bigvee_{x^{\prime}}\delta(f(x),f(x^{\prime}))\otimes\delta(f(x^{\prime}),y)\otimes\psi(y)\] \[\leq\bigvee_{y}\delta(f(x),y)\otimes\psi(y)\] \[=\alpha\circ[\mathcal{G}_{\mathbb{R}}(f)](x,\psi)\]
Now we are ready to state a more categorical description of Scott-completeness, merely in terms of representable functors _etc_.
**Theorem 6.4**::
Let \(\upharpoonright\mathcal{G}_{\mathbb{R}}\) denote the functor between the presheaf categories of \(\mathscr{Q}\)-\(\mathbf{Set}_{r}\) and \(\mathscr{Q}\)-\(\mathbf{Set}_{f}\) as below:
\[\begin{CD}\mathscr{Q}\text{-}\mathbf{Set}_{f}@>{\mathcal{G}_{\mathbb{R}}}>>\mathscr{Q}\text{-}\mathbf{Set}_{r}\\ @V{\boldsymbol{\mathsf{PSh}}}VV@VV{\boldsymbol{\mathsf{PSh}}}V\\ \boldsymbol{\mathsf{PSh}}(\mathscr{Q}\text{-}\mathbf{Set}_{f})@<{\upharpoonright\mathcal{G}_{\mathbb{R}}}<<\boldsymbol{\mathsf{PSh}}(\mathscr{Q}\text{-}\mathbf{Set}_{r})\end{CD}\]
Given by precomposing presheaves with the functor \(\mathcal{G}_{\mathbb{R}}\) so as to change their domains appropriately. Then
\[\mathscr{Q}\text{-}\mathbf{Set}_{r}(\mathcal{G}_{\mathbb{R}}(\_),X)\cong\mathscr{Q}\text{-}\mathbf{Set}_{f}(\_,X)\implies X\text{ is Scott-complete}\]
Proof.: The presheaf \(\mathscr{Q}\text{-}\mathbf{Set}_{r}(\mathcal{G}_{\mathbb{R}}(\_),X)\) is naturally isomorphic to \(\mathscr{Q}\text{-}\mathbf{Set}_{f}(\_,\boldsymbol{\Sigma}(X))\), since relational morphisms into \(X\) correspond to functional morphisms into \(\boldsymbol{\Sigma}(X)\);
thence, by Yoneda we obtain that
\[\boldsymbol{\Sigma}(X)\cong_{f}X\]
But Scott-completeness is invariant under functional isomorphisms. So \(X\) was Scott-complete to begin with and thus ends our proof.
We then give a useful tool to show Scott-completeness in terms of an object's representable functor. Namely
**Theorem 6.5**:: \[X\text{ is Scott-complete}\iff\mathscr{Q}\text{-}\mathbf{Set}_{f}(\_,X)\cong\mathscr{Q}\text{-}\mathbf{Set}_{f}(\boldsymbol{\Sigma}(\_),X)\]
here again we actually wrote something slightly informal and we really mean that \(\mathscr{Q}\text{-}\mathbf{Set}_{f}(i\circ\boldsymbol{\Sigma}(\_),X)\) is isomorphic to \(\mathscr{Q}\text{-}\mathbf{Set}_{f}(\_,X)\). We can rewrite the former as
\[\mathscr{Q}\text{-}\mathbf{Set}_{f}(\_,X)\circ i\circ\boldsymbol{\Sigma}\]
Proof.: Scott-completeness implies the existence of the isomorphism since \(X\) as an object of \(\mathscr{Q}\text{-}\mathbf{Set}_{f}\) is just \(i(X)\) for \(X\) an object of \(\boldsymbol{\Sigma}\mathscr{Q}\text{-}\mathbf{Set}\), and hence adjunction applies:
\[\mathscr{Q}\text{-}\mathbf{Set}_{f}(\_,X) =\mathscr{Q}\text{-}\mathbf{Set}_{f}(\_,i(X))\] \[\cong\boldsymbol{\Sigma}\mathscr{Q}\text{-}\mathbf{Set}( \boldsymbol{\Sigma}(\_),X)\] \[=\mathscr{Q}\text{-}\mathbf{Set}_{f}(i\circ\boldsymbol{\Sigma}(\_ ),i(X))\] \[=\mathscr{Q}\text{-}\mathbf{Set}_{f}(\boldsymbol{\Sigma}(\_),X)\]
Now, provided with the isomorphism but no guarantee that \(X\) is complete, we have to find sufficient reason for \(X\) to be complete.
\[\mathscr{Q}\text{-}\mathbf{Set}_{f}(\_,X) \cong\mathscr{Q}\text{-}\mathbf{Set}_{f}(\_,X)\circ i\circ\boldsymbol{\Sigma}\] \[=\boldsymbol{\Sigma}\mathscr{Q}\text{-}\mathbf{Set}_{f}(\boldsymbol{\Sigma}(\_),X)\] \[\cong\boldsymbol{\Sigma}\mathscr{Q}\text{-}\mathbf{Set}_{r}(\mathcal{G}_{\mathbb{R}}\circ\boldsymbol{\Sigma}(\_),\mathcal{G}_{\mathbb{R}}(X))\] \[=\boldsymbol{\Sigma}\mathscr{Q}\text{-}\mathbf{Set}_{r}(\mathcal{G}_{\mathbb{R}}\circ\boldsymbol{\Sigma}(\_),X)\] \[\cong\boldsymbol{\Sigma}\mathscr{Q}\text{-}\mathbf{Set}_{r}(\mathcal{G}_{\mathbb{R}}(\_),X)\] \[=\big[\mathscr{Q}\text{-}\mathbf{Set}_{r}(\_,X)\big]\upharpoonright\mathcal{G}_{\mathbb{R}}\]
and hence, by Theorem 6.4, \(X\) is Scott-complete.
**Proposition 6.1**:: The true force of the theorem above is that \(i\) in \(\boldsymbol{\Sigma}\dashv i\) not only preserves limits but indeed creates them.
Proof.: Take a diagram \(D\) on \(\boldsymbol{\Sigma}\mathscr{Q}\text{-}\mathbf{Set}\),
\[\mathscr{Q}\text{-}\mathbf{Set}_{f}(\_,\lim i\circ D) \cong\lim\mathscr{Q}\text{-}\mathbf{Set}_{f}(\_,i\circ D)\] \[\cong\lim\mathscr{Q}\text{-}\mathbf{Set}_{f}(\_,i\circ D)\circ i\circ\boldsymbol{\Sigma}\] \[\cong\mathscr{Q}\text{-}\mathbf{Set}_{f}(\_,\lim i\circ D)\circ i\circ\boldsymbol{\Sigma}\]
This shows that the external limit (i.e. the limit taken in \(\mathscr{Q}\)-\(\mathbf{Set}_{f}\)) of complete \(\mathscr{Q}\)-sets is itself complete.
Since any cone for \(D\) - say, with vertex \(X\) - would _be_ a cone for \(i\circ D\) (as \(i\) is simply an inclusion, they literally are) - by universality there is exactly one morphism \(X\to\lim i\circ D\) making everything commute.
Since the external limit is now known to be complete, by necessity, it also _is_ the limit in \(\mathbf{\Sigma}\mathscr{Q}\)-\(\mathbf{Set}\) as well. Hence, \(i\) both preserves all limits from its domain, and reflects all limits in its codomain.
|
2304.07779 | The Contour integral method for Feynman-Kac equation with two internal
states | We develop the contour integral method for numerically solving the
Feynman-Kac equation with two internal states [P. B. Xu and W. H. Deng, Math.
Model. Nat. Phenom., 13 (2018), 10], describing the functional distribution of
particle's internal states. The striking benefits are obtained, including
spectral accuracy, low computational complexity, small memory requirement, etc.
We perform the error estimates and stability analyses, which are confirmed by
numerical experiments. | Fugui Ma, Lijing Zhao, Yejuan Wang, Weihua Deng | 2023-04-16T13:25:53Z | http://arxiv.org/abs/2304.07779v2 | # The Contour integral method for Feynman-Kac equation with two internal states
###### Abstract
We develop the contour integral method for numerically solving the Feynman-Kac equation with two internal states [P. B. Xu and W. H. Deng, Math. Model. Nat. Phenom., 13 (2018), 10], describing the functional distribution of particle's internal states. The striking benefits are obtained, including spectral accuracy, low computational complexity, small memory requirement, etc. We perform the error estimates and stability analyses, which are confirmed by numerical experiments.
keywords: Contour integral method, time marching scheme, Feynman-Kac equation, two internal states. MSC [2020]: 65B15, 65E10, 65L30, 65R10, 65Z05, 68Q17
## 1 Introduction
The weak singularity of the solution at the starting point and the non-locality of the time evolution operator of the Feynman-Kac equation [23] pose challenges to computational efficiency when solving the equation numerically. One of the most effective techniques to overcome these challenges is to obtain the solution analytically in the frequency domain and then perform the inverse Laplace transform numerically [13; 14; 17]. The contour integral method (CIM) is an efficient numerical method for computing the inverse Laplace transform [18; 17; 21].
Let us briefly introduce this method through the following toy model. Consider the time fractional initial value problem [15; 2]
\[\prescript{C}{0}D_{t}^{\alpha}\mathbf{u}(t)+A\mathbf{u}(t)=\mathbf{b}(t),\ \mathbf{u}(0)=\mathbf{u}_{0}, \tag{1}\]
where \(A\) is a matrix and \(\prescript{C}{0}D_{t}^{\alpha}\) is the Caputo fractional derivative [15] with \(t\in(0,T]\) and \(0<\alpha<1\). Taking Laplace transform on (1), one can get the solution of the system in Laplace space, namely,
\[\widetilde{\mathbf{u}}(z)=(z^{\alpha}I+A)^{-1}\left(z^{\alpha-1}\mathbf{u}_{0} +\widetilde{\mathbf{b}}(z)\right). \tag{2}\]
Further by performing the inverse Laplace transform on the solution in Laplace space, one gets the solution of the system (1), i.e.,
\[\mathbf{u}(t)=\frac{1}{2\pi i}\int_{\sigma-i\infty}^{\sigma+i\infty}e^{zt}\widetilde{\mathbf{u}}(z)dz,\ \ \ \ \sigma>\sigma_{0}, \tag{3}\]
where \(\sigma_{0}\) is the abscissa of convergence. In practice, due to the complexity of \(\widetilde{\mathbf{u}}(z)\) and the high dimension of the matrix \(A\), it is hard to get the analytical solution of (1) by using the inverse Laplace transform (3). Hence, numerical methods are usually used to approximate (3). The CIM is one of the most efficient numerical methods for this improper integral.
The earliest discussion of the CIM seems to have appeared in [18] by A. Talbot. Then, J. A. C. Weideman and other researchers gradually improved it and made it more efficient and widely applicable (see, e.g., [10; 16], etc). The basic
idea of the CIM is to deform the integration path: the original path of the inverse Laplace transform is a vertical line from negative infinity to positive infinity, which poses many numerical challenges, e.g., the high-frequency oscillation of the integrand. Fortunately, by deforming the vertical line into a curve that starts and ends in the left half of the complex plane, exponential decay of the integrand is obtained through the exponential factor \(e^{zt}\). Such a deformed path and the exponential decay of the integrand make it possible to solve an improper integral numerically, and Cauchy's integral theorem ensures that such a deformation can be carried out. More specifically, after deforming the integration path of (3) into a contour that satisfies \(Re(z)\to-\infty\) at each end, the exponential factor \(e^{zt}\) forces a rapid decay of the integrand as \(Re(z)\to-\infty\), which greatly benefits the convergence speed of numerical quadrature for the inverse Laplace transform.
Based on the aforementioned idea, we return to the original time fractional initial value problem (1). Suppose that there is an appropriate contour (for (1)) parameterized by
\[\Gamma:z=z(\phi),\ -\infty<\phi<\infty. \tag{4}\]
Then the solution of (3) can be rewritten as
\[\mathbf{u}(t)=\frac{1}{2\pi i}\int_{-\infty}^{\infty}e^{z(\phi)}\widetilde{ \mathbf{u}}(z(\phi))z^{\prime}(\phi)d\phi. \tag{5}\]
Approximating it by the trapezoidal rule with uniform step-length \(h\), there is
\[\mathbf{u}(t)\approx\frac{h}{2\pi i}\sum_{k=-\infty}^{\infty}e^{z_{k}t^{ \prime}}\widetilde{\mathbf{u}}_{k}z_{k}^{\prime}, \tag{6}\]
where \(z_{k}=z(\phi_{k})\), \(z_{k}^{\prime}=z^{\prime}(\phi_{k})\), \(\widetilde{\mathbf{u}}_{k}=\widetilde{\mathbf{u}}(z_{k})\) with \(\phi_{k}=kh\). If the contour \(\Gamma\) is symmetric with respect to the real axis, \(A\) is a real matrix, then \(\widetilde{u}(z)=\widetilde{u}(z)\), and after truncation, there is
\[\mathbf{u}(t)\approx\frac{h}{\pi}\text{Im}\left\{\sum_{k=0}^{N-1}e^{z_{k}t} \widetilde{\mathbf{u}}_{k}z_{k}^{\prime}\right\}. \tag{7}\]
This is the CIM scheme of (1).
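As an illustration of the scheme (7), the following minimal Python sketch (ours, not part of the original work, which was implemented in MATLAB) inverts the Laplace-space solution (2) of a scalar instance of (1) on the parabolic contour (18), using the single-time parameter choices (42) derived later in Section 3; for readability it sums over the full symmetric node set instead of folding the sum by conjugate symmetry as in (7).

```python
import numpy as np

def cim_parabolic(u_lap, t, N=30, a=0.9875):
    """Trapezoid-rule approximation of the inverse Laplace transform on the
    parabolic contour z(phi) = eta*(1j*phi + 1)**2 of Eq. (18), with the
    single-time parameters eta and h of Eq. (42)."""
    q = 1.0 + a - np.sqrt(a**2 + 2*a)
    eta = np.pi*np.sqrt(2*a*q**3)/(2*a)*N/t        # Eq. (42)
    h = np.sqrt(2*a*q)/q/N                         # Eq. (42)
    phi = np.arange(-(N - 1), N)*h                 # symmetric nodes phi_k = k*h
    z = eta*(1j*phi + 1.0)**2                      # contour points, Eq. (18)
    dz = 2j*eta*(1j*phi + 1.0)                     # z'(phi)
    return (h/(2j*np.pi)*np.sum(np.exp(z*t)*u_lap(z)*dz)).real

# Scalar instance of (1): A = lam, b = 0, so u_hat(z) = z**(alpha-1)/(z**alpha + lam)*u0.
lam, u0, t = 1.0, 1.0, 1.0
for alpha in (1.0, 0.6):
    u_hat = lambda z, al=alpha: z**(al - 1.0)/(z**al + lam)*u0
    print(f"alpha = {alpha}: u({t}) ~ {cim_parabolic(u_hat, t):.12f}")
# For alpha = 1 the exact solution is exp(-lam*t) = 0.367879441171...;
# the rule should reproduce it to roughly 10-12 digits in double precision.
```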
The key to designing an efficient CIM scheme is to determine the spectral distribution of the matrix \(A\), which dictates how to choose an appropriate integration contour \(\Gamma\).
The non-locality and the weakly singular kernel of the time fractional operator result in an \(\mathcal{O}\left(N^{2}\right)\) computational cost for time-marching schemes and in the weak singularity of the solution. Many efforts have been made to deal with this difficulty efficiently (see, e.g., [9, 4] etc). Compared with the time-stepping methods (see e.g. [4, 29]), the CIM scheme has the following pros and cons when solving nonlocal problems.
* Generally, the time-stepping methods need \(\mathcal{O}(N)\) memory and have a computational complexity of \(\mathcal{O}\left(N^{2}\right)\), while for the CIM scheme the required memory is \(\mathcal{O}(1)\) and the computational complexity is \(\mathcal{O}(N)\).
* For the time-stepping methods, the solution at a given later time depends on the previous ones, while for the CIM scheme the solution can be computed directly at any desired time, without information from earlier times.
* The computational cost of the CIM scheme mainly lies in evaluating (3), which can be computed fully in parallel.
* For the time-stepping methods, low regularity of the solution makes it hard to attain a high convergence rate; this issue has little influence on the CIM scheme.
* Nothing is perfect: although the CIM works well for linear models, it is difficult to apply directly to nonlinear ones.
All in all, the CIM is a simple, time-saving, and efficient numerical method. To build a CIM scheme, the key is to choose an appropriate integration contour, which depends on the spectral distribution of the matrix \(A\). Currently, there are four types of popular integration contours used for the CIM, namely, Talbot's contour [18], the parabolic contour, e.g., [21], the hyperbolic contour, e.g., [7; 24], and other simple, closed, and positively oriented curves, e.g., [16; 5]. The CIM with these contours can be used to solve parabolic problems, e.g., [10; 16], integro-differential equations with convolution memory kernels [11; 28], the Black-Scholes and Heston equations [8], and other problems, e.g., [25; 26]. In these applications, the CIM exhibits high numerical performance. This paper develops the CIM scheme for a time fractional differential system, i.e., the Feynman-Kac equation with two internal states [23].
The Feynman-Kac equation usually describes the distribution of a particular type of statistical observable, e.g., a functional of the particle trajectory [1; 22; 23]. The model considered in this paper characterizes a specific functional: \(A=\int_{0}^{t}U(j(\tau))d\tau\), where \(j(\tau)\) denotes the internal state occupied at time \(\tau\), with values belonging to \(\{1,2,\cdots,m\}\). The distribution of \(A\) in the frequency domain is governed by
\[\begin{cases}\mathbf{M}^{T}\frac{\partial}{\partial t}\mathbf{G}=\left( \mathbf{M}^{T}-\mathbf{I}\right)\mathrm{diag}\left(B_{\alpha_{1}}^{-1},B_{ \alpha_{2}}^{-1},\cdots,B_{\alpha_{m}}^{-1}\right)\mathrm{diag}\left(\mathfrak{ D}_{t}^{1-\alpha_{1}},\mathfrak{D}_{t}^{1-\alpha_{2}},\cdots,\mathfrak{D}_{t}^{1- \alpha_{m}}\right)\mathbf{G}\\ \qquad\qquad-\rho\mathbf{M}^{T}\mathrm{diag}\left(U(1),U(2),\cdot\cdot\cdot,U (m)\right)\mathbf{G},\quad t\in(0,T],\\ \mathbf{G}(\cdot,0)=\mathbf{G_{0}},\end{cases} \tag{8}\]
where \(M\) is the transition matrix of a Markov chain with dimension \(m\times m\); \(B_{\alpha_{j}}^{-1}\), \(j=1,2,\cdot\cdot\cdot,m\), are given positive real numbers; \(\mathbf{G}=[G_{1},G_{2},\cdot\cdot\cdot,G_{m}]^{T}\) denotes the solution of the model (8) with \(G_{j}:=G_{j}(\rho,t)\) represents the Laplace transform of \(G_{j}(A,t)\) w.r.t. \(A\); and \(G_{j}(A,t)\) is the PDF of finding the particle with the functional \(A\) in the \(j\)-th internal state at time \(t\); \(I\) is the identity matrix; 'diag' represents the diagonal matrix formed by its vector arguments; and \(\mathfrak{D}_{t}^{1-\alpha_{j}}\), \(j=1,2,\cdot\cdot\cdot,m\), are the fractional substantial derivatives, defined as
\[\mathfrak{D}_{t}^{1-\alpha_{j}}G_{j}(\rho,t):=\left(\frac{\partial}{\partial t}+\rho U(j)\right)\frac{1}{\Gamma(\alpha_{j})}\int_{0}^{t}\frac{\exp\left[-(t-\tau)\rho U(j)\right]}{(t-\tau)^{1-\alpha_{j}}}G_{j}(\rho,\;\tau)d\tau \tag{9}\]
with \(0<\alpha_{j}<1,\;j=1,2,\cdot\cdot\cdot,m\).
This paper is organized as follows. In Section 2, we give the regularity estimates on the solution of (8). In Section 3, the CIMs with parabolic contour and hyperbolic contour for the system (8) are built, respectively. We also perform the error estimates and stability analyses for these schemes. In addition, the parameters in the parabolic and hyperbolic contours are optimally determined. To verify the efficiency of the CIMs, we also construct a time-marching scheme to provide a reference solution. Some numerical experiments are performed in Section 4 to show the high numerical performance of the CIMs in solving such a non-local system. Concluding remarks are presented in Section 5.
## 2 The continuous problem
In this section, we perform the regularity analysis for Problem (8).
### Solution representations
We consider the Feynman-Kac equation with two internal states. Without loss of generality, the transition matrix \(M\) can be written as
\[M=\begin{bmatrix}p&1-p\\ 1-b&b\end{bmatrix}, \tag{10}\]
where \(p\), \(b\in[0,1]\), and \(p+b\neq 1\). Then Problem (8) reduces to
\[\begin{cases}p\left(\frac{\partial}{\partial t}+U(1)\rho\right)G_{1}+(1-b) \left(\frac{\partial}{\partial t}+U(2)\rho\right)G_{2}=(p-1)B_{\alpha_{1}}^{-1 }\mathfrak{D}_{t}^{1-\alpha_{1}}G_{1}+(1-b)B_{\alpha_{2}}^{-1}\mathfrak{D}_{t} ^{1-\alpha_{2}}G_{2},\;t\in(0,T],\\ (1-p)\left(\frac{\partial}{\partial t}+U(1)\rho\right)G_{1}+b\left(\frac{ \partial}{\partial t}+U(2)\rho\right)G_{2}=(1-p)B_{\alpha_{1}}^{-1}\mathfrak{D }_{t}^{1-\alpha_{1}}G_{1}+(b-1)B_{\alpha_{2}}^{-1}\mathfrak{D}_{t}^{1-\alpha _{2}}G_{2},\;t\in(0,T],\\ \mathbf{G}(\cdot,0)=\mathbf{G_{0}},\end{cases} \tag{11}\]
where \(\mathbf{G}_{0}=[G_{1,0},G_{2,0}]^{T}\) is the initial value. After simple calculations, we can obtain from (11) that
\[\left(\frac{\partial}{\partial t}+U(1)\rho\right)G_{1}+\left(\frac{\partial}{ \partial t}+U(2)\rho\right)G_{2}=0.\]
Then, there is
\[\begin{cases}\left(\frac{\partial}{\partial t}+U(1)\rho\right)G_{1}=\frac{1-p }{1-p-b}B_{\alpha_{1}}^{-1}\mathfrak{D}_{t}^{1-\alpha_{1}}G_{1}+\frac{1-b}{p+ b-1}B_{\alpha_{2}}^{-1}\mathfrak{D}_{t}^{1-\alpha_{2}}G_{2},\\ \left(\frac{\partial}{\partial t}+U(2)\rho\right)G_{2}=\frac{1-p}{p+b-1}B_{ \alpha_{1}}^{-1}\mathfrak{D}_{t}^{1-\alpha_{1}}G_{1}+\frac{1-b}{1-p-b}B_{ \alpha_{2}}^{-1}\mathfrak{D}_{t}^{1-\alpha_{2}}G_{2}.\end{cases} \tag{12}\]
Denote \(m_{1}:=\frac{1-p}{1-p-b}\), \(m_{2}:=\frac{1-b}{1-p-b}\). For \(0<\alpha_{1},\alpha_{2}<1\), if \(G_{j}\in I_{0^{+}}^{1-\alpha_{j}}[L_{1}(0,T)]:=\{f:f(t)={}_{0}I_{t}^{1-\alpha_{j}}\varphi(t),\varphi(t)\in L_{1}(0,T)\}\), then \({}_{0}D_{t}^{-\alpha_{j}}(e^{\rho U(j)t}G_{j})\big{|}_{t=0}=0\), \(j=1,2\) (we note that the space \(I_{0^{+}}^{1-\alpha_{j}}[L_{1}(0,T)]\) only excludes some extreme functions such as \(t^{-\beta}\), \(\beta<\alpha_{j}\), so it is not a harsh requirement for \(G_{j}\); see more details in [27]). By this, taking the Laplace transform on both sides of (12), we deduce
\[\begin{cases}\widehat{G}_{1}=\left((z+\rho U(1))-m_{1}B_{\alpha_{1}}^{-1}\left( z+\rho U(1)\right)^{1-\alpha_{1}}\right)^{-1}\left(-m_{2}B_{\alpha_{2}}^{-1} \left(z+\rho U(2)\right)^{1-\alpha_{2}}\widehat{G}_{2}+G_{1,0}\right),\\ \widehat{G}_{2}=\left((z+\rho U(2))-m_{2}B_{\alpha_{2}}^{-1}\left(z+\rho U(2) \right)^{1-\alpha_{2}}\right)^{-1}\left(-m_{1}B_{\alpha_{1}}^{-1}\left(z+\rho U (1)\right)^{1-\alpha_{1}}\widehat{G}_{1}+G_{2,0}\right).\end{cases} \tag{13}\]
Let
\[H(z):=\left\{[(z+\rho U(1))^{\alpha_{1}}-m_{1}B_{\alpha_{1}}^{-1}][(z+\rho U( 2))^{\alpha_{2}}-m_{2}B_{\alpha_{2}}^{-1}]-m_{1}m_{2}B_{\alpha_{1}}^{-1}B_{ \alpha_{2}}^{-1}\right\}^{-1}, \tag{14}\]
\[H_{\alpha_{1}}(z):=H(z)\left((z+\rho U(2))^{\alpha_{2}}-m_{2}B_{\alpha_{2}}^{-1 }\right), \tag{15}\]
and
\[H_{\alpha_{2}}(z):=H(z)\left((z+\rho U(1))^{\alpha_{1}}-m_{1}B_{\alpha_{1}}^{- 1}\right). \tag{16}\]
Then, the system (11) can be decoupled in Laplace space as
\[\begin{cases}\widehat{G}_{1}=H_{\alpha_{1}}(z)(z+\rho U(1))^{\alpha_{1}-1}G_ {1,0}-m_{2}B_{\alpha_{2}}^{-1}H(z)(z+\rho U(1))^{\alpha_{1}-1}G_{2,0},\\ \widehat{G}_{2}=H_{\alpha_{2}}(z)(z+\rho U(2))^{\alpha_{2}-1}G_{2,0}-m_{1}B_{ \alpha_{1}}^{-1}H(z)(z+\rho U(2))^{\alpha_{2}-1}G_{1,0}.\end{cases} \tag{17}\]
### Regularity
Define the sectors
\[\Sigma_{\theta,\delta}:=\left\{z\in\mathbb{C}:|z|>\delta>0,|\arg(z)|\leq\theta \right\},\quad\theta\in(\pi/2,\pi).\]
Take an integral contour
\[\Gamma_{\theta,\delta}:=\left\{z\in\mathbb{C}:|z|=\delta>0,|\arg(z)|\leq\theta \right\}\cup\left\{z\in\mathbb{C}:z=re^{\pm i\theta},r\geq\delta>0\right\},\]
oriented with an increasing imaginary part. Based on these analytic settings, we have the following estimates related to (11) or (17). See A for the proofs in details.
**Lemma 2.1**.: _Let \(z\in\Sigma_{\theta,\delta}\) and \(\delta\geq 2\max\{|\rho U(1)|,|\rho U(2)|\}\). For \(0<\alpha_{1},\alpha_{2}<1\), there hold_
\[\left|(z+\rho U(1))^{-\alpha_{1}}\right|\leq 2|z|^{-\alpha_{1}},\ \left|(z+\rho U(2))^{- \alpha_{2}}\right|\leq 2|z|^{-\alpha_{2}}.\]
**Lemma 2.2**.: _Let \(z\in\Sigma_{\theta,\delta}\) and \(\delta\geq\max\left\{2|\rho U(1)|,2|\rho U(2)|,4^{1/\alpha_{1}}|m_{1}B_{\alpha_ {1}}^{-1}|^{1/\alpha_{1}},4^{1/\alpha_{2}}|m_{2}B_{\alpha_{2}}^{-1}|^{1/\alpha_{2 }}\right\}\). For \(0<\alpha_{1},\alpha_{2}<1\), there hold_
\[\left|((z+\rho U(1))^{\alpha_{1}}-m_{1}B_{\alpha_{1}}^{-1})^{-1}\right|\leq 4|z| ^{-\alpha_{1}},\ \left|((z+\rho U(2))^{\alpha_{2}}-m_{2}B_{\alpha_{2}}^{-1})^{-1}\right| \leq 4|z|^{-\alpha_{2}}.\]
**Lemma 2.3**.: _Let \(z\in\Sigma_{\theta,\delta}\), \(\delta\geq\max\left\{2|\rho U(1)|,2|\rho U(2)|,4^{1/\alpha_{1}}|m_{1}B_{\alpha_{1}}^{-1}|^{1/\alpha_{1}},4^{1/\alpha_{2}}|m_{2}B_{\alpha_{2}}^{-1}|^{1/\alpha_{2}},\left(32|m_{1}m_{2}B_{\alpha_{1}}^{-1}B_{\alpha_{2}}^{-1}|\right)^{1/(\alpha_{1}+\alpha_{2})}\right\}\). For \(0<\alpha_{1},\alpha_{2}<1\), there hold_
\[|H(z)|\leq 32|z|^{-\alpha_{1}-\alpha_{2}},\ \left|H_{\alpha_{1}}(z)\right|\leq 8|z| ^{-\alpha_{1}},\ \left|H_{\alpha_{2}}(z)\right|\leq 8|z|^{-\alpha_{2}},\]
_where \(H(z)\), \(H_{\alpha_{1}}(z)\), and \(H_{\alpha_{2}}(z)\) are defined in (14), (15), and (16), respectively._
**Lemma 2.4**.: _Let \(z\in\Sigma_{\theta,\delta}\) and \(\delta\) satisfy the conditions in Lemma 2.3. For \(0<\alpha_{1},\alpha_{2}<1\), there hold_
\[\left|H_{\alpha_{1}}(z)(z+\rho U(1))^{\alpha_{1}-1}\right|\leq 16|z|^{-1},\ \left|H_{\alpha_{2}}(z)(z+\rho U(2))^{\alpha_{2}-1}\right|\leq 1 6|z|^{-1},\]
\[\left|H(z)(z+\rho U(1))^{\alpha_{1}-1}\right|\leq 64|z|^{-1-\alpha_{2}},\ \left|H(z)(z+\rho U(2))^{\alpha_{2}-1}\right|\leq 6 4|z|^{-1-\alpha_{1}}.\]
Based on the above results, one can get the estimates on the solutions of (11).
**Theorem 2.1**.: _Under the conditions in Lemma 2.3, for given initial values \(G_{1,0}\) and \(G_{2,0}\) and \(t>1/\delta\), the solutions of (11) satisfies_
\[\left|G_{1}^{(q)}(t)\right|\leq\frac{8}{\pi}\left((-1/\cos(\theta))t^{-q}e^{ \cos(\theta)}+2\theta\delta^{q}e^{qT}\right)|G_{1,0}|+\frac{32\left|m_{2}B_{ \alpha_{2}}^{-1}\right|}{\pi}\left((-1/\cos(\theta))t^{\alpha_{2}-q}e^{\cos( \theta)}+2\theta\delta^{q-\alpha_{2}}e^{qT}\right)|G_{2,0}|,\]
\[\left|G_{2}^{(q)}(t)\right|\leq\frac{32\left|m_{1}B_{\alpha_{1}}^{-1}\right|}{ \pi}\left((-1/\cos(\theta))t^{\alpha_{1}-q}e^{\cos(\theta)}+2\theta\delta^{q- \alpha_{1}}e^{qT}\right)|G_{1,0}|+\frac{8}{\pi}\left((-1/\cos(\theta))t^{-q}e ^{\cos(\theta)}+2\theta\delta^{q}e^{qT}\right)|G_{2,0}|,\]
_for \(q=0,1\), where \(G_{1}^{(q)}(t)\) and \(G_{2}^{(q)}(t)\) denote \(\frac{\partial^{q}}{\partial t^{q}}G_{1}(t)\) and \(\frac{\partial^{q}}{\partial t^{q}}G_{2}(t)\), respectively._
Theorem 2.1 reveals the weak singularity of the solution near the origin, which usually weakens the performance of time-marching schemes but, as we will see in what follows, has no influence on the CIMs.
## 3 The schemes and error estimates for the Feynman-Kac system
In this section, for the system (11), the CIMs with two kinds of contours, i.e., parabolic contour and hyperbolic contour, are given. With careful analysis of the analytical domain of the solution in frequency domain, the parameters used in the contours are determined. The error estimates and stability analysis are presented. To give the reference solution for verifying the effectiveness of the CIMs, a time marching scheme is also designed.
### The CIM schemes
Here we discuss two different integral contours [21] for the CIMs, which are parameterized by
\[\text{(Parabolic contour)}\ \ \Gamma_{1}:\ z(\phi)=\eta_{1}(i\phi+1)^{2},\ \ -\infty<\phi<\infty, \tag{18}\]
and
\[\text{(Hyperbolic contour)}\ \ \Gamma_{2}:\ z(\phi)=\eta_{2}(1+\sin(i\phi- \alpha)),\ \ -\infty<\phi<\infty, \tag{19}\]
where \(\eta_{1}\), \(\eta_{2}>0\) and \(\alpha>0\) are the parameters to be determined. With these, the solutions \(G_{1}(t)\), \(G_{2}(t)\) can be represented as the infinite integrals with respect to \(\phi\), i.e.,
\[G_{j}(t)=I_{j}:=\int_{-\infty}^{+\infty}v_{j}(t,\phi)d\phi,\quad j=1,2, \tag{20}\]
where
\[v_{j}(t,\phi):=\frac{1}{2\pi i}e^{z(\phi)t}\widetilde{G}_{j}(z(\phi))z^{ \prime}(\phi),\ \ j=1,2 \tag{21}\]
with \(\widehat{G}_{j}\) defined in (13).
Applying the trapezoidal rule to compute the integral (20) with uniform steps \(h_{m}\) (\(m=1,2\)) for \(\phi\), one can numerically get the approximate solutions \(G_{1,N}^{(m)}(t)\) and \(G_{2,N}^{(m)}(t)\). Since the contours \(\Gamma_{1}\) and \(\Gamma_{2}\) are symmetric with respect to the real axis and \(\widehat{G}_{j}(\overline{z(\phi)})=\overline{\widehat{G}_{j}(z(\phi))}\), \(j=1,2\), (20) can be approximated by
\[\begin{split} G_{1,N}^{(m)}(t)&\approx I_{1,h_{m},N}:=h_{m}\sum_{k=1-N}^{N-1}v_{1}(t,\phi_{k})=\frac{h_{m}}{\pi}\mathrm{Im}\left\{\sum_{k=0}^{N-1}e^{z_{k}t}\widehat{G}_{1,k}z_{k}^{\prime}\right\},\\ G_{2,N}^{(m)}(t)&\approx I_{2,h_{m},N}:=h_{m}\sum_{k=1-N}^{N-1}v_{2}(t,\phi_{k})=\frac{h_{m}}{\pi}\mathrm{Im}\left\{\sum_{k=0}^{N-1}e^{z_{k}t}\widehat{G}_{2,k}z_{k}^{\prime}\right\},\end{split} \tag{22}\]
where \(m=1,2\) denote the two different choices of the integral contours \(\Gamma_{1}\) and \(\Gamma_{2}\).
It can be seen from (22) that the numerical solution at the current time is computed without requiring any information from previous times.
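A minimal Python sketch of the scheme (22) with the parabolic contour \(\Gamma_{1}\) is given below (ours; the authors' computations were done in MATLAB). It evaluates the Laplace-space solution (17) at the contour nodes and sums the trapezoid rule over the full symmetric node set; the model data follow Example 1 of Section 4, except that \(\rho=1\) is an assumed value, and the interval parameters (44) are anticipated from Subsection 3.2.3. Whether the resulting contour respects the analyticity condition (24) should be checked as in Subsection 3.2.1.

```python
import numpy as np

# Model data as in Example 1: p = 2/3, b = 3/4, B^{-1} = 1, alpha = (0.6, 0.4),
# U(1) = 1, U(2) = 1.5, G_{1,0} = 0.55, G_{2,0} = 0.45.
p, b = 2/3, 3/4
m1, m2 = (1 - p)/(1 - p - b), (1 - b)/(1 - p - b)
B1, B2 = 1.0, 1.0
alpha1, alpha2 = 0.6, 0.4
U1, U2, rho = 1.0, 1.5, 1.0       # rho = 1.0 is an assumed value
G10, G20 = 0.55, 0.45

def G_hat(z):
    """Laplace-space solution (17) of the decoupled system (11)."""
    w1 = (z + rho*U1)**alpha1 - m1*B1
    w2 = (z + rho*U2)**alpha2 - m2*B2
    H = 1.0/(w1*w2 - m1*m2*B1*B2)                 # Eq. (14)
    Ha1, Ha2 = H*w2, H*w1                         # Eqs. (15)-(16)
    g1 = (Ha1*G10 - m2*B2*H*G20)*(z + rho*U1)**(alpha1 - 1)
    g2 = (Ha2*G20 - m1*B1*H*G10)*(z + rho*U2)**(alpha2 - 1)
    return g1, g2

def cim_two_states(t, t1, Lam, N=30, a=0.9875):
    """Trapezoid rule on the parabolic contour (18) with the interval
    parameters (44), tuned for t in [t1/Lam, t1]; returns (G1(t), G2(t))."""
    q = 1 + a - np.sqrt(a**2 + 2*a)
    s = np.sqrt(q**2*(1 - Lam) + 2*a*Lam*q)
    eta = np.pi*q*s/(q*(1 - Lam) + 2*a*Lam)*N/t1  # Eq. (44)
    h = s/q/N                                     # Eq. (44)
    phi = np.arange(-(N - 1), N)*h
    z = eta*(1j*phi + 1)**2
    dz = 2j*eta*(1j*phi + 1)
    g1, g2 = G_hat(z)
    w = h/(2j*np.pi)*np.exp(z*t)*dz
    return (w*g1).sum().real, (w*g2).sum().real

print(cim_two_states(t=100.0, t1=100.0, Lam=5.0))
```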
### Quadrature error
The key to ensuring the effectiveness and efficiency of the CIMs is to make all singular points of \(\widehat{G}_{j}\) lie to the left of \(\Gamma_{m}\) and to ensure that the integrand of the improper integrals (20) is analytic in a wide open strip; see, e.g., [21; 19].
#### 3.2.1 Determination of the open strip
Since \(z(\phi)\) is analytic for both integration contours (18) and (19), the analytic properties of \(v_{j}(t,\phi)\) in (21) are completely determined by \(\widehat{G}_{j}(z(\phi))\). According to the expression of \(\widehat{G}_{j}(z(\phi))\) in (17), the singularities mainly come from \(H(z)\).
Here we consider the case \(U(j)\geq 0\), \(j=1,2\), and denote \(C_{1}:=m_{1}B_{\alpha_{1}}^{-1}\), \(C_{2}:=m_{2}B_{\alpha_{2}}^{-1}\). Since \(C_{1}\neq 0\) and \(C_{2}\neq 0\), the analyticity domain of \(H(z)\) is determined by the next proposition.
**Proposition 3.1**.: _Let \(z\in\sum_{\theta}\). If_
\[\begin{cases}Re\left((z+\rho U(1))^{\alpha_{1}}\right)>2C_{1},&\text{or}\ \ \ \left\{\begin{aligned} &|Im\left((z+\rho U(1))^{\alpha_{1}}\right)|>|C_{1}|,\\ &|Im\left((z+\rho U(2))^{\alpha_{2}}\right)|>|C_{2}|,\end{aligned}\right.\end{cases} \tag{23}\]
_then \(H(z)\) is analytic._
See the proof in B.1.
**Remark 1**.: From (23), it can be further obtained that \(H(z)\) is analytic if
\[\mathrm{Re}(z)>d_{1}\ \ \mathrm{or}\ \ |\mathrm{Im}(z)|>d_{2}, \tag{24}\]
where \(d_{1}:=\max\{(2|C_{j}|/\cos(\alpha_{j}\pi/2))^{1/\alpha_{j}}-\mathrm{Re}(\rho U(j)),j=1,2\}\) and \(d_{2}:=\max\{(|C_{j}|/\cos(\alpha_{j}\pi/2))^{1/\alpha_{j}}-\mathrm{Im}(\rho U(j)),j=1,2\}\). One can see B.2 for more details.
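A small Python helper (ours, not the authors'; it assumes \(\rho U(j)\) is given numerically) that evaluates \(d_{1}\) and \(d_{2}\) of (24) can be used to check whether a candidate contour stays inside the analyticity region of \(H(z)\):

```python
import numpy as np

def analyticity_bounds(alphas, Cs, rhoUs):
    """d1 and d2 of Remark 1, Eq. (24): H(z) is analytic whenever
    Re(z) > d1 or |Im(z)| > d2.  alphas, Cs, rhoUs list the data per state j."""
    d1 = max((2*abs(C)/np.cos(a*np.pi/2))**(1/a) - np.real(rU)
             for a, C, rU in zip(alphas, Cs, rhoUs))
    d2 = max((abs(C)/np.cos(a*np.pi/2))**(1/a) - np.imag(rU)
             for a, C, rU in zip(alphas, Cs, rhoUs))
    return d1, d2

# Example 1 data: p = 2/3, b = 3/4, B^{-1} = 1, alpha = (0.6, 0.4), rho*U = (1.0, 1.5).
p, b = 2/3, 3/4
Cs = ((1 - p)/(1 - p - b), (1 - b)/(1 - p - b))
print(analyticity_bounds((0.6, 0.4), Cs, (1.0, 1.5)))
```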
Given open strip
\[S^{(1)}:=\{\phi=x+iy\in\mathbb{C}:-d<y<a<1,x\in\mathbb{R}\ \mathrm{and}\ a,d>0\}\]
or
\[S^{(2)}:=\{\phi=x+iy\in\mathbb{C}:-\alpha<y<\pi/2-\alpha-\delta,x\in\mathbb{R },0<\alpha,\delta<\pi/2\ \mathrm{and}\ \alpha<\pi/2-\delta\},\]
then the integration contour \(\Gamma_{1}\) or \(\Gamma_{2}\) maps it into a neighbourhood, defined as \(N^{(m)}:=\{z(\phi):\phi\in S^{(1)}\ \mathrm{or}\ S^{(2)}\}\). According to (24), there exists a domain \(D^{(m)}\) such that \(H(z)\) is analytic in the domain \(\mathbb{C}\setminus D^{(m)}\), shown in Figure 3.1 (_right_). Hence, for any \(z(\phi)\in N^{(m)}\subset\mathbb{C}\setminus D^{(m)}\), the integrand \(v_{j}(t,\phi)\) is analytic. Since the contour maps \(z(\phi)\) are holomorphic, once \(N^{(m)}\) is specified, the strips \(S^{(1)}\) and \(S^{(2)}\) can be determined accordingly.
For convergence considerations, we need to select the analyticity domain \(N^{(m)}\) as large as possible. Correspondingly, the optimal parameter \(a\) in \(S^{(1)}\) is \(a=1-\left[(d_{1}+(d_{1}^{2}+d_{2}^{2})^{1/2})/(2\eta_{1})\right]^{1/2}\) and the biggest value of \(d\) is given by (31); the optimal parameter \(\delta\) in \(S^{(2)}\) is \(\delta=\arctan\left(d_{2}/(\eta_{2}-d_{1})\right)\) and \(\alpha\) is determined by maximizing \(Q(\alpha)=\frac{\pi^{2}-2\pi\alpha-2\pi\delta}{A(\alpha)}\) (see Subsection 3.2.3). For more details, one can see B.3.
#### 3.2.2 Stability
With the preparations above, the integrands \(v_{j}(t,\phi)\) are analytic with respect to \(\phi\) in the open strips \(S^{(1)}\) and \(S^{(2)}\). Now, for the CIMs (22) with the integration contours \(\Gamma_{1}\) and \(\Gamma_{2}\), we have the following stability analyses.
**Lemma 3.1** ([6], Lemma 1).: _Take \(L(x):=1+\left|\ln(1-e^{-x})\right|\), \(x>0\), there hold_
\[\int_{0}^{+\infty}e^{-\gamma\cosh(x)}dx\leq L(\gamma),\ \gamma>0,\]
_and_
\[\int_{\sigma}^{+\infty}e^{-\gamma\cosh(x)}dx\leq(1+L(\gamma))e^{-\gamma\cosh( \sigma)},\ \gamma>0\ \mathrm{and}\ \sigma>0.\]
**Lemma 3.2**.: _Let \(v_{1}(t,\phi)\) be defined in (21) and \(z(\phi)\) defined in (18) with \(\phi=x+ir\in S^{(1)}\). For \(t>0\) and \(B>0\) defined in (B.9), we have_
\[|v_{1}(t,\phi)|\leq\frac{Be^{\eta_{1}t}}{\pi}\left(|G_{1,0}|+\eta_{1}^{- \alpha_{2}}|G_{2,0}|\right)e^{-\eta_{1}tx^{2}}\ \ \forall\ \phi\in S^{(1)}. \tag{25}\]
_Let \(z(\phi)\) be defined in (19) with \(\phi=x+iy\in S^{(2)}\). For \(t>0\) and \(0<y<\min\{\alpha,\pi/2-\alpha-\delta\}\), there holds_
\[|v_{1}(t,\phi)|\leq\frac{8e^{\eta_{2}t}}{\pi}\sqrt{\frac{1+\sin(\alpha+y)}{1-\sin(\alpha+y)}}\left(|G_{1,0}|+\frac{4\left\lvert m_{1}B_{\alpha_{1}}^{-1}\right\rvert}{\eta_{2}^{\alpha_{2}}(1-\sin(\alpha+y))^{\alpha_{2}}}|G_{2,0}|\right)e^{-\eta_{2}t\sin\alpha\cosh x}\ \ \forall\ \phi\in S^{(2)}. \tag{26}\]
_For the integrand \(v_{2}(t,\phi)\), similar estimates can be obtained._
The proof of Lemma 3.2 is given in B.4.
It can be seen from Lemma 3.2 that the decay of the integrand mainly depends on the size of the real part of \(z(\phi)\). The stability results of the CIMs are given as follows.
**Theorem 3.1**.: _Let \(G_{1,N}^{(1)}(t)\) be defined in (22) with uniform steps \(h_{1}\). For \(t>0\) and \(B>0\) defined in (B.9), there holds_
\[\left\lvert G_{1,N}^{(1)}(t)\right\rvert\leq B\sqrt{\frac{\eta_{1}}{\pi}} \left(\eta_{1}^{-1}|G_{1,0}|+\eta_{1}^{-(1+\alpha_{2})}|G_{2,0}|\right)t^{-1/2 }e^{\eta_{1}t}.\]
_Let \(G_{1,N}^{(2)}(t)\) be defined in (22) with uniform steps \(h_{2}\). For \(t>0\) and \(0<y<\min\{\alpha,\pi/2-\alpha-\delta\}\), there holds_
\[\left\lvert G_{1,N}^{(2)}(t)\right\rvert\leq\frac{16}{\pi}\sqrt{\frac{1+\sin (\alpha+y)}{1-\sin(\alpha+y)}}\left(|G_{1,0}|+\frac{4\left\lvert m_{1}B_{ \alpha_{1}}^{-1}\right\rvert}{\eta_{2}^{\alpha_{2}}(1-\sin(\alpha+y))^{\alpha _{2}}}|G_{2,0}|\right)L(\eta_{2}t\sin(\alpha))e^{\eta_{2}t}.\]
_For \(\left\lvert G_{2,N}^{(m)}(t)\right\rvert\), \(m=1,2\), similar estimates can be obtained._
The proof of Theorem 3.1 can be found in B.5. Notice that \(L(x)\) given in Lemma 3.1 is decreasing; \(L(x)\to 1\) as \(x\to\infty\) and \(L(x)\sim|\ln x|\) as \(x\to 0^{+}\). It can be seen that the CIMs are unconditionally stable with respect to the initial values.
Figure 3.1: The schematic diagrams of open strip \(S^{(m)}\) and the corresponding neighbourhood \(N^{(m)}\); (_left_) is the horizontal open strip \(S^{(m)}\); (_right_) is the neighborhood \(N^{(m)}\) obtained by mapping the horizontal strip \(S^{(m)}\) by the conformal transformation \(z(\phi)\).
#### 3.2.3 Error estimates and determination of the optimal parameters
Here, we will determine the optimal parameters in the parabolic integration contour \(\Gamma_{1}\) and the hyperbolic contour \(\Gamma_{2}\), respectively, and prove that the CIMs (22) for the Feynman-Kac equation with two internal states have spectral accuracy.
Denote \(I_{j,h_{m}}:=h_{m}\sum_{k=-\infty}^{\infty}v_{j}(t,kh_{m})\). Then the error of the CIMs can be expressed as
\[E_{j,N}^{(m)}:=\big{|}I_{j}-I_{j,h_{m},N}\big{|}\leq DE^{(m)}+TE^{(m)},\ \ m,j=1,2,\]
where \(TE^{(m)}=\big{|}I_{j,h_{m}}-I_{j,h_{m},N}\big{|}\) is the truncation error and \(DE^{(m)}:=DE_{+}^{(m)}+DE_{-}^{(m)}:=|I_{j}-I_{j,h_{m}}|\) is the discretization error. The standard estimates of the discretization error are shown in the following lemma.
**Lemma 3.3** ([21] Theorem 2.1).: _Let \(w=u+iv\), with \(u\) and \(v\) real. Suppose \(g(w)\) is analytic in the strip \(-d<v<c\), for some \(c>0\), \(d>0\), with \(g(w)\to 0\) uniformly as \(|w|\to\infty\) in this strip. Suppose further that for some \(M_{+}>0\), \(M_{-}>0\) the function \(g(w)\) satisfies_
\[\int_{-\infty}^{\infty}|g(u+ir)|du\leq M_{+},\ \ \int_{-\infty}^{\infty}|g(u-is)|du\leq M_{-},\]
_for all \(0<r<c\), \(0<s<d\). Then, for any \(\tau>0\), \(I_{\tau}\) defined as \(I_{\tau}=\tau\sum_{k=-\infty}^{\infty}g(x_{k})\)\((x_{k}=k\tau)\) exists and satisfies_
\[|I-I_{\tau}|\leq DE_{+}+DE_{-},\]
_where \(I=\int_{-\infty}^{\infty}g(x)dx\),_
\[DE_{+}=\frac{M_{+}}{e^{2\pi c/\tau}-1},\ \ DE_{-}=\frac{M_{-}}{e^{2\pi d/\tau}-1}.\]
Based on this lemma, for the CIMs with the integral contour \(\Gamma_{1}\) and \(\Gamma_{2}\), we have the following error estimates.
**Theorem 3.2**.: _Let \(G_{1}(t)\) and \(G_{1,N}^{(1)}(t)\) be the solutions of (11) and (22) with uniform step-size \(h_{1}\). Given \(a>0\), for \(t>0\), \(h_{1}=\mathcal{O}(1/N)\), there holds_
\[\big{|}G_{1}(t)-G_{1,N}^{(1)}(t)\big{|}\leq B\sqrt{\frac{\eta_{1}}{\pi}}t^{-1/2}\left(\eta_{1}^{-1}|G_{1,0}|+\eta_{1}^{-(1+\alpha_{2})}|G_{2,0}|\right)\left(\frac{e^{\eta_{1}t}}{e^{2\pi a/h_{1}}-1}+\frac{e^{\pi^{2}/(\eta_{1}th_{1}^{2})}}{e^{2\pi^{2}/(\eta_{1}th_{1}^{2})-2\pi/h_{1}}-1}+\frac{e^{\eta_{1}t}}{e^{\eta_{1}t(Nh_{1})^{2}}}\right).\]
_Let \(G_{1,N}^{(2)}(t)\) be the solution of (22) with uniform step-size \(h_{2}\). For \(t>0\), \(h_{2}=\mathcal{O}(1/N)\), and \(0<y<\min\{\alpha,\pi/2-\alpha-\delta\}\), we have_
\[\big{|}G_{1}(t)-G_{1,N}^{(2)}(t)\big{|} \leq\frac{16}{\pi}\sqrt{\frac{1+\sin(\alpha+y)}{1-\sin(\alpha+y)}}\left\{|G_{1,0}|+\frac{4\left|m_{1}B_{\alpha_{1}}^{-1}\right|}{\eta_{2}^{\alpha_{2}}(1-\sin(\alpha+y))^{\alpha_{2}}}|G_{2,0}|\right\}e^{\eta_{2}t}\] \[\times\left(\frac{L(\eta_{2}t\sin(\alpha))}{e^{2\pi(\pi/2-\alpha-\delta)/h_{2}}-1}+\frac{L(\eta_{2}t\sin(\alpha))}{e^{2\pi\alpha/h_{2}}-1}+\frac{1+L(\eta_{2}t\sin(\alpha))}{e^{\eta_{2}t\sin\alpha\cosh(Nh_{2})}}\right).\]
_For \(\big{|}G_{2}(t)-G_{2,N}^{(m)}(t)\big{|}\), \(m=1,2\), there are similar estimates hold._
Proof.: Let \(v_{j}(t,\phi)\), \(j=1,2\) be defined as in (21) with \(z(\phi)\) defined as (18) and (19), respectively.
To prove the first part of the theorem, we need to perform error estimates for \(DE_{+}^{(1)}\), \(DE_{-}^{(1)}\), and \(TE^{(1)}\).
Estimate of \(DE_{+}^{(1)}\): By (25) in Lemma 3.2, for \(t>0\), \(0<a<1\), there is
\[\int_{-\infty}^{\infty}|v_{1}(t,x+ia)|dx \leq\frac{\eta_{1}e^{\eta_{1}t}}{\pi}\left(\frac{16}{\eta_{1}(1-a)}|G_{1,0}|+\frac{64\left|m_{1}B_{\alpha_{1}}^{-1}\right|}{\eta_{1}^{1+\alpha_{2}}(1-a)^{1+2\alpha_{2}}}|G_{2,0}|\right)\int_{-\infty}^{\infty}e^{-\eta_{1}tx^{2}}dx\] \[\leq B\sqrt{\frac{\eta_{1}}{\pi}}t^{-1/2}e^{\eta_{1}t}\left(\eta_{1}^{-1}|G_{1,0}|+\eta_{1}^{-(1+\alpha_{2})}|G_{2,0}|\right)<\infty,\]
where \(B\) is defined in (B.9). Then, according to Lemma 3.2, we have
\[DE_{+}^{(1)}\leq B\sqrt{\frac{\eta_{1}}{\pi}}t^{-1/2}e^{\eta_{1}t}\pi t\left(\eta_{1}^{-1}|G_{1,0}|+\eta_{1}^{-(1+\alpha_{2})}|G_{2,0}|\right)\frac{1}{e^{2\pi a/h_{1}}-1} \tag{27}\]
Thus,
\[DE_{+}^{(1)}=\mathcal{O}\left(e^{\eta_{1}t-2\pi a/h_{1}}\right),\quad h_{1}\to 0. \tag{28}\]
Estimate of \(DE_{-}^{(1)}\): Similarly, for \(t>0\), there holds
\[\int_{-\infty}^{\infty}|v_{1}(t,x-id)|dx \leq\frac{\eta_{1}e^{\eta_{1}(1+d)^{2}t}}{\pi}\left(16\eta_{1}^{-1 }|G_{1,0}|+64\left|m_{1}B_{\alpha_{1}}^{-1}\right|\eta_{1}^{-(1+\alpha_{2})}|G_ {2,0}|\right)\int_{-\infty}^{\infty}e^{-\eta_{1}tx^{2}}dx\] \[\leq B\sqrt{\frac{\eta_{1}}{\pi}}t^{-1/2}e^{\eta_{1}(1+d)^{2}t} \pi t\left(\eta_{1}^{-1}|G_{1,0}|+\eta_{1}^{-(1+\alpha_{2})}|G_{2,0}|\right)<\infty.\]
By Lemma 3.2, we have
\[DE_{-}^{(1)}\leq B\sqrt{\frac{\eta_{1}}{\pi}}t^{-1/2}e^{\eta_{1}(1+d)^{2}t} \left(\eta_{1}^{-1}|G_{1,0}|+\eta_{1}^{-(1+\alpha_{2})}|G_{2,0}|\right)\frac {1}{e^{2\pi d/h_{1}}-1}, \tag{29}\]
and
\[DE_{-}^{(1)}=\mathcal{O}\left(e^{\eta_{1}(1+d)^{2}t-2\pi d/h_{1}}\right), \quad h_{1}\to 0. \tag{30}\]
Denote \(\omega(d)=\eta_{1}(1+d)^{2}t-2\pi d/h_{1}\). Then the 'best' choice of \(d\) is obtained by setting \(\omega^{\prime}(d)=0\) (see [21]), which yields
\[d=\frac{\pi}{\eta_{1}th_{1}}-1. \tag{31}\]
Now, we have
\[DE_{-}^{(1)}\leq B\sqrt{\frac{\eta_{1}}{\pi}}t^{-1/2}\left(\eta_{1}^{-1}|G_{1,0}|+\eta_{1}^{-(1+\alpha_{2})}|G_{2,0}|\right)\frac{e^{\pi^{2}/(\eta_{1}th_{1}^{2})}}{e^{2\pi^{2}/(\eta_{1}th_{1}^{2})-2\pi/h_{1}}-1}\]
and
\[DE_{-}^{(1)}=\mathcal{O}\left(e^{-\pi^{2}/(\eta_{1}th_{1}^{2})+2\pi/h_{1}} \right),\quad h_{1}\to 0.\]
Estimate of \(TE^{(1)}\): By the definition of \(I_{1,h_{1}}\) and \(I_{1,h_{1},N}\), we deduce
\[\left|I_{1,h_{1}}-I_{1,h_{1},N}\right|\leq h_{1}\sum_{k=N}^{\infty}(|v_{1}(t, kh_{1})|+|v_{1}(t,-kh_{1})|)\leq 2h_{1}\sum_{k=N}^{\infty}|v_{1}(t,kh_{1})|.\]
According to (25) in Lemma 3.2, we have
\[h_{1}\sum_{k=N}^{\infty}|v_{1}(t,kh_{1})| \leq\frac{\eta_{1}Be^{\eta_{1}t}}{\pi}\left(\eta_{1}^{-1}|G_{1,0} |+\eta_{1}^{-(1+\alpha_{2})}|G_{2,0}|\right)\int_{Nh_{1}}^{\infty}e^{-x^{2}\eta _{1}t}dx \tag{32}\] \[\leq B\sqrt{\frac{\eta_{1}}{\pi}}t^{-1/2}\left(\eta_{1}^{-1}|G_{1,0}|+\eta_{1}^{-(1+\alpha_{2})}|G_{2,0}|\right)e^{\eta_{1}t\left(1-(Nh_{1})^{2 }\right)}.\]
Thus,
\[TE^{(1)}\leq B\sqrt{\frac{\eta_{1}}{\pi}}t^{-1/2}\left(\eta_{1}^{-1}|G_{1,0} |+\eta_{1}^{-(1+\alpha_{2})}|G_{2,0}|\right)e^{\eta_{1}t\left(1-(Nh_{1})^{2}\right)} \tag{33}\]
and
\[TE^{(1)}=\mathcal{O}\left(e^{\eta_{1}t\left(1-(Nh_{1})^{2}\right)}\right), \quad N\to+\infty. \tag{34}\]
Combining (27), (29), and (33) results in the first part of the theorem.
As for the hyperbolic integral contour with the strip \(S^{(2)}\), similar to the previous analyses, we will directly give the corresponding results in the sequence.
Estimate of \(DE_{+}^{(2)}\): By (26) in Lemma 3.2 and Lemma 3.3, for \(t>0\) and \(0<y<\pi/2-\alpha-\delta\), there holds
\[DE_{+}^{(2)}\leq\frac{16}{\pi}\sqrt{\frac{1+\sin(\alpha+y)}{1-\sin(\alpha+y)}} \left(|G_{1,0}|+\frac{4\left|m_{1}B_{\alpha_{1}}^{-1}\right|}{\eta_{2}^{\alpha_ {2}}(1-\sin(\alpha+y))^{\alpha_{2}}}|G_{2,0}|\right)\frac{e^{\eta_{2}t}L( \eta_{2}t\sin(\alpha))}{e^{2\pi(\pi/2-\alpha-\delta)/h_{2}}-1}, \tag{35}\]
\[DE^{(2)}_{+}=\mathcal{O}\left(e^{\eta_{2}t-2\pi(\pi/2-\alpha-\delta)/h_{2}}\right), \quad h_{2}\to 0. \tag{36}\]
Estimate of \(DE^{(2)}_{-}\): Similarly, there are
\[DE^{(2)}_{-}\leq\frac{16}{\pi}\sqrt{\frac{1+\sin(\alpha+y)}{1-\sin(\alpha+y)}} \left(|G_{1,0}|+\frac{4\left|m_{1}B^{-1}_{\alpha_{1}}\right|}{\eta_{2}^{\alpha _{2}}(1-\sin(\alpha+y))^{\alpha_{2}}}|G_{2,0}|\right)\frac{e^{\eta_{2}t}L(\eta _{2}t\sin(\alpha))}{e^{2\pi\alpha/h_{2}}-1}, \tag{37}\]
and
\[DE^{(2)}_{-}=\mathcal{O}\left(e^{\eta_{2}t-2\pi\alpha/h_{2}}\right),\quad h_{ 2}\to 0. \tag{38}\]
Estimate of \(TE^{(2)}\): From (26) and Lemma 3.1, there holds
\[h_{2}\sum_{N}^{\infty}|v_{1}(t,kh_{2})| \leq\frac{8}{\pi}\sqrt{\frac{1+\sin(\alpha+y)}{1-\sin(\alpha+y) }}\left(|G_{1,0}|+\frac{4\left|m_{1}B^{-1}_{\alpha_{1}}\right|}{\eta_{2}^{ \alpha_{2}}(1-\sin(\alpha+y))^{\alpha_{2}}}|G_{2,0}|\right)\int_{Nh_{2}}^{+ \infty}e^{\eta_{2}t-\eta_{2}t\sin\alpha\cosh x}dx\] \[\leq\frac{8}{\pi}(1+L(\eta_{2}t\sin\alpha))\sqrt{\frac{1+\sin( \alpha+y)}{1-\sin(\alpha+y)}}\left(|G_{1,0}|+\frac{4\left|m_{1}B^{-1}_{ \alpha_{1}}\right|}{\eta_{2}^{\alpha_{2}}(1-\sin(\alpha+y))^{\alpha_{2}}}|G_{ 2,0}|\right)e^{\eta_{2}t-\eta_{2}t\sin(\alpha)\cosh(Nh_{2})}.\]
Thus
\[TE^{(2)}\leq\frac{16}{\pi}(1+L(\eta_{2}t\sin\alpha))\sqrt{\frac{1+\sin(\alpha+ y)}{1-\sin(\alpha+y)}}\left(|G_{1,0}|+\frac{4\left|m_{1}B^{-1}_{\alpha_{1}} \right|}{\eta_{2}^{\alpha_{2}}(1-\sin(\alpha+y))^{\alpha_{2}}}|G_{2,0}| \right)e^{\eta_{2}t-\eta_{2}t\sin(\alpha)\cosh(Nh_{2})}, \tag{39}\]
and
\[TE^{(2)}=\mathcal{O}\left(e^{\eta_{2}t(1-\sin\alpha\cosh(h_{2}N))}\right), \quad N\rightarrow+\infty. \tag{40}\]
Together with (35), (37) and (39), we finish the second part of the theorem.
Similar estimates on \(\left|G_{2}(t)-G^{(m)}_{2,N}(t)\right|\) can be obtained.
Next, we determine the optimal parameters in the integral contours \(\Gamma_{1}\) and \(\Gamma_{2}\). Reference [21] has provided a technical method to determine these parameters under ideal conditions, i.e., asymptotically balancing \(DE^{(m)}_{+}\), \(DE^{(m)}_{-}\), and \(TE^{(m)}\), \(m=1,2\). Based on this idea, we will optimize these parameters in our situations.
For \(\Gamma_{1}\), by asymptotically balancing \(DE^{(1)}_{+}\), \(DE^{(1)}_{-}\), and \(TE^{(1)}\), it needs
\[\eta_{1}t-\frac{2\pi a}{h_{1}}=\frac{-\pi^{2}}{\eta_{1}th_{1}^{2}}+\frac{2\pi }{h_{1}}=\eta_{1}t\left(1-(Nh_{1})^{2}\right). \tag{41}\]
Then we obtain the optimal parameters used in the contour \(\Gamma_{1}\), i.e.,
\[\eta_{1}^{*}=\frac{\pi\sqrt{2aq^{3}}}{2a}\frac{N}{t},\ \ \text{and}\ h_{1}^{*}= \frac{\sqrt{2aq}}{q}\frac{1}{N}, \tag{42}\]
where \(q:=1+a-\sqrt{a^{2}+2a}\) with \(1/4<a<1\). With these optimal parameters, the corresponding convergence order of the CIM with the parabolic contour \(\Gamma_{1}\) are
\[E^{(1)}_{j,N}=\mathcal{O}\left(e^{-\left(\pi\sqrt{2aq}-\pi\sqrt{2aq^{3}}/(2a)\right)N}\right),\ j=1,2.\]
Furthermore, as mentioned in [21], the parameter \(\eta_{1}^{*}\) in (42) depends on time \(t\), which means that the integral contour changes with time \(t\). The ideal situation is that we find a fixed integral contour that satisfies the condition (24) which does not change over time \(t\). Reviewing the error estimates \(DE^{(1)}_{+}\), \(DE^{(1)}_{-}\), and \(TE^{(1)}\), it can be found that \(DE^{(1)}_{+}\) and \(DE^{(1)}_{-}\) increase with \(t\), and \(TE\) decreases with \(t\). If we want a small absolute error on the interval \(t_{0}\leq t\leq t_{1}=T\), \(t_{0}>0\), we can modify (41) as
\[\eta_{1}t_{1}-\frac{2\pi a}{h_{1}}=\frac{-\pi^{2}}{\eta_{1}t_{1}h_{1}^{2}}+ \frac{2\pi}{h_{1}}=\eta_{1}t_{0}\left(1-(Nh_{1})^{2}\right). \tag{43}\]
Denote \(\Lambda=t_{1}/t_{0}\), which increases from 1. After solving (43), we have
\[\eta_{1}=\frac{\pi q\sqrt{q^{2}(1-\Lambda)+2a\Lambda q}}{q(1-\Lambda)+2a\Lambda} \frac{N}{t_{1}},\quad h_{1}=\frac{\sqrt{q^{2}(1-\Lambda)+2a\Lambda q}}{q}\frac{ 1}{N}; \tag{44}\]
and the corresponding convergence order of the CIMs are
\[E^{(1)}_{j,N}=\mathcal{O}\left(e^{-P(\Lambda)N}\right),\ \ j=1,2,\ N\to+\infty, \tag{45}\]
where \(P(\Lambda)=\frac{\pi(q-2a)\sqrt{q^{2}(1-\Lambda)+2a\Lambda q}}{q(\Lambda-1)-2a\Lambda}\).
For \(\Gamma_{2}\), similar to the previous analyses of \(\Gamma_{1}\), when \(t\in[t_{0},t_{0}\Lambda]\), \(t_{0}>0,T=t_{1}=t_{0}\Lambda\), the discretization error \(DE^{(2)}_{-}\) increases with \(t\) and the truncation error \(TE^{(2)}\) decreases with \(t\). Thus, \(DE^{(2)}_{-}\) and \(TE^{(2)}\) can be modified as
\[DE^{(2)\ *}_{-}=\mathcal{O}\left(e^{\eta_{2}t_{1}-2\pi a/h_{2}}\right),\quad TE ^{(2)\ *}=\mathcal{O}\left(e^{\eta_{2}t_{0}(1-\sin\alpha\cosh(h_{2}N))}\right).\]
By asymptotically balancing \(DE^{(2)}_{+}\), \(DE^{(2)\ *}_{-}\) and \(TE^{(2)\ *}\), there holds
\[\frac{-2\pi(\pi/2-\alpha-\delta)}{h_{2}}=\eta_{2}t_{1}-\frac{2\pi\alpha}{h_{2 }}=\eta_{2}t_{0}(1-\sin\alpha\cosh(h_{2}N)).\]
Solving it results in
\[h_{2}=\frac{A(\alpha)}{N},\ \ \eta_{2}=\frac{4\pi\alpha-\pi^{2}+2\pi\delta}{A( \alpha)}\frac{N}{t_{1}},\ \ \text{and}\ A(\alpha)=\cosh^{-1}\left(\frac{(\pi-2\alpha-2\delta)\Lambda+(4 \alpha-\pi+2\delta)}{(4\alpha-\pi+2\delta)\sin(\alpha)}\right), \tag{46}\]
and
\[E^{(2)}_{j,N}=\mathcal{O}\left(e^{-Q(\alpha)N}\right),\ j=1,2,\ N\to+\infty,\]
where \(Q(\alpha)=\frac{\pi^{2}-2\pi\alpha-2\pi\delta}{A(\alpha)}\). For fixed \(\Lambda\) and \(\delta\), the optimal parameter \(\alpha\) can be obtained by maximizing \(Q(\alpha)\), which is similar to the results of [21].
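The maximization of \(Q(\alpha)\) has no closed form, but for fixed \(\Lambda\) and \(\delta\) it is a one-dimensional problem that can be solved numerically. The following Python sketch (ours; a simple grid search rather than the procedure of [21]) returns the resulting hyperbolic-contour parameters of (46):

```python
import numpy as np

def hyperbolic_parameters(N, t1, Lam, delta=0.1123):
    """Choose (alpha, h2, eta2) for the hyperbolic contour (19) by maximizing
    Q(alpha) = (pi^2 - 2*pi*alpha - 2*pi*delta)/A(alpha) over the admissible
    range (pi - 2*delta)/4 < alpha < pi/2 - delta, then apply Eq. (46)."""
    al = np.linspace((np.pi - 2*delta)/4 + 1e-6, np.pi/2 - delta - 1e-6, 20000)
    A = np.arccosh(((np.pi - 2*al - 2*delta)*Lam + (4*al - np.pi + 2*delta))
                   / ((4*al - np.pi + 2*delta)*np.sin(al)))
    Q = (np.pi**2 - 2*np.pi*al - 2*np.pi*delta)/A
    i = np.argmax(Q)
    alpha, A_opt = al[i], A[i]
    return alpha, A_opt/N, (4*np.pi*alpha - np.pi**2 + 2*np.pi*delta)/A_opt*N/t1

alpha, h2, eta2 = hyperbolic_parameters(N=30, t1=100.0, Lam=5.0)
print(alpha, h2, eta2)
```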
Through the above analyses, it can be found that the CIMs constructed in this paper with given optimal step-sizes and parameters have the convergence order of \(\mathcal{O}(e^{-cN})\). That is, the CIMs in our paper have spectral accuracy.
### The time-marching schemes
In this subsection, in order to verify the high numerical performance of the CIMs (22) with the determined integration contours \(\Gamma_{1}\) and \(\Gamma_{2}\), we also present another numerical method to solve (11), i.e., the time-marching schemes (TMs), see e.g. [20].
Let \(h\) be the discrete stepsize and \(t_{n}=nh,n=0,1,2,\cdots,M\), \(M=\frac{T}{h}\). Integrating both sides of (12) from 0 to \(t\) and letting \(t=t_{n+1}\), we get the integral form (C.1) and (C.2).
The specific TMs are shown in C, in which the following two formulas are used.
\[\begin{split}\int_{0}^{t}\mathfrak{D}_{s}^{1-\alpha_{j}}G_{j}(\rho,s)ds=&\frac{1}{\Gamma(\alpha_{j})}\int_{0}^{t}\frac{\partial}{\partial s}\int_{0}^{s}\frac{e^{-(s-\tau)\rho U(j)}}{(s-\tau)^{1-\alpha_{j}}}G_{j}(\rho,\tau)d\tau ds+\frac{\rho U(j)}{\Gamma(\alpha_{j})}\int_{0}^{t}\int_{0}^{s}\frac{e^{-(s-\tau)\rho U(j)}}{(s-\tau)^{1-\alpha_{j}}}G_{j}(\rho,\tau)d\tau ds\\ =&\frac{1}{\Gamma(\alpha_{j})}\left(\int_{0}^{t}\frac{e^{-(t-\tau)\rho U(j)}}{(t-\tau)^{1-\alpha_{j}}}G_{j}(\rho,\tau)d\tau+(\rho U(j))^{1-\alpha_{j}}\gamma(\alpha_{j},1)\int_{0}^{t}G_{j}(\rho,\tau)d\tau\right),\ j=1,2,\end{split} \tag{47}\]
where \(\gamma(\alpha_{j},1)\), \(j=1,2\), denote the incomplete gamma function with the different parameters \(\alpha_{j}\). For the first term on the right-hand side of (47), we perform linear interpolation on the integrand, that is
\[\begin{split}\int_{t_{n}}^{t_{n+1}}e^{-\rho U(j)(t_{n+1}-\tau) }(t_{n+1}-\tau)^{\alpha_{j}-1}G_{j}(\rho,\tau)d\tau&\approx \int_{t_{n}}^{t_{n+1}}(t_{n+1}-\tau)^{\alpha_{j}-1}\frac{G_{j}(.,t_{n+1})(\tau-t _{n})+e^{-\rho U(j)h}G_{j}(.,t_{n})(t_{n+1}-\tau)}{h}d\tau\\ &=\frac{h^{\alpha_{j}}}{\alpha_{j}(\alpha_{j}+1)}\left(\alpha_{j} e^{-\rho U(j)h}G_{j}(.,t_{n})+G_{j}(.,t_{n+1})\right),\ \ j=1,2.\end{split}\]
Based on these results, after doing some calculations, the time-marching schemes of (11) are designed as in (C.5), (C.7).
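The closed-form weight above can be checked directly; the short Python snippet below (ours, with illustrative parameter values that are not taken from the paper) compares the weakly singular integral with the stated formula.

```python
import numpy as np
from scipy.integrate import quad

# Verify: int_0^h s^(alpha-1) * (G_{n+1}*(h - s) + exp(-rho*U*h)*G_n*s)/h ds
#         = h^alpha/(alpha*(alpha+1)) * (alpha*exp(-rho*U*h)*G_n + G_{n+1}),
# obtained from the substitution s = t_{n+1} - tau in the interpolated integral.
alpha, h, rhoU = 0.6, 0.05, 1.3          # illustrative values (assumed)
Gn, Gnp1 = 0.8, 0.7

numeric, _ = quad(lambda s: s**(alpha - 1)*(Gnp1*(h - s) + np.exp(-rhoU*h)*Gn*s)/h, 0.0, h)
closed = h**alpha/(alpha*(alpha + 1))*(alpha*np.exp(-rhoU*h)*Gn + Gnp1)
print(numeric, closed)                   # the two values agree to quadrature tolerance
```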
## 4 Numerical Results
In this section, we use two examples to evaluate the effectiveness of the CIM. We choose \(p=2/3\), \(b=3/4\), \(B_{\alpha_{1}}^{-1}=B_{\alpha_{2}}^{-1}=1\), and the optimal parameters are given in (44) and (46). Take \(a=0.9875\) and \(\delta=0.1123\) such that the contours \(\Gamma_{1}\) and \(\Gamma_{2}\) satisfy the condition (24). Here, we remark that the computing environment for all examples is _Intel(R) Core(TM) i7-7700 CPU @3.60GHz, MATLAB R2018a_.
### Example 1
This example is used to verify that the CIMs (22) have spectral accuracy. We choose the absolute error as a function of \(N\), i.e.,
\[error(N)=\max_{b_{1}\leq t\leq\Lambda_{0}}\left|G_{j}(.,t)-G_{j,N}^{(m)}(.,t) \right|,\ \ m,j=1,2, \tag{48}\]
where \(G_{j}(.,t)\) is the reference solution computed by the time-marching schemes with much small stepsize, and \(G_{j,N}^{(m)}(.,t)\) is the numerical solution obtained by the CIM with parabolic contour \(\Gamma_{1}\) and hyperbolic contour \(\Gamma_{2}\), respectively. Then, the absolute errors of the CIMs at different given times and the numerical solutions of the system are shown in Figure 4.1 and Figure 4.2.
Given a specific accuracy to be reached, the number of discrete points and the time cost of the CIMs and TMs are shown in Table 1, in which the parameters are consistent with those in Figure 4.2, and the numerical solutions obtained by the TMs with \(M=2^{12}\) are treated as the reference solutions.
One can see from Figure 4.1, Figure 4.2, and Table 1 that the CIMs with the parabolic and hyperbolic contours are very efficient and time-saving in solving the Feynman-Kac equation with two internal states, and that they achieve spectral accuracy.
Figure 4.1: The absolute errors at time \(t=100\) and the numerical solutions of the system (11); (_left_) and (_center_) are the absolute errors for the CIM with \(\Gamma_{1}\) and \(\Gamma_{2}\), respectively; The parameters are set as \(N=30\), \(\Lambda\)=5, \(\alpha_{1}=0.6\), \(\alpha_{2}=0.4\), \(U(1)=1\), \(U(2)=1.5\), and the initial values \(G_{1.0}=0.55\), \(G_{2.0}=0.45\); (_right_) are the numerical solutions obtained by using TMs with the same parameters and the number of discrete points \(M=2^{15}\) and \(T=100\).
Figure 4.2: The absolute errors at time \(t=3\) and the numerical solutions of the system (11); (_left_) and (_center_) show the absolute errors of the CIM with \(\Gamma_{1}\) and \(\Gamma_{2}\), respectively; the parameters are set as \(N=25\), \(\Lambda=5\), \(\alpha_{1}=0.82\), \(\alpha_{2}=0.59\), \(U(1)=0.89\), \(U(2)=0.68\), \(\rho=1.5\), and the initial values \(G_{1,0}=0.55\) and \(G_{2,0}=0.45\); (_right_) shows the numerical solutions obtained by using the TMs with the same parameters, the number of discrete points \(M=2^{12}\), and \(T=3\).
### Example 2
As a physical application, we calculate the average occupation time of each internal state by using the CIMs. In fact, the average occupation time of the first internal state can be calculated as the solution of the system (8) by taking
\[U[j(\tau)]=\begin{cases}1,\ j(\tau)=1,\\ 0,\ \text{else},\end{cases} \tag{49}\]
in the functional \(A=\int_{0}^{t}U[j(\tau)]d\tau\). Taking \(U(1)=0\) and \(U(2)=1\) instead yields the average occupation time of the second internal state. Take \(\alpha_{1}=\alpha_{2}\). According to the theoretical results presented in [23], the average occupation time of the first state is
\[\langle A\rangle\sim\frac{\varepsilon_{1}}{\varepsilon_{1}+\varepsilon_{2}}t,\ \ \ \ \ \text{for}\ \ \text{large}\ \ t, \tag{50}\]
where \(\varepsilon_{1}\) and \(\varepsilon_{2}\) are the initial distributions (replacing the coefficient by \(\varepsilon_{2}/(\varepsilon_{1}+\varepsilon_{2})\) for the second internal state). In this paper, \(\langle A\rangle\) is calculated by using the fact
\[\langle A\rangle=-\frac{\partial}{\partial\rho}g(\rho,t)\Big{|}_{\rho=0} \tag{51}\]
with \(g(\rho,t)=G_{1}(\rho,t)+G_{2}(\rho,t)\). More specifically, after differentiating (17) w.r.t. \(\rho\) and setting \(\rho=0\), we solve the resulting system by the CIMs. One can check that the analyticity domain of the new system is no smaller than that of the original one, so the discussions in the above sections still apply. See the simulation results in Figure 4.3, which further verify the theoretical predictions.
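The identity (51) is the usual Laplace-transform relation \(\langle A\rangle=-\partial_{\rho}\,\mathbb{E}[e^{-\rho A}]\big|_{\rho=0}\). The following toy sketch (not the paper's CIM-based computation; the samples of \(A\) are purely illustrative) checks it numerically with a forward difference in \(\rho\).

```python
# A small toy illustration (not from the paper) of <A> = -d/d rho g(rho, t)|_{rho=0},
# where g(rho, t) = E[exp(-rho * A(t))].  Monte Carlo samples stand in for A(t).
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(0.0, 3.0, size=200_000)      # hypothetical samples standing in for A(t)

def g(rho):
    """Empirical Laplace transform g(rho) = E[exp(-rho*A)]."""
    return np.mean(np.exp(-rho * A))

drho = 1e-4
mean_via_derivative = -(g(drho) - g(0.0)) / drho   # forward difference at rho = 0
print(mean_via_derivative, A.mean())               # both approximate <A> = 1.5
```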
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{Accuracy} & \multirow{2}{*}{PDF} & \multicolumn{2}{c}{CIM-PC} & \multicolumn{2}{c}{CIM-HC} & \multicolumn{2}{c}{TMs} \\ \cline{3-8} & & N & time(s) & N & time(s) & M & time(s) \\ \hline \multirow{2}{*}{\(10^{-2}\)} & \(G_{1}(.,t)\) & 5 & 4.1903e-03 & 3 & 1.0031e-02 & 10 & 2.0175e-03 \\ & \(G_{2}(.,t)\) & 5 & 3.4310e-03 & 3 & 1.2785e-02 & 24 & 5.2115e-03 \\ \hline \multirow{2}{*}{\(10^{-3}\)} & \(G_{1}(.,t)\) & 9 & 1.2757e-03 & 7 & 1.6547e-03 & 10 & 2.2867e-03 \\ & \(G_{2}(.,t)\) & 9 & 1.1832e-03 & 7 & 2.0558e-03 & 283 & 1.6356e-01 \\ \hline \multirow{2}{*}{\(10^{-4}\)} & \(G_{1}(.,t)\) & 13 & 3.0482e-03 & 8 & 1.2326e-03 & 630 & 6.7298e-01 \\ & \(G_{2}(.,t)\) & 14 & 2.3619e-03 & 8 & 1.4250e-03 & 1953 & 5.9451e+00 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The CPU time cost of the CIMs and TMs for the system (11) at time \(t=3\).
## 5 Conclusions
The Feynman-Kac equation with two internal states describes the distribution of functionals of a particle with two internal states. This paper presents the regularity analyses for the system and builds the CIMs together with numerical stability analyses and error estimates. With the reference solutions provided by the time-marching schemes, the CIMs are shown to achieve spectral accuracy, low computational complexity, and small memory requirements. As a physical application, we use the CIMs to calculate the average occupation times of the first and second internal states of the stochastic process with two internal states.
## Appendix A The proofs of the results presented in Section 2.2
The techniques of the proofs are inspired by [3].
### The proof of Lemma 2.1
Proof.: For \(z\in\Sigma_{\theta,\delta}\), there is
\[\frac{|(z+\rho U(j))^{-\alpha_{j}}|}{|z|^{-\alpha_{j}}}=\left(\frac{|z|}{|z+ \rho U(j)|}\right)^{\alpha_{j}}\leq\left(\frac{|z|}{|z|-|\rho U(j)|}\right)^{ \alpha_{j}}.\]
Since \(\delta>2|\rho U(j)|\), we have
\[|z|-|\rho U(j)|\geq|z|-\frac{1}{2}|z|=\frac{1}{2}|z|.\]
Then
\[\frac{|(z+\rho U(j))^{-\alpha_{j}}|}{|z|^{-\alpha_{j}}}\leq(2)^{\alpha_{j}} \leq 2.\]
### The proof of Lemma 2.2
Proof.: For \(z\in\Sigma_{\theta,\delta}\), let
\[\left((z+\rho U(j))^{\alpha_{j}}-m_{j}B_{\alpha_{j}}^{-1}\right)u_{j}=v_{j}, \;\;j=1,2.\]
Then
\[u_{j}=\left((z+\rho U(j))^{\alpha_{j}}\right)^{-1}v_{j}+m_{j}B_{\alpha_{j}}^{- 1}\left((z+\rho U(j))^{\alpha_{j}}\right)^{-1}u_{j},\;\;j=1,2.\]
Taking modulus on both sides of the above equality and by Lemma 2.1, we have
\[\left|u_{j}\right|\leq\left|((z+\rho U(j))^{\alpha_{j}})^{-1}\right|\left|v_{ j}\right|+\left|m_{j}B_{\alpha_{j}}^{-1}\left((z+\rho U(j))^{\alpha_{j}} \right)^{-1}\right|\left|u_{j}\right|\leq 2|z|^{-\alpha_{j}}\left|v_{j}\right|+2 \left|m_{j}B_{\alpha_{j}}^{-1}\right|\left|z|^{-\alpha_{j}}\left|u_{j}\right|,j=1,2.\]
According to the condition that \(\delta\) satisfies, there holds \(2\left|m_{j}B_{\alpha_{j}}^{-1}\right|\left|z\right|^{-\alpha_{j}}<\frac{1}{2}\). Thus,
\[\left|u_{j}\right|\leq 4|z|^{-\alpha_{j}}\left|v_{j}\right|,\;\;j=1,2,\]
which implies the desired estimates.
### The proof of Lemma 2.3
Proof.: For all \(z\in\Sigma_{\theta,\delta}\), let
\[\left\{\left((z+\rho U(1))^{\alpha_{1}}-m_{1}B_{\alpha_{1}}^{-1}\right)\left( (z+\rho U(2))^{\alpha_{2}}-m_{2}B_{\alpha_{2}}^{-1}\right)-m_{1}m_{2}B_{\alpha _{1}}^{-1}B_{\alpha_{2}}^{-1}\right\}u=v.\]
After simple calculations, we have
\[u= \left(\left(z+\rho U(1)\right)^{\alpha_{1}}-m_{1}B_{\alpha_{1}}^{ -1}\right)^{-1}\left((z+\rho U(2))^{\alpha_{2}}-m_{2}B_{\alpha_{2}}^{-1} \right)^{-1}v\] \[+m_{1}m_{2}B_{\alpha_{1}}^{-1}B_{\alpha_{2}}^{-1}\left((z+\rho U (1))^{\alpha_{1}}-m_{1}B_{\alpha_{1}}^{-1}\right)^{-1}\left((z+\rho U(2))^{ \alpha_{2}}-m_{2}B_{\alpha_{2}}^{-1}\right)^{-1}u.\]
Taking modulus on both sides of the above equality and by Lemma 2.2, there is
\[|u|\leq 16|z|^{-\alpha_{1}-\alpha_{2}}|v|+16\left|m_{1}m_{2}B_{\alpha_{1}}^{-1}B_{ \alpha_{2}}^{-1}\right|z|^{-\alpha_{1}-\alpha_{2}}|u|.\]
According to the condition that \(\delta\) satisfies, there holds \(16\left|m_{1}m_{2}B_{\alpha_{1}}^{-1}B_{\alpha_{2}}^{-1}\right|\left|z\right|^ {-\alpha_{1}-\alpha_{2}}<\frac{1}{2}\). Thus,
\[|u|\leq 32|z|^{-\alpha_{1}-\alpha_{2}}|v|,\]
which implies the desired estimate on \(H(z)\).
For \(H_{\alpha_{1}}(z)\), similarly, let
\[\left((z+\rho U(1))^{\alpha_{1}}-m_{1}B_{\alpha_{1}}^{-1}\right)u=v+m_{1}m_{2} B_{\alpha_{1}}^{-1}B_{\alpha_{2}}^{-1}\left((z+\rho U(2))^{\alpha_{2}}-m_{2}B_{ \alpha_{2}}^{-1}\right)^{-1}u.\]
There exists
\[u=m_{1}m_{2}B_{\alpha_{1}}^{-1}B_{\alpha_{2}}^{-1}\left((z+\rho U(1))^{\alpha _{1}}-m_{1}B_{\alpha_{1}}^{-1}\right)^{-1}\left((z+\rho U(2))^{\alpha_{2}}-m_ {2}B_{\alpha_{2}}^{-1}\right)^{-1}u+\left((z+\rho U(1))^{\alpha_{1}}-m_{1}B_{ \alpha_{1}}^{-1}\right)^{-1}v.\]
Taking modulus on both sides of the above equality and based on Lemma 2.2, there holds
\[|u|\leq 4|z|^{-\alpha_{1}}|v|+16\left|m_{1}m_{2}B_{\alpha_{1}}^{-1}B_{ \alpha_{2}}^{-1}\right|\left|z\right|^{-\alpha_{1}-\alpha_{2}}|u|.\]
According to the previous estimate, there is \(16\left|m_{1}m_{2}B_{\alpha_{1}}^{-1}B_{\alpha_{2}}^{-1}\right|\left|z\right|^ {-\alpha_{1}-\alpha_{2}}<\frac{1}{2}\). Thus,
\[|u|\leq 8|z|^{-\alpha_{1}}|v|,\]
which implies the desired estimate on \(H_{\alpha_{1}}(z)\). By analogy, one can obtain the estimate on \(H_{\alpha_{2}}(z)\).
### The proof of Lemma 2.4
Proof.: According to the condition that \(\delta\) satisfies and Lemma 2.1 and Lemma 2.3, the conclusions can be similarly obtained.
### The proof of Theorem 2.1
Proof.: By the inverse Laplace transform, the solution in (11) is
\[G_{1}(t)=\frac{1}{2\pi i}\int_{\Gamma_{\theta,\delta}}e^{zt}\left(H_{\alpha_{1}}(z)(z+\rho U(1))^{\alpha_{1}-1}G_{1,0}-m_{2}B_{\alpha_{2}}^{-1}H(z)(z+\rho U(1))^{\alpha_{1}-1}G_{2,0}\right)dz.\]
Taking \(q\)-th \((q=0,1)\) derivative leads to
\[\frac{\partial^{q}}{\partial t^{q}}G_{1}(t)=\frac{1}{2\pi i}\int_{\Gamma_{\theta,\delta}}z^{q}e^{zt}\left(H_{\alpha_{1}}(z)(z+\rho U(1))^{\alpha_{1}-1}G_{1,0}-m_{2}B_{\alpha_{2}}^{-1}H(z)(z+\rho U(1))^{\alpha_{1}-1}G_{2,0}\right)dz.\]
From Lemma 2.4, there exists
\[\begin{split}\left|\frac{\partial^{q}}{\partial t^{q}}G_{1}(t)\right|&=\left|\frac{1}{2\pi i}\int_{\Gamma_{\theta,\delta}}z^{q}e^{zt}\left(H_{\alpha_{1}}(z)(z+\rho U(1))^{\alpha_{1}-1}G_{1,0}-m_{2}B_{\alpha_{2}}^{-1}H(z)(z+\rho U(1))^{\alpha_{1}-1}G_{2,0}\right)dz\right|\\ &\leq\frac{8}{\pi}\int_{\Gamma_{\theta,\delta}}e^{\mathrm{Re}(z)t}|z|^{q-1}|G_{1,0}||dz|+\frac{32\left|m_{2}B_{\alpha_{2}}^{-1}\right|}{\pi}\int_{\Gamma_{\theta,\delta}}e^{\mathrm{Re}(z)t}|z|^{q-\alpha_{2}-1}|G_{2,0}||dz|\\ &\leq\frac{8}{\pi}\left(\int_{\delta}^{+\infty}e^{rt\cos(\theta)}r^{q-1}dr+\int_{-\theta}^{\theta}e^{\delta t\cos(\varphi)}\delta^{q}d\varphi\right)|G_{1,0}|\\ &\quad+\frac{32\left|m_{2}B_{\alpha_{2}}^{-1}\right|}{\pi}\left(\int_{\delta}^{+\infty}e^{rt\cos(\theta)}r^{q-\alpha_{2}-1}dr+\int_{-\theta}^{\theta}e^{\delta t\cos(\varphi)}\delta^{q-\alpha_{2}}d\varphi\right)|G_{2,0}|.\end{split}\]
Let \(rt=s\). Since \(t>1/\delta\), there holds
\[\begin{split}\left|\frac{\partial^{q}}{\partial t^{q}}G_{1}(t)\right|\leq&\frac{8}{\pi}\left(t^{-q}\int_{1}^{+\infty}e^{s\cos(\theta)}s^{q-1}ds+\delta^{q}\int_{-\theta}^{\theta}e^{\delta t\cos(\varphi)}d\varphi\right)|G_{1,0}|\\ &+\frac{32\left|m_{2}B_{\alpha_{2}}^{-1}\right|}{\pi}\left(t^{\alpha_{2}-q}\int_{1}^{+\infty}e^{s\cos(\theta)}s^{q-\alpha_{2}-1}ds+\delta^{q-\alpha_{2}}\int_{-\theta}^{\theta}e^{\delta t\cos(\varphi)}d\varphi\right)|G_{2,0}|\\ \leq&\frac{8}{\pi}\left(\left(-1/\cos(\theta)\right)t^{-q}e^{\cos(\theta)}+2\theta\delta^{q}e^{\delta T}\right)|G_{1,0}|\\ &+\frac{32\left|m_{2}B_{\alpha_{2}}^{-1}\right|}{\pi}\left((-1/\cos(\theta))t^{\alpha_{2}-q}e^{\cos(\theta)}+2\theta\delta^{q-\alpha_{2}}e^{\delta T}\right)|G_{2,0}|.\end{split}\]
Similarly,
\[\left|G_{2}^{(q)}(t)\right|\leq\frac{32\left|m_{1}B_{\alpha_{1}}^{-1}\right|} {\pi}\left((-1/\cos(\theta))t^{\alpha_{1}-q}e^{\cos(\theta)}+2\theta\delta^{q -\alpha_{1}}e^{\delta T}\right)|G_{1,0}|+\frac{8}{\pi}\left((-1/\cos(\theta)) t^{-q}e^{\cos(\theta)}+2\theta\delta^{q}e^{\delta T}\right)|G_{2,0}|.\]
The proof is completed.
## Appendix B The proofs of the results presented in Section 3
### The proof of Proposition 3.1
Let the denominator of \(H(z)\) be zero, i.e.,
\[\left((z+\rho U(1))^{\alpha_{1}}-C_{1}\right)\left((z+\rho U(2))^{\alpha_{2}} -C_{2}\right)-C_{1}C_{2}=0. \tag{13}\]
Denote \(u_{1}+iv_{1}:=(z+\rho U(1))^{\alpha_{1}}\), \(u_{2}+iv_{2}:=(z+\rho U(2))^{\alpha_{2}}\). Then (13) can be rewritten as
\[\left((u_{1}-C_{1})(u_{2}-C_{2})-v_{1}v_{2}-C_{1}C_{2}\right)+i\left((u_{1}-C_ {1})v_{2}+(u_{2}-C_{2})v_{1}\right)=0. \tag{14}\]
In the sequel, we prove that if (23) holds, (14) does not hold.
We divide the proof into the following cases:
**Case I:**: For the case \(u_{1}>2C_{1}\) and \(u_{2}>2C_{2}\), if \(v_{1}v_{2}\leq 0\), then \((u_{1}-C_{1})(u_{2}-C_{2})-v_{1}v_{2}-C_{1}C_{2}\neq 0\); otherwise, \((u_{1}-C_{1})v_{2}+(u_{2}-C_{2})v_{1}\neq 0\).
**Case II:**: For the case \(|v_{1}|>|C_{1}|\) and \(|v_{2}|>|C_{2}|\),
* when \(u_{1}\leq 2C_{1}\) and \(u_{2}\leq 2C_{2}\): If \((u_{1}-C_{1})(u_{2}-C_{2})>0\), for \(v_{1}v_{2}>0\), there is \((u_{1}-C_{1})v_{2}+(u_{2}-C_{2})v_{1}\neq 0\); for \(v_{1}v_{2}<0\), we have \((u_{1}-C_{1})(u_{2}-C_{2})-v_{1}v_{2}-C_{1}C_{2}\neq 0\). If \((u_{1}-C_{1})(u_{2}-C_{2})=0\), \((u_{1}-C_{1})v_{2}+(u_{2}-C_{2})v_{1}\neq 0\). If \((u_{1}-C_{1})(u_{2}-C_{2})<0\), for \(v_{1}v_{2}>0\), there is \((u_{1}-C_{1})(u_{2}-C_{2})-v_{1}v_{2}-C_{1}C_{2}\neq 0\); for \(v_{1}v_{2}<0\), we have \((u_{1}-C_{1})v_{2}+(u_{2}-C_{2})v_{1}\neq 0\).
* when \((u_{1}-2C_{1})(u_{2}-2C_{2})\leq 0\): If \((u_{1}-C_{1})(u_{2}-C_{2})>0\), for \(v_{1}v_{2}>0\), there is \((u_{1}-C_{1})v_{2}+(u_{2}-C_{2})v_{1}\neq 0\); for \(v_{1}v_{2}<0\), we have \((u_{1}-C_{1})(u_{2}-C_{2})-v_{1}v_{2}-C_{1}C_{2}\neq 0\). If \((u_{1}-C_{1})(u_{2}-C_{2})=0\), there is \((u_{1}-C_{1})(u_{2}-C_{2})-v_{1}v_{2}-C_{1}C_{2}\neq 0\). If \((u_{1}-C_{1})(u_{2}-C_{2})<0\), for \(v_{1}v_{2}>0\), there is \((u_{1}-C_{1})(u_{2}-C_{2})-v_{1}v_{2}-C_{1}C_{2}\neq 0\).
To sum up, when (23) holds, \(H(z)\) is analytic.
### The proof of (24)
For the proof of (24), we split it into the following two cases:
**Case I:**: Let \(\mathrm{Re}(z+\rho U(j))>0\), i.e., \(\mathrm{Re}(z)>-\mathrm{Re}(\rho U(j))\). For fixed \(0<\alpha_{j}<1\), there are \(\theta_{j}:=\arg(z+\rho U(j))\in(-\frac{\pi}{2},\frac{\pi}{2})\) and \(\alpha_{j}\theta_{j}\in(-\frac{\alpha_{j}\pi}{2},\frac{\alpha_{j}\pi}{2})\). Further, we have \(\mathrm{Re}\left((z+\rho U(j))^{\alpha_{j}}\right)=|z+\rho U(j)|^{\alpha_{j}}\cos(\alpha_{j}\theta_{j})>|z+\rho U(j)|^{\alpha_{j}}\cos(\frac{\alpha_{j}\pi}{2})\). If \(|z+\rho U(j)|^{\alpha_{j}}\cos(\frac{\alpha_{j}\pi}{2})>2|C_{j}|\), that is, \(|z+\rho U(j)|>(2|C_{j}|/\cos(\alpha_{j}\pi/2))^{1/\alpha_{j}}\), then \(\mathrm{Re}\left((z+\rho U(j))^{\alpha_{j}}\right)>2|C_{j}|\); since \(|z+\rho U(j)|\geq\mathrm{Re}(z+\rho U(j))=\mathrm{Re}(z)+\mathrm{Re}(\rho U(j))\), this is guaranteed whenever \(\mathrm{Re}(z)>(2|C_{j}|/\cos(\alpha_{j}\pi/2))^{1/\alpha_{j}}-\mathrm{Re}(\rho U(j))\). Hence, denote
\[d_{1}^{(j)}:=\left(2|C_{j}|/\cos(\alpha_{j}\pi/2)\right)^{1/\alpha_{j}}- \mathrm{Re}(\rho U(j)),\;\;j=1,2.\]
Once \(\mathrm{Re}(z)>d_{1}:=\max\{d_{1}^{(1)},d_{1}^{(2)}\}\), there holds \(\mathrm{Re}\left((z+\rho U(j))^{\alpha_{j}}\right)>2|C_{j}|\), \(j=1,2\). By (23), \(H(z)\) is analytic.
**Case II:**: By (23), \(H(z)\) is analytic if \(|\mathrm{Im}((z+\rho U(j))^{\alpha_{j}})|>|C_{j}|\) and \(|\mathrm{Re}\left((z+\rho U(j))^{\alpha_{j}}\right)|\leq 2|C_{j}|\) at the same time. So, if \(|\mathrm{Re}\left((z+\rho U(j))^{\alpha_{j}}\right)|\leq 2|C_{j}|\) and \(|z+\rho U(j)|>(\sqrt{5}|C_{j}|)^{1/\alpha_{j}}\), then \(|\mathrm{Im}((z+\rho U(j))^{\alpha_{j}})|=\left(|(z+\rho U(j))^{\alpha_{j}}|^{2}-\mathrm{Re}\left((z+\rho U(j))^{\alpha_{j}}\right)^{2}\right)^{1/2}>\left(5|C_{j}|^{2}-4|C_{j}|^{2}\right)^{1/2}=|C_{j}|\), and \(H(z)\) is analytic. With this, let \(|z+\rho U(j)|\geq|\mathrm{Im}(z+\rho U(j))|\geq\left||\mathrm{Im}(z)|-|\mathrm{Im}(\rho U(j))|\right|>(\sqrt{5}|C_{j}|)^{1/\alpha_{j}}\), and denote
\[d_{2}^{(j)}:=\left(\sqrt{5}|C_{j}|\right)^{1/\alpha_{j}}+|\mathrm{Im}(\rho U (j))|,\;\;j=1,2.\]
Then, for \(|\mathrm{Im}(z)|>d_{2}:=\max\{d_{2}^{(1)},d_{2}^{(2)}\}\), there is \(|\mathrm{Im}((z+\rho U(j))^{\alpha_{j}})|>|C_{j}|\), and \(H(z)\) is analytic.
Above all, if \(z\) satisfies the conditions in (24), then \(H(z)\) is analytic.
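To make the thresholds concrete, the following minimal sketch (with assumed parameter values; \(C_{j}\) here stands for \(m_{j}B_{\alpha_{j}}^{-1}\)) evaluates \(d_{1}\) and \(d_{2}\) from the formulas above.

```python
# A minimal sketch (assumed parameter values, not the authors' code) evaluating the
# thresholds d_1 and d_2 delimiting the region (24) outside of which H(z) is analytic.
import numpy as np

alpha = np.array([0.6, 0.4])              # alpha_1, alpha_2 (hypothetical)
C = np.array([2/3, 3/4])                  # C_j = m_j * B_{alpha_j}^{-1} (hypothetical)
rhoU = np.array([1.5, 2.25])              # rho * U(j), taken real here

d1 = np.max((2 * np.abs(C) / np.cos(alpha * np.pi / 2))**(1 / alpha) - np.real(rhoU))
d2 = np.max((np.sqrt(5) * np.abs(C))**(1 / alpha) + np.abs(np.imag(rhoU)))
print(d1, d2)   # Re(z) > d1 or |Im(z)| > d2 guarantees analyticity of H(z)
```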
### Determination of the open strips \(S^{(1)}\) and \(S^{(2)}\)
We first consider the strip \(S^{(1)}\), which corresponds to the integral contour \(\Gamma_{1}\), as shown in Figure 1 (_center_). Denote \(\phi_{+}:=x+ia\in S\) with \(a>0\). Then
\[z(\phi_{+})=\eta_{1}\left((1-a)^{2}-x^{2}\right)+2i\eta_{1}x(1-a),\] (B.3)
\(z(\phi_{+})=z(x+ia)\) is the left boundary of \(N^{(m)}\), which can be expressed as the parabola
\[u=\eta_{1}\left((1-a)^{2}-\frac{v^{2}}{4\eta_{1}^{2}(1-a)^{2}}\right),\] (B.4)
if denoting \(z=u+iv\). As \(a\) increases from \(0\) to \(1\), the parabola (B.3) closes and reduces to the negative real axis. Hence, the left boundary of \(N_{r}\) determines the maximum value of \(a\); requiring the parabola (B.4) to pass through the point \((d_{1},d_{2})\) yields \(a=1-\sqrt{\frac{d_{1}+\sqrt{d_{1}^{2}+d_{2}^{2}}}{2\eta_{1}}}\).
Denote \(\phi_{-}:=x-id\) with \(d>0\). The image of this horizontal line is
\[z(\phi_{-})=\eta_{1}\left((1+d)^{2}-x^{2}\right)+2i\eta_{1}x(1+d).\] (B.5)
As \(d\) moves away from \(0\), the parabola (B.5) widens and moves to the right. The optimal \(d\) and \(\eta_{1}\) are determined in (31) and (44), respectively.
Next, consider the strip \(S^{(2)}\), which corresponds to the integral contour \(\Gamma_{2}\), as shown in Figure 1 (_right_). From the expression of \(\Gamma_{2}\), the image of the horizontal line \(\phi=x+iy\) is
\[z(\phi)=\eta_{2}\left(1-\sin(\alpha+y)\cosh(x)\right)+i\eta_{2}\cos(\alpha+y)\sinh(x),\] (B.6)
which can be expressed as the hyperbola
\[\left(\frac{\eta_{2}-u}{\sin(\alpha+y)}\right)^{2}-\left(\frac{v}{\cos(\alpha+y )}\right)^{2}=\eta_{2}^{2},\] (B.7)
if denoting \(z=u+iv\). As \(y\) increases from \(0\) to \(\pi/2-\alpha\), the left branch of the hyperbola (B.6) closes and degenerates into the negative real axis, while, as \(y\) decreases from \(0\) to \(-\alpha\), the hyperbola widens and becomes a vertical line. The minimum value of \(\delta\) can be obtained by taking \(z(\phi_{+}):=z(x+i(\pi/2-\alpha-\delta))\) as the left boundary of \(N^{(m)}\). More specifically, \(z(\phi_{+})\) can be expressed as a hyperbola with the asymptotes
\[v=\pm\tan(\delta)(\eta_{2}-u),\] (B.8)
if denoting \(z=u+iv\). Let one of the asymptotes, \(v=\tan(\delta)(\eta_{2}-u)\), pass through the fixed point \((d_{1},d_{2})\), which results in \(\delta=\arctan\left(\frac{d_{2}}{\eta_{2}-d_{1}}\right)\). Besides, the optimal parameters \(\alpha\) and \(\eta_{2}\) are determined by maximizing \(Q(\alpha)=\frac{\pi^{2}-2\alpha-2\alpha}{A(\alpha)}\) (see Subsection 3.2.3 and (46)). Then, the strip \(S^{(2)}\) is determined.
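The strip half-widths described above reduce to two one-line formulas; the short sketch below (illustrative values only; \(d_{1},d_{2},\eta_{1},\eta_{2}\) are assumed) evaluates them.

```python
# A short sketch (illustrative values only, not the authors' code) computing the strip
# half-widths above: a for the parabolic contour Gamma_1 and the minimal delta for the
# hyperbolic contour Gamma_2, from the thresholds (d_1, d_2) and scales eta_1, eta_2.
import numpy as np

d1, d2 = 0.8, 1.2          # hypothetical thresholds from (24)
eta1, eta2 = 3.0, 4.0      # hypothetical contour scale parameters

a = 1.0 - np.sqrt((d1 + np.sqrt(d1**2 + d2**2)) / (2.0 * eta1))
delta = np.arctan(d2 / (eta2 - d1))
print(a, delta)            # upper half-width of S^(1) and lower bound on delta for S^(2)
```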
### The proof of Lemma 3.2
Proof.: For the first part of the lemma, as \(z(\phi)=\eta_{1}(i\phi+1)^{2}\) and \(z^{\prime}(\phi)=i2\eta_{1}(i\phi+1)\), from the expression of \(v_{1}(t,\phi)\), there is
\[v_{1}(t,\phi)=\frac{\eta_{1}}{\pi}e^{\eta_{1}(i\phi+1)^{2}t}(1+i\phi)\widehat{G }_{1}\left(\eta_{1}(1+i\phi)^{2}\right)\ \ \forall\ \phi\in S^{(1)}.\]
Choose \(\phi_{+}=x+iy\in S^{(1)}\), \(0<y<a<1\), from the upper half plane of the strip \(S^{(1)}\), for \(t>0\), there holds
\[v_{1}(t,x+iy)=\frac{\eta_{1}e^{\eta_{1}\left((1-y)^{2}-x^{2}\right)t}}{\pi}e^{ i2\eta_{1}x(1-y)}(1-y+ix)\widehat{G}_{1}\left(\eta_{1}(1-y+ix)^{2}\right).\]
Taking the modulus of the left and right sides of the above formula leads to
\[\left|v_{1}(t,x+iy)\right|\leq\frac{\eta_{1}e^{\eta_{1}(1-y)^{2}t}}{\pi}e^{-x ^{2}\eta_{1}t}\left|(1-y+ix)\widehat{G}_{1}\left(\eta_{1}(1-y+ix)^{2}\right) \right|.\]
Denote \(l=1-y\), \(l\in(1-a,1)\). From Lemma 2.4, there holds \(\left|\widehat{G}_{1}(z)\right|\leq 16|z|^{-1}|G_{1,0}|+64\left|m_{1}B_{\alpha_{1} }^{-1}\right||z|^{-1-\alpha_{2}}|G_{2,0}|\). With these, we have
\[\left|(l+ix)\widehat{G}_{1}\left(\eta_{1}(l+ix)^{2}\right)\right|\leq\left( \frac{16|l+ix|}{|\eta_{1}(l+ix)^{2}|}|G_{1,0}|+\frac{64\left|m_{1}B_{\alpha_{ 1}}^{-1}\right|l+ix|}{|\eta_{1}(l+ix)^{2}|^{1+\alpha_{2}}}|G_{2,0}|\right).\]
Since
\[\frac{|l+ix|}{|\eta_{1}(l+ix)^{2}|}\leq\frac{1}{\eta_{1}(1-a)}\ \text{and}\ \frac{|l+ ix|}{|\eta_{1}(l+ix)^{2}|^{1+\alpha_{2}}}\leq\frac{1}{\eta_{1}^{1+\alpha_{2}}(1-a)^{ 1+2\alpha_{2}}},\]
we have
\[\left|v_{1}(t,x+ir)\right|\leq \frac{\eta_{1}e^{\eta_{1}t}}{\pi}\left(\frac{16}{\eta_{1}(1-a)}| G_{1,0}|+\frac{64\left|m_{1}B_{\alpha_{1}}^{-1}\right|}{\eta_{1}^{1+\alpha_{2}} (1-a)^{1+2\alpha_{2}}}|G_{2,0}|\right)e^{-\eta_{1}tx^{2}}\] \[\leq \frac{e^{\eta_{1}t}}{\pi}\text{max}\left\{16(1-a)^{-1},64\left|m _{1}B_{\alpha_{1}}^{-1}\right|(1-a)^{-(1+2\alpha_{2})}\right\}\left(|G_{1,0}| +\eta_{1}^{-\alpha_{2}}|G_{2,0}|\right)e^{-\eta_{1}tx^{2}}.\]
Choose \(\phi_{-}=x-iy\in S^{(1)}\), \(-d<-y\), from the lower half plane of \(S^{(1)}\), for \(t>0\), there holds
\[\left|v_{1}(t,x-iy)\right| \leq\frac{\eta_{1}e^{\eta_{1}(1+d)^{2}t}}{\pi}\left(16\eta_{1}^{ -1}|G_{1,0}|+64\left|m_{1}B_{\alpha_{1}}^{-1}\right|\eta_{1}^{-(1+\alpha_{2}) }|G_{2,0}|\right)e^{-\eta_{1}tx^{2}}\] \[\leq\frac{e^{\eta_{1}t}}{\pi}\text{max}\left\{16e^{\left(2d+d^{2 }\right)\eta_{1}t},64\left|m_{1}B_{\alpha_{1}}^{-1}\right|e^{\left(2d+d^{2} \right)\eta_{1}t}\right\}\left(|G_{1,0}|+\eta_{1}^{-\alpha_{2}}|G_{2,0}|\right) e^{-\eta_{1}tx^{2}}.\]
Therefore, for \(\phi\in S^{(1)}\), by denoting
\[B:=\text{max}\left\{16(1-a)^{-1},64\left|m_{1}B_{\alpha_{1}}^{-1}\right|(1-a)^ {-(1+2\alpha_{2})},16e^{\left(2d+d^{2}\right)\eta_{1}t},64\left|m_{1}B_{\alpha _{1}}^{-1}\right|e^{\left(2d+d^{2}\right)\eta_{1}t}\right\},\] (B.9)
Figure 1: _Diagrammatic sketch for the condition (24) and the left boundary of \(N^{(m)}\) for different integral contours. (left) is a domain determined by (24); (center) is to determine the parameters of the open strip \(S^{(1)}\) when the CIM is with the parabolic contour \(\Gamma_{1}\); (right) is to determine the parameters of the open strip \(S^{(2)}\) when the CIM is with the hyperbolic contour \(\Gamma_{2}\)._
we have
\[|v_{1}(t,\phi)|\leq\frac{Be^{\eta_{1}t}}{\pi}\left(|G_{1,0}|+\eta_{1}^{-\alpha_{2}}|G_{2,0}|\right)e^{-\eta_{1}tx^{2}}.\]
For the case of \(S^{(2)}\), the proof is similar to the previous one. Note that \(v_{1}(t,\phi)\) is analytic in the strip \(S^{(2)}\) and \(z^{\prime}(\phi)=i\eta_{2}\cos(i\phi-\alpha)\). By choosing \(\phi_{+}=x+iy\in S^{(2)}\), \(0<y<\pi/2-\delta-\alpha\), from the upper half plane of the strip \(S^{(2)}\), for \(t>0\), there holds
\[v_{1}(t,x+iy)=\frac{\eta_{2}}{2\pi}e^{\eta_{2}(1-\sin(\alpha+y-ix))t}\cos( \alpha+y-ix)\widehat{G}_{1}(\eta_{2}(1-\sin(\alpha+y-ix))).\]
Denote \(l^{\prime}=\alpha+y\), \(l^{\prime}\in(\alpha,\pi/2-\delta)\). After taking the modulus of the left and right sides of the above formula, we get
\[|v_{1}(t,x+ir)|\leq\frac{\eta_{2}}{2\pi}e^{\eta_{2}(1-\sin l^{\prime}\cosh x)t}\left|\cos(l^{\prime}-ix)\widehat{G}_{1}(\eta_{2}(1-\sin(l^{\prime}-ix)))\right|.\]
Moreover, since
\[\left|\cos(l^{\prime}-ix)\widehat{G_{1}}\left(\eta_{2}(1-\sin(l^{\prime}-ix)) \right)\right|\leq\frac{16|\cos(l^{\prime}-ix)|}{|\eta_{2}(1-\sin(l^{\prime}- ix))|}|G_{1,0}|+\frac{64\left|m_{1}B_{\alpha_{1}}^{-1}\right|\left|\cos(l^{ \prime}-ix)\right|}{|(\eta_{2}(1-\sin(l^{\prime}-ix)))^{1+\alpha_{2}}|}|G_{2, 0}|,\]
\[\frac{|\cos(l^{\prime}-ix)|}{|\eta_{2}(1-\sin(l^{\prime}-ix))|}\leq\frac{1}{ \eta_{2}}\sqrt{\frac{1+\sin l^{\prime}}{1-\sin l^{\prime}}},\]
and
\[\frac{|\cos(l^{\prime}-ix)|}{|(\eta_{2}(1-\sin(l^{\prime}-ix)))^{1+ \alpha_{2}}|} =\frac{1}{|(\eta_{2}(1-\sin(l^{\prime}-ix)))^{\alpha_{2}}|}\frac{ |\cos(l^{\prime}-ix)|}{|\eta_{2}(1-\sin(l^{\prime}-ix))|}\] \[\leq\frac{1}{\eta_{2}^{1+\alpha_{2}}(\cosh(x)-\sin l^{\prime})^{ \alpha_{2}}}\sqrt{\frac{1+\sin l^{\prime}}{1-\sin l^{\prime}}},\]
then, by \(\sin\alpha<\sin l^{\prime}=\sin(\alpha+y)\), there holds
\[|v_{1}(t,x+ir)|\leq\frac{8e^{\eta_{2}t}}{\pi}\sqrt{\frac{1+\sin(\alpha+y)}{1- \sin(\alpha+y)}}\left(|G_{1,0}|+\frac{4\left|m_{1}B_{\alpha_{1}}^{-1}\right|} {\eta_{2}^{\alpha_{2}}(1-\sin(\alpha+y))^{\alpha_{2}}}|G_{2,0}|\right)e^{- \eta_{2}t\sin\alpha\cosh x}.\]
Choosing \(\phi_{-}=x-iy\in S^{(2)}\), \(-\alpha<-y\), from the lower half plane of \(S^{(2)}\), for \(t>0\), we deduce
\[|v_{1}(t,x-iy)|\leq\frac{8e^{\eta_{2}t}}{\pi}\sqrt{\frac{1+\sin(\alpha+y)}{1- \sin(\alpha+y)}}\left(|G_{1,0}|+\frac{4\left|m_{1}B_{\alpha_{1}}^{-1}\right|} {\eta_{2}^{\alpha_{2}}(1-\sin(\alpha+y))^{\alpha_{2}}}|G_{2,0}|\right)e^{- \eta_{2}t\sin\alpha\cosh x}.\]
Therefore, for \(\phi\in S^{(2)}\), \(0<y<\min\{\alpha,\pi/2-\alpha-\delta\}\), there holds
\[|v_{1}(t,\phi)|\leq \frac{8e^{\eta_{2}t}}{\pi}\sqrt{\frac{1+\sin(\alpha+y)}{1-\sin( \alpha+y)}}\left(|G_{1,0}|+\frac{4\left|m_{1}B_{\alpha_{1}}^{-1}\right|}{\eta_ {2}^{\alpha_{2}}(1-\sin(\alpha+y))^{\alpha_{2}}}|G_{2,0}|\right)e^{-\eta_{2}t \sin\alpha\cosh x}.\]
Similar estimates for \(v_{2}(t,\phi)\) can be obtained on the strips \(S^{(1)}\) and \(S^{(2)}\), respectively.
### The proof of Theorem 3.1
Proof.: We firstly choose the integral contour \(z(\phi)\) as defined in (18) with the uniform step-size \(h_{1}\). For \(t>0\), by (25), there holds
\[\left|G_{1,N}^{(1)}(t)\right|\leq h_{1}\sum_{|k|\leq N-1}|v_{1}(t,kh_{1})| \leq\frac{2\eta_{1}Be^{\eta_{1}t}}{\pi}\left(\eta_{1}^{-1}|G_{1,0}|+\eta_{1}^ {-(1+\alpha_{2})}|G_{2,0}|\right)\int_{0}^{(N-1)h_{1}}e^{-x^{2}\eta_{1}t}dx.\]
Since
\[\int_{0}^{(N-1)h_{1}}e^{-x^{2}\eta_{1}t}dx\leq\int_{0}^{\infty}e^{-x^{2}\eta_{1}t} dx=\frac{\sqrt{\eta_{1}t\pi}}{2\eta_{1}t},\]
the result of the first part of the theorem holds.
For the remaining part of the theorem, we choose the hyperbolic integral contour \(z(\phi)\) defined in (19) with the uniform step-size \(h_{2}\). For \(t>0\) and \(h_{2}>0\), by (26), there holds
\[\left|G_{1,N}^{(2)}(t)\right| \leq h_{2}\sum_{|k|\leq N-1}|v_{1}(t,kh_{2})|\] \[\leq\frac{16e^{\eta_{2}t}}{\pi}\sqrt{\frac{1+\sin(\alpha+r)}{1- \sin(\alpha+r)}}\left(|G_{1,0}|+\frac{4\left|m_{1}B_{\alpha_{1}}^{-1}\right|}{ \eta_{2}^{\alpha_{2}}(1-\sin(\alpha+r))^{\alpha_{2}}}|G_{2,0}|\right)\int_{0}^ {(N-1)h_{2}}e^{-\eta_{2}t\sin\alpha\cosh x}dx.\]
Thanks to Lemma 3.1, there holds
\[\int_{0}^{(N-1)h_{2}}e^{-\eta_{2}\sin(\alpha)\cosh(x)t}dx\leq\int_{0}^{\infty} e^{-\eta_{2}\sin(\alpha)\cosh(x)t}dx\leq L(\eta_{2}t\sin(\alpha)).\]
Similarly, we can obtain the stability results about \(G_{2,N}^{(m)}(t)\), \(m=1,2\).
## Appendix C The time-marching schemes for the Feynman-Kac system (Subsection 3.3)
Here we provide the time-marching schemes for (12) directly. After integrating from \(0\) to \(t\) on the left and right sides of (12), according to (47), for \(t=t_{n+1},n=0,1,...,M\), there hold
\[\begin{split}G_{1}(t_{n+1})=&\frac{m_{1}B_{\alpha_{1}}^{-1}}{\Gamma(\alpha_{1})}\left(\int_{0}^{t_{n+1}}\frac{e^{-\rho U(1)(t_{n+1}-\tau)}}{(t_{n+1}-\tau)^{1-\alpha_{1}}}G_{1}(\tau)d\tau+(\rho U(1))^{1-\alpha_{1}}\gamma(\alpha_{1},1)\int_{0}^{t_{n+1}}G_{1}(\tau)d\tau\right)\\ &-\frac{m_{2}B_{\alpha_{2}}^{-1}}{\Gamma(\alpha_{2})}\left(\int_{0}^{t_{n+1}}\frac{e^{-\rho U(2)(t_{n+1}-\tau)}}{(t_{n+1}-\tau)^{1-\alpha_{2}}}G_{2}(\tau)d\tau+(\rho U(2))^{1-\alpha_{2}}\gamma(\alpha_{2},1)\int_{0}^{t_{n+1}}G_{2}(\tau)d\tau\right)\\ &-\rho U(1)\int_{0}^{t_{n+1}}G_{1}(\tau)d\tau+G_{1,0},\end{split}\] (C.1)
\[\begin{split}G_{2}(t_{n+1})=&\frac{m_{2}B_{\alpha_{2}}^{-1}}{\Gamma(\alpha_{2})}\left(\int_{0}^{t_{n+1}}\frac{e^{-\rho U(2)(t_{n+1}-\tau)}}{(t_{n+1}-\tau)^{1-\alpha_{2}}}G_{2}(\tau)d\tau+(\rho U(2))^{1-\alpha_{2}}\gamma(\alpha_{2},1)\int_{0}^{t_{n+1}}G_{2}(\tau)d\tau\right)\\ &-\frac{m_{1}B_{\alpha_{1}}^{-1}}{\Gamma(\alpha_{1})}\left(\int_{0}^{t_{n+1}}\frac{e^{-\rho U(1)(t_{n+1}-\tau)}}{(t_{n+1}-\tau)^{1-\alpha_{1}}}G_{1}(\tau)d\tau+(\rho U(1))^{1-\alpha_{1}}\gamma(\alpha_{1},1)\int_{0}^{t_{n+1}}G_{1}(\tau)d\tau\right)\\ &-\rho U(2)\int_{0}^{t_{n+1}}G_{2}(\tau)d\tau+G_{2,0},\end{split}\] (C.2)
where \(\gamma(\alpha_{j},1)\), \(j=1,2\), are the lower incomplete gamma functions defined above. With these, by the techniques mentioned in [4], one can obtain the following numerical schemes.
**Case I:** For \(n=0\),
\[G_{1}(t_{1})= \frac{m_{1}B_{\alpha_{1}}^{-1}}{\Gamma(\alpha_{1})}\left(\frac{h^ {\alpha_{1}}}{\alpha_{1}(\alpha_{1}+1)}\alpha_{1}e^{-\rho U(1)h}+\frac{h}{2} (\rho U(1))^{1-\alpha_{1}}\gamma(\alpha_{1},1)\right)G_{1,0}\] \[+\frac{m_{1}B_{\alpha_{1}}^{-1}}{\Gamma(\alpha_{1})}\left(\frac{h ^{\alpha_{1}}}{\alpha_{1}(\alpha_{1}+1)}+\frac{h}{2}(\rho U(1))^{1-\alpha_{1} }\gamma(\alpha_{1},1)\right)G_{1}(t_{1})\] \[-\frac{m_{2}B_{\alpha_{2}}^{-1}}{\Gamma(\alpha_{2})}\left(\frac{h ^{\alpha_{2}}}{\alpha_{2}(\alpha_{2}+1)}\alpha_{2}e^{-\rho U(2)h}+\frac{h}{2} (\rho U(2))^{1-\alpha_{2}}\gamma(\alpha_{2},1)\right)G_{2,0}\] (C.3) \[-\frac{m_{2}B_{\alpha_{2}}^{-1}}{\Gamma(\alpha_{2})}\left(\frac{h ^{\alpha_{2}}}{\alpha_{2}(\alpha_{2}+1)}+\frac{h}{2}(\rho U(2))^{1-\alpha_{2}} \gamma(\alpha_{2},1)\right)G_{2}(t_{1})\] \[-\frac{\rho U(1)h-2}{2}G_{1,0}-\frac{\rho U(1)h}{2}G_{1}(t_{1}),\]
\[G_{2}(t_{1})= \frac{m_{2}B_{\alpha_{2}}^{-1}}{\Gamma(\alpha_{2})}\left(\frac{h^{ \alpha_{2}}}{\alpha_{2}(\alpha_{2}+1)}\alpha_{2}e^{-\rho U(2)h}+\frac{h}{2}( \rho U(2))^{1-\alpha_{2}}\gamma(\alpha_{2},1)\right)G_{2,0}\] \[+\frac{m_{2}B_{\alpha_{2}}^{-1}}{\Gamma(\alpha_{2})}\left(\frac{h ^{\alpha_{2}}}{\alpha_{2}(\alpha_{2}+1)}+\frac{h}{2}(\rho U(2))^{1-\alpha_{2} }\gamma(\alpha_{2},1)\right)G_{2}(t_{1})\] \[-\frac{m_{1}B_{\alpha_{1}}^{-1}}{\Gamma(\alpha_{1})}\left(\frac{h ^{\alpha_{1}}}{\alpha_{1}(\alpha_{1}+1)}\alpha_{1}e^{-\rho U(1)h}+\frac{h}{2} (\rho U(1))^{1-\alpha_{1}}\gamma(\alpha_{1},1)\right)G_{1,0}\] (C.4) \[-\frac{m_{1}B_{\alpha_{1}}^{-1}}{\Gamma(\alpha_{1})}\left(\frac{h ^{\alpha_{1}}}{\alpha_{1}(\alpha_{1}+1)}+\frac{h}{2}(\rho U(1))^{1-\alpha_{1} }\gamma(\alpha_{1},1)\right)G_{1}(t_{1})\] \[-\frac{\rho U(2)h-2}{2}G_{2,0}-\frac{\rho U(2)h}{2}G_{2}(t_{1}).\]
Denote
\[\begin{split}a_{11}&=\frac{m_{1}B_{\alpha_{1}}^{-1}}{\Gamma(\alpha_{1})}\left(\frac{h^{\alpha_{1}}}{\alpha_{1}(\alpha_{1}+1)}+\frac{h}{2}(\rho U(1))^{1-\alpha_{1}}\gamma(\alpha_{1},1)\right)-\frac{\rho U(1)h}{2},\\ a_{12}&=-\frac{m_{2}B_{\alpha_{2}}^{-1}}{\Gamma(\alpha_{2})}\left(\frac{h^{\alpha_{2}}}{\alpha_{2}(\alpha_{2}+1)}+\frac{h}{2}(\rho U(2))^{1-\alpha_{2}}\gamma(\alpha_{2},1)\right),\\ a_{21}&=-\frac{m_{1}B_{\alpha_{1}}^{-1}}{\Gamma(\alpha_{1})}\left(\frac{h^{\alpha_{1}}}{\alpha_{1}(\alpha_{1}+1)}+\frac{h}{2}(\rho U(1))^{1-\alpha_{1}}\gamma(\alpha_{1},1)\right),\\ a_{22}&=\frac{m_{2}B_{\alpha_{2}}^{-1}}{\Gamma(\alpha_{2})}\left(\frac{h^{\alpha_{2}}}{\alpha_{2}(\alpha_{2}+1)}+\frac{h}{2}(\rho U(2))^{1-\alpha_{2}}\gamma(\alpha_{2},1)\right)-\frac{\rho U(2)h}{2},\end{split}\]
and
\[b_{1}^{(0)}= \frac{m_{1}B_{\alpha_{1}}^{-1}}{\Gamma(\alpha_{1})}\left(\frac{h ^{\alpha_{1}}}{\alpha_{1}(\alpha_{1}+1)}\alpha_{1}e^{-\rho U(1)h}+\frac{h}{2} (\rho U(1))^{1-\alpha_{1}}\gamma(\alpha_{1},1)\right)G_{1,0}-\frac{\rho U(1)h -2}{2}G_{1,0}\] \[-\frac{m_{2}B_{\alpha_{2}}^{-1}}{\Gamma(\alpha_{2})}\left(\frac {h^{\alpha_{2}}}{\alpha_{2}(\alpha_{2}+1)}\alpha_{2}e^{-\rho U(2)h}+\frac{h}{2 }(\rho U(2))^{1-\alpha_{2}}\gamma(\alpha_{2},1)\right)G_{2,0},\] \[b_{2}^{(0)}= \frac{m_{2}B_{\alpha_{2}}^{-1}}{\Gamma(\alpha_{2})}\left(\frac {h^{\alpha_{2}}}{\alpha_{2}(\alpha_{2}+1)}\alpha_{2}e^{-\rho U(2)h}+\frac{h}{2 }(\rho U(2))^{1-\alpha_{2}}\gamma(\alpha_{2},1)\right)G_{2,0}-\frac{\rho U(2)h -2}{2}G_{2,0}\] \[-\frac{m_{1}B_{\alpha_{1}}^{-1}}{\Gamma(\alpha_{1})}\left(\frac {h^{\alpha_{1}}}{\alpha_{1}(\alpha_{1}+1)}\alpha_{1}e^{-\rho U(1)h}+\frac{h}{2 }(\rho U(1))^{1-\alpha_{1}}\gamma(\alpha_{1},1)\right)G_{1,0}.\]
Then, there holds
\[\begin{bmatrix}1&0\\ 0&1\end{bmatrix}\begin{bmatrix}G_{1}(t_{1})\\ G_{2}(t_{1})\end{bmatrix}=\begin{bmatrix}b_{1}^{(0)}\\ b_{2}^{(0)}\end{bmatrix}+\begin{bmatrix}a_{11}&a_{12}\\ a_{21}&a_{22}\end{bmatrix}\cdot\begin{bmatrix}G_{1}(t_{1})\\ G_{2}(t_{1})\end{bmatrix}.\]
Furthermore, we can obtain that
\[\begin{bmatrix}G_{1}(t_{1})\\ G_{2}(t_{1})\end{bmatrix}=\begin{bmatrix}1-a_{11}&-a_{12}\\ -a_{21}&1-a_{22}\end{bmatrix}^{-1}\cdot\begin{bmatrix}b_{1}^{(0)}\\ b_{2}^{(0)}\end{bmatrix}.\] (C.5)
Since the matrix \(\begin{bmatrix}1-a_{11}&-a_{12}\\ -a_{21}&1-a_{22}\end{bmatrix}\) is strictly diagonally dominant for sufficiently small \(h\), it is invertible.
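The first step (C.5) is just a \(2\times 2\) linear solve once the coefficients \(a_{pq}\) and \(b_{j}^{(0)}\) are assembled. The sketch below (hypothetical parameter values, not the authors' MATLAB code) implements it in Python; note that `scipy.special.gammainc` is the regularized lower incomplete gamma, so \(\gamma(\alpha_{j},1)\) is recovered by multiplying with \(\Gamma(\alpha_{j})\).

```python
# A minimal sketch (hypothetical parameters) of the first time-marching step (C.5):
# assemble the 2x2 system for (G_1(t_1), G_2(t_1)) and solve it.
import numpy as np
from scipy.special import gamma as Gamma, gammainc

alpha = np.array([0.6, 0.4])      # alpha_1, alpha_2
mB = np.array([2/3, 3/4])         # m_j * B_{alpha_j}^{-1} (assumed values)
rhoU = np.array([1.5, 2.25])      # rho * U(j)
G0 = np.array([0.55, 0.45])       # initial values G_{1,0}, G_{2,0}
h = 1e-3                          # time step

low_inc = gammainc(alpha, 1.0) * Gamma(alpha)              # gamma(alpha_j, 1)
w_new = h**alpha / (alpha * (alpha + 1)) + 0.5 * h * rhoU**(1 - alpha) * low_inc
w_old = (h**alpha / (alpha * (alpha + 1)) * alpha * np.exp(-rhoU * h)
         + 0.5 * h * rhoU**(1 - alpha) * low_inc)
c = mB / Gamma(alpha)                                      # m_j B_{alpha_j}^{-1} / Gamma(alpha_j)

A = np.array([[c[0] * w_new[0] - 0.5 * rhoU[0] * h, -c[1] * w_new[1]],
              [-c[0] * w_new[0],                     c[1] * w_new[1] - 0.5 * rhoU[1] * h]])
b = np.array([c[0] * w_old[0] * G0[0] - c[1] * w_old[1] * G0[1] - 0.5 * (rhoU[0] * h - 2) * G0[0],
              c[1] * w_old[1] * G0[1] - c[0] * w_old[0] * G0[0] - 0.5 * (rhoU[1] * h - 2) * G0[1]])

G_t1 = np.linalg.solve(np.eye(2) - A, b)                   # (G_1(t_1), G_2(t_1)) as in (C.5)
print(G_t1)
```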
**Case II:** For \(n\geq 1\), there are
\[G_{1}(t_{n+1})= \frac{m_{1}B_{\alpha_{1}}^{-1}}{\Gamma(\alpha_{1})}\left(\frac{h ^{\alpha_{1}}}{\alpha_{1}(\alpha_{1}+1)}\sum_{j=0}^{n}d_{j,n}^{(1)}G_{1}(t_{j}) +(\rho U(1))^{1-\alpha_{1}}\gamma(\alpha_{1},1)\left(\frac{h}{2}G_{1,0}+h \sum_{j=1}^{n}G_{1}(t_{j})\right)\right)\] \[+\frac{m_{1}B_{\alpha_{1}}^{-1}}{\Gamma(\alpha_{1})}\left(\frac{h ^{\alpha_{1}}}{\alpha_{1}(\alpha_{1}+1)}+\frac{h}{2}(\rho U(1))^{1-\alpha_{1}} \gamma(\alpha_{1},1)\right)G_{1}(t_{n+1})\]
\[-\rho U(1)\left(\frac{h}{2}G_{1,0}+h\sum_{j=1}^{n}G_{1}(t_{j}) \right)+G_{1,0},\]
with an analogous expression for \(G_{2}(t_{n+1})\). Collecting the terms known from the previous time levels into \(b_{1}^{(n)}\) and \(b_{2}^{(n)}\), analogously to \(b_{1}^{(0)}\) and \(b_{2}^{(0)}\) above, we have, for instance,
\[b_{2}^{(n)}= \frac{m_{2}B_{\alpha_{2}}^{-1}}{\Gamma(\alpha_{2})}\left(\frac{h^{ \alpha_{2}}}{\alpha_{2}(\alpha_{2}+1)}\sum_{j=0}^{n}d_{j,n}^{(2)}G_{2}(t_{j})+ (\rho U(2))^{1-\alpha_{2}}\gamma(\alpha_{2},1)\left(\frac{h}{2}G_{2,0}+h\sum_ {j=1}^{n}G_{2}(t_{j})\right)\right)\] \[-\rho U(2)\left(\frac{h}{2}G_{2,0}+h\sum_{j=1}^{n}G_{2}(t_{j}) \right)+G_{2,0}.\]
Thus,
\[\begin{bmatrix}G_{1}(t_{n+1})\\ G_{2}(t_{n+1})\end{bmatrix}=\begin{bmatrix}1-a_{11}&-a_{12}\\ -a_{21}&1-a_{22}\end{bmatrix}^{-1}\cdot\begin{bmatrix}b_{1}^{(n)}\\ b_{2}^{(n)}\end{bmatrix},\;\;n=1,2,\cdots,M.\] (C.7)
Combining (C.5) with (C.7), the time marching scheme for system (11) is obtained.
## Acknowledgments
The authors Ma and Deng are supported by the National Natural Science Foundation of China under Grant No. 12071195, the AI and Big Data Funds under Grant No. 2019620005000775, and the Innovative Groups of Basic Research in Gansu Province under Grant No. 22JR5RA391. The author Zhao is supported by Guangdong Basic and Applied Basic Research Foundation under Grant No. 2022A1515011332. The authors have no relevant financial or non-financial interests to disclose.
|
2307.06137 | Distribution-on-Distribution Regression with Wasserstein Metric:
Multivariate Gaussian Case | Distribution data refers to a data set where each sample is represented as a
probability distribution, a subject area receiving burgeoning interest in the
field of statistics. Although several studies have developed
distribution-to-distribution regression models for univariate variables, the
multivariate scenario remains under-explored due to technical complexities. In
this study, we introduce models for regression from one Gaussian distribution
to another, utilizing the Wasserstein metric. These models are constructed
using the geometry of the Wasserstein space, which enables the transformation
of Gaussian distributions into components of a linear matrix space. Owing to
their linear regression frameworks, our models are intuitively understandable,
and their implementation is simplified because of the optimal transport
problem's analytical solution between Gaussian distributions. We also explore a
generalization of our models to encompass non-Gaussian scenarios. We establish
the convergence rates of in-sample prediction errors for the empirical risk
minimizations in our models. In comparative simulation experiments, our models
demonstrate superior performance over a simpler alternative method that
transforms Gaussian distributions into matrices. We present an application of
our methodology using weather data for illustration purposes. | Ryo Okano, Masaaki Imaizumi | 2023-07-12T12:40:16Z | http://arxiv.org/abs/2307.06137v3 | # Distribution-on-distribution regression with Wasserstein metric: multivariate Gaussian case
###### Abstract.
Distribution data refers to a data set where each sample is represented as a probability distribution, a subject area receiving burgeoning interest in the field of statistics. Although several studies have developed distribution-to-distribution regression models for univariate variables, the multivariate scenario remains under-explored due to technical complexities. In this study, we introduce models for regression from one Gaussian distribution to another, utilizing the Wasserstein metric. These models are constructed using the geometry of the Wasserstein space, which enables the transformation of Gaussian distributions into components of a linear matrix space. Owing to their linear regression frameworks, our models are intuitively understandable, and their implementation is simplified because of the optimal transport problem's analytical solution between Gaussian distributions. We also explore a generalization of our models to encompass non-Gaussian scenarios. We establish the convergence rates of in-sample prediction errors for the empirical risk minimizations in our models. In comparative simulation experiments, our models demonstrate superior performance over a simpler alternative method that transforms Gaussian distributions into matrices. We present an application of our methodology using weather data for illustration purposes.
## 1. Introduction
The analysis of distribution data has gained significant attention in the field of statistics. Distribution data refers to data in which each sample is given in the form of a probability distribution or an empirical distribution generated from it. Examples include age-at-death distributions across different countries, house price distributions of different years, and distributions of voxel-voxel correlations of functional magnetic imaging signals. A distinctive feature of distribution data is that they take values in general metric spaces that lack a vector space structure. Existing complex data analysis methods, such as function or manifold data analysis methods, are inadequate for effectively handling distribution data due to their infinite dimensionality and non-linearity, posing significant challenges in processing. Developing methods and theories for analyzing distribution data is an important and challenging problem for contemporary statistical practice. Refer to [19] for a review of this topic.
A common approach to handling distribution data involves the application of the Wasserstein metric to a set of distributions. The resulting metric space is known as the Wasserstein space ([14]), where distribution data are considered as its elements. There are several advantages to using the Wasserstein metric: it gives more intuitive interpretations of mean and geodesics compared to other metrics, and it reduces errors by rigorously treating constraints as distribution functions. Based on this approach, numerous methods have been proposed for the analysis of distribution data ([2; 18; 17; 7; 4; 9; 26]).
This paper focuses on a problem of distribution-on-distribution regression, that is, the regression of one probability distribution onto another. In the distribution-on-distribution regression problem, the task involves defining a regression map between non-linear spaces, which makes this problem technically challenging. This type of regression has been used for comparing the temporal evolution of age-at-death distributions among different countries ([4], [9]) and for predicting house price distributions in the United States ([4]). For univariate distributions, several studies have investigated distribution-on-distribution regression models using the Wasserstein metric. [4] proposed a model utilizing geometric properties of the Wasserstein space, [26] presented an autoregressive model for distributional time series data, and [9] introduced a model incorporating the optimal transport map associated with the Wasserstein space. However, few studies have proposed distribution-on-distribution regression models for the multivariate case with the Wasserstein metric. For more detail, please refer to Section 3.3 for a comprehensive overview.
In this paper, we propose models for regressing one Gaussian distribution onto another. To define our models, we consider the space of Gaussian distributions equipped with the Wasserstein metric and use its tangent bundle structure to transform Gaussian distributions into matrices. Then, we boil down the Gaussian distribution-on-distribution regression to the matrix-on-matrix linear regression, using the transformation to the tangent bundle. Based on the transformation, we proposed two models: a basic model for the case where predictor and response Gaussian distributions are low-dimensional, and a low-rank model incorporating a low-rank structure in the parameter tensor to address high-dimensional Gaussian distributions. Additionally, we explore the extension of our proposed models to encompass non-Gaussian scenarios.
Our strategy and the model give several advantages: (i) the strategy enables the explicit construction of regression maps using the closed-form expression for the optimal transport problem between Gaussian distributions, (ii) it boils down the distribution-on-distribution regression problem to an easy-to-handle linear model while maintaining the constraint of distributions, and (iii) we can solve the linear model without computational difficulties.
We compare our method to another natural approach, which regresses the mean vector and covariance matrix of the response Gaussian distribution on those of the predictor Gaussian distribution. However, this approach deteriorates prediction accuracy for distributions, since it does not use the structure
of distributions such as the Wasserstein metric. In the simulation studies in Section 6, we compare our proposed models with this alternative approach and find that our models perform better than the alternative approach.
The remaining sections of the paper are organized as follows. In Section 2, we provide some background on the optimal transport and Wasserstein space. In Section 3, we introduce Gaussian distribution-on-distribution regression models and discuss their potential generalizations to accommodate non-Gaussian cases. We present empirical risk minimization algorithms for our models in Section 4, and analyze their in-sample prediction errors in Section 5. We investigate the finite-sample performance of the proposed methods through simulation studies in Section 6, and illustrate the application of the proposed method using weather data in Section 7. Section 8 concludes. Proofs of theorems and additional theoretical results are provided in the Appendix.
### Related Studies
There are several approaches to deal with distribution data apart from the Wasserstein metric approach. [16] introduced the log quantile density transformation, enabling the utilization of functional data methods for distribution data. The Bayes space approach has also been proposed as a viable solution for handling distribution data ([6, 21, 20]).
Within the framework of the Wasserstein metric approach, significant developments have been made in methods and theories for analyzing distribution data. [25] considered the estimation for the Frechet mean, a notion of mean in the Wasserstein space, from distribution samples. [3] established the minimax rates of convergence for these estimators. [18] proposed the Wasserstein covariance measure for dependent density data. [2] developed the method of geodesic principal component analysis on the Wasserstein space.
Various regression models utilizing the Wasserstein metric have been proposed for distribution data. [17] developed regression models for coupled vector predictors and univariate random distributions as responses. [7] developed regression models for multivariate response distributions. [4] and [9] proposed regression models for scenarios where both regressors and responses are random distributions, and [10] studies its extension to the multivariate case. [26] developed autoregressive models for density time series data.
### Notation
For \(d\geq 1\), we denote the identity matrix of size \(d\times d\) as \(I_{d}\). \(\mathrm{Sym}(d)\) is a set of all symmetric matrices of size \(d\times d\). For a positive semidefinite matrix \(A\), we denote its positive square root as \(A^{1/2}\). \(\mathrm{id}(\cdot)\) is the identity map. For a Borel measurable function \(f:\mathbb{R}^{d}\to\mathbb{R}^{d}\) and Borel probability measure \(\mu\) on \(\mathbb{R}^{d}\), \(f\#\mu\) is the push-forward measure defined by \(f\#\mu(\Omega)=\mu(f^{-1}(\Omega))\) for any Borel set \(\Omega\) in \(\mathbb{R}^{d}\). \(\|\cdot\|\) denotes the Euclidean norm. \(\mathcal{L}_{\mu}^{2}(\mathbb{R}^{d})\) is the set of functions \(f:\mathbb{R}^{d}\to\mathbb{R}^{d}\) such that \(\int\|f(x)\|^{2}d\mu(x)<\infty\), and is a Hilbert space with an inner product \(\langle\cdot,\cdot\rangle_{\mu}\) defined as \(\langle f,g\rangle_{\mu}=\int_{\mathbb{R}^{d}}f(x)^{\top}g(x)d\mu(x)\) for \(f,g\in\mathcal{L}_{\mu}^{2}(\mathbb{R}^{d})\). We denote the norm induced by this inner product as \(\|\cdot\|_{\mu}\).
For a matrix \(A\in\mathbb{R}^{d_{1}\times d_{2}}\), we denote its elements as \(A[p,q]\) for \(1\leq p\leq d_{1}\) and \(1\leq q\leq d_{2}\). For a tensor \(\mathbb{A}\in\mathbb{R}^{d_{1}\times d_{2}\times d_{3}\times d_{4}}\), we denote its elements as \(\mathbb{A}[p,q,r,s]\) for \(1\leq p\leq d_{1},1\leq q\leq d_{2},1\leq r\leq d_{3}\) and \(1\leq s\leq d_{4}\). For a tensor \(\mathbb{A}\in\mathbb{R}^{d_{1}\times d_{2}\times d_{3}\times d_{4}}\) and indices \(1\leq r\leq d_{3},1\leq s\leq d_{4}\), let \(\mathbb{A}[\cdot,\cdot,r,s]\in\mathbb{R}^{d_{1}\times d_{2}}\) denote the \(d_{1}\times d_{2}\) matrix whose \((p,q)\)-elements are given by \(\mathbb{A}[p,q,r,s]\). For vectors \(a_{1}\in\mathbb{R}^{d_{1}},a_{2}\in\mathbb{R}^{d_{2}},a_{3}\in\mathbb{R}^{d_{3}}\) and \(a_{4}\in\mathbb{R}^{d_{4}}\), let define the outer product \(\mathbb{A}=a_{1}\circ a_{2}\circ a_{3}\circ a_{4}\in\mathbb{R}^{d_{1}\times d_ {2}\times d_{3}\times d_{4}}\) by \(\mathbb{A}[p,q,r,s]=a_{1}[p]a_{2}[q]a_{3}[r]a_{4}[s]\). For two matrices \(A_{1},A_{2}\in\mathbb{R}^{d_{1}\times d_{2}}\), we define their inner product \(\langle A_{1},A_{2}\rangle\in\mathbb{R}\) as \(\langle A_{1},A_{2}\rangle=\sum_{p=1}^{d_{1}}\sum_{q=1}^{d_{2}}A_{1}[p,q]A_{2}[ p,q]\). Furthermore, for a tensor \(\mathbb{A}\in\mathbb{R}^{d_{1}\times d_{2}\times d_{3}\times d_{4}}\) and a matrix \(A\in\mathbb{R}^{d_{1}\times d_{2}}\), we define their product \(\langle A,\mathbb{A}\rangle_{2}\in\mathbb{R}^{d_{3}\times d_{4}}\) as \(\langle A,\mathbb{A}\rangle_{2}[r,s]=\sum_{p=1}^{d_{1}}\sum_{q=1}^{d_{2}}A[p,q ]\mathbb{A}[p,q,r,s]\) for \(1\leq r\leq d_{3}\) and \(1\leq s\leq d_{4}\).
## 2. Background
In this section, we provide some background on optimal transport, the Wasserstein space, and its tangent space. For more background, see e.g., [23], [1] and [14].
### Optimal Transport
Let \(\mathcal{W}(\mathbb{R}^{d})\) be the set of Borel probability distributions on \(\mathbb{R}^{d}\) with finite second moments. The \(2\)-Wasserstein distance between \(\mu_{1},\mu_{2}\in\mathcal{W}(\mathbb{R}^{d})\) is defined by
\[d_{W}(\mu_{1},\mu_{2})=\left(\inf_{\pi\in\Pi(\mu_{1},\mu_{2})} \int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}\|x-y\|^{2}d\pi(x,y)\right)^{\!1/2}. \tag{1}\]
Here, \(\Pi(\mu_{1},\mu_{2})\) is the set of couplings of \(\mu_{1}\) and \(\mu_{2}\), that is, the set of joint distributions on \(\mathbb{R}^{d}\times\mathbb{R}^{d}\) with marginal distributions \(\mu_{1}\) and \(\mu_{2}\). In our setting, the minimizer \(\pi\) in (1) always exists (Theorem 4.1 in [23]), and is called an optimal coupling. When \(\mu_{1}\) is absolutely continuous with respect to the Lebesgue measure, there exists a map \(T:\mathbb{R}^{d}\to\mathbb{R}^{d}\) such that the joint distribution of \((\bar{W},T(\bar{W}))\), where \(\bar{W}\sim\mu_{1}\), is an optimal coupling in (1), and such a map \(T\) is uniquely determined \(\mu_{1}\)-almost everywhere (Theorem 1.6.2 in [14]). The map \(T\) is called the optimal transport map between \(\mu_{1}\) and \(\mu_{2}\), and we denote it as \(T_{\mu_{1}}^{\mu_{2}}\). When \(d=1\), the optimal transport map has the following closed-form expression (Section 1.5 in [14]):
\[T_{\mu_{1}}^{\mu_{2}}(x)=F_{\mu_{2}}^{-1}\circ F_{\mu_{1}}(x), \quad x\in\mathbb{R}, \tag{2}\]
where \(F_{\mu_{1}}\) is the cumulative distribution function of \(\mu_{1}\), and \(F_{\mu_{2}}^{-1}\) is the quantile function of \(\mu_{2}\).
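The univariate map (2) is straightforward to evaluate whenever the cdf and quantile function are available. A short sketch (not from the paper; the two Gaussian laws are only an example) is given below.

```python
# A short sketch (not from the paper) of the one-dimensional optimal transport map (2),
# T = F_{mu2}^{-1} o F_{mu1}, illustrated with two Gaussian distributions via scipy.stats.
import numpy as np
from scipy.stats import norm

mu1 = norm(loc=0.0, scale=1.0)     # source distribution
mu2 = norm(loc=2.0, scale=0.5)     # target distribution

def T(x):
    """Optimal transport map T_{mu1}^{mu2}(x) = F_{mu2}^{-1}(F_{mu1}(x))."""
    return mu2.ppf(mu1.cdf(x))

x = mu1.rvs(size=100_000, random_state=0)
y = T(x)                            # pushforward samples, distributed as mu2
print(y.mean(), y.std())            # close to (2.0, 0.5)
```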
### The Wasserstein Space and its Tangent Space
The Wasserstein distance \(d_{W}\) is a metric on \(\mathcal{W}(\mathbb{R}^{d})\) (Chapter 6 in [23]), and the metric space \((\mathcal{W}(\mathbb{R}^{d}),d_{W})\) is called the Wasserstein space. We give a notion of a linear space induced from the Wasserstein space, by applying the the basic concepts of Riemannian manifolds, as shown in [1], [2] and [14].
Let arbitrarily fix a reference measure \(\mu_{*}\in\mathcal{W}(\mathbb{R}^{d})\) which is absolutely continuous with respect to the Lebesgue measure. For any \(\mu\in\mathcal{W}(\mathbb{R}^{d})\), the geodesic from \(\mu_{*}\) to \(\mu\),
\([0,1]\to\mathcal{W}(\mathbb{R}^{d})\), is given by
\[\gamma_{\mu_{*},\mu}(t)=[t(T^{\mu}_{\mu_{*}}-\mathrm{id})+\mathrm{id}]\#\mu_{*}, \quad t\in[0,1].\]
The tangent space of the Wasserstein space at \(\mu_{*}\) is defined by
\[\mathcal{T}_{\mu_{*}}=\overline{\{t(T^{\mu}_{\mu_{*}}-\mathrm{id}):\mu\in \mathcal{W}(\mathbb{R}^{d}),t>0\}}, \tag{3}\]
where the upper bar denotes the closure in terms of the norm \(\|\cdot\|_{\mu_{*}}\) in the space \(\mathcal{L}^{2}_{\mu_{*}}(\mathbb{R}^{d})\). The space \(\mathcal{T}_{\mu_{*}}\) is a subspace of \(\mathcal{L}^{2}_{\mu_{*}}(\mathbb{R}^{d})\) (Theorem 8.5.1 in [1]). The exponential map \(\mathrm{Exp}_{\mu_{*}}:\mathcal{T}_{\mu_{*}}\to\mathcal{W}(\mathbb{R}^{d})\) is then defined by
\[\mathrm{Exp}_{\mu_{*}}g=(g+\mathrm{id})\#\mu_{*},\quad g\in\mathcal{T}_{\mu_{ *}},\]
and as its right inverse, the logarithmic map \(\mathrm{Log}_{\mu_{*}}:\mathcal{W}(\mathbb{R}^{d})\to\mathcal{T}_{\mu_{*}}\) is given by
\[\mathrm{Log}_{\mu_{*}}\mu=T^{\mu}_{\mu_{*}}-\mathrm{id},\quad\mu\in\mathcal{W }(\mathbb{R}^{d}). \tag{4}\]
When \(d=1\), the logarithmic map is isometric in the sense that
\[\|\mathrm{Log}_{\mu_{*}}\mu_{1}-\mathrm{Log}_{\mu_{*}}\mu_{2}\|_{\mu_{*}}=d_{ W}(\mu_{1},\mu_{2}) \tag{5}\]
for all \(\mu_{1},\mu_{2}\in\mathcal{W}(\mathbb{R})\) (Section 2.3.2 in [14]). Remind that \(\|\cdot\|_{\mu^{*}}\) is the norm of \(\mathcal{L}^{2}_{\mu_{*}}(\mathbb{R}^{d})\) with the reference measure \(\mu^{*}\), as defined in Section 1.2.
### Specification with Gaussian Case
We restrict our attention to the Gaussian measures. Let \(\mathcal{G}(\mathbb{R}^{d})\) be the set of Gaussian distributions on \(\mathbb{R}^{d}\), and we call the metric space \((\mathcal{G}(\mathbb{R}^{d}),d_{W})\) as the Gaussian space.
For two Gaussian measures \(\mu_{1}=N(m_{1},\Sigma_{1})\), \(\mu_{2}=N(m_{2},\Sigma_{2})\in\mathcal{G}(\mathbb{R}^{d})\) with mean vectors \(m_{1},m_{2}\in\mathbb{R}^{d}\) and covariance matrices \(\Sigma_{1},\Sigma_{2}\in\mathbb{R}^{d\times d}\), the \(2\)-Wasserstein distance between them has the following closed-form expression (Section 1.6.3 in [14]):
\[d_{W}(\mu_{1},\mu_{2})=\sqrt{\|m_{1}-m_{2}\|^{2}+\mathrm{tr}[\Sigma_{1}+ \Sigma_{2}-2(\Sigma_{1}^{1/2}\Sigma_{2}\Sigma_{1}^{1/2})^{1/2}]}. \tag{6}\]
When \(\Sigma_{1}\) is non-singular, the optimal transport map between \(\mu_{1}\) and \(\mu_{2}\) also has the following closed-form expression (Section 1.6.3 in [14]):
\[T^{\mu_{2}}_{\mu_{1}}(x)=m_{2}+S(\Sigma_{1},\Sigma_{2})(x-m_{1}),\quad x\in \mathbb{R}^{d}, \tag{7}\]
where we define \(S(\Sigma_{1},\Sigma_{2})=\Sigma_{1}^{-1/2}[\Sigma_{1}^{1/2}\Sigma_{2}\Sigma_{ 1}^{1/2}]^{1/2}\Sigma_{1}^{-1/2}\) for two covariance matrices \(\Sigma_{1},\Sigma_{2}\).
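Both closed forms (6) and (7) are easy to evaluate numerically with a matrix square root. The following compact sketch (not the authors' code; the means and covariances are arbitrary examples) implements them.

```python
# A compact sketch (not the authors' code) of the closed forms (6) and (7) for Gaussian
# measures: the 2-Wasserstein distance and the optimal map x -> m2 + S(Sigma1, Sigma2)(x - m1).
import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2(m1, S1, m2, S2):
    """2-Wasserstein distance between N(m1, S1) and N(m2, S2), eq. (6)."""
    root = sqrtm(sqrtm(S1) @ S2 @ sqrtm(S1)).real
    return np.sqrt(np.sum((m1 - m2)**2) + np.trace(S1 + S2 - 2 * root))

def transport_matrix(S1, S2):
    """Matrix S(Sigma1, Sigma2) of the optimal map (7); S1 must be non-singular."""
    R = sqrtm(S1).real
    Ri = np.linalg.inv(R)
    return Ri @ sqrtm(R @ S2 @ R).real @ Ri

m1, S1 = np.zeros(2), np.array([[1.0, 0.3], [0.3, 2.0]])
m2, S2 = np.array([1.0, -1.0]), np.array([[0.5, 0.0], [0.0, 1.5]])
print(gaussian_w2(m1, S1, m2, S2))
print(transport_matrix(S1, S2))
```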
We introduce a tangent space of Gaussian spaces. Fix a Gaussian measure \(\mu_{*}=N(m_{*},\Sigma_{*})\in\mathcal{G}(\mathbb{R}^{d})\) as a reference measure with a non-singular covariance matrix \(\Sigma_{*}\). Replacing \(\mathcal{W}(\mathbb{R}^{d})\) with \(\mathcal{G}(\mathbb{R}^{d})\) in the definition of tangent space (3), we obtain the tangent space by a form of a function space
\[\mathcal{T}\mathcal{G}_{\mu_{*}}=\overline{\{t(T^{\mu}_{\mu_{*}}-\mathrm{id}): \mu\in\mathcal{G}(\mathbb{R}^{d}),t>0\}}.\]
Using the form of the optimal transport map (7), a function in the tangent space \(\mathcal{TG}_{\mu_{*}}\) has the following form
\[t(T^{\mu}_{\mu_{*}}-\mathrm{id})(x)=t(m-S(\Sigma_{*},\Sigma)m_{*})+t(S(\Sigma_{* },\Sigma)-I_{d})x,\quad x\in\mathbb{R}^{d}. \tag{8}\]
This form implies that the function space \(\mathcal{TG}_{\mu_{*}}\) is a set of affine functions of \(x\in\mathbb{R}^{d}\). Note that \(\mathrm{Exp}_{\mu_{*}}g\in\mathcal{G}(\mathbb{R}^{d})\) holds for any \(g\in\mathcal{TG}_{\mu_{*}}\), and also \(\mathrm{Log}_{\mu_{*}}\mu\in\mathcal{TG}_{\mu_{*}}\) holds for any \(\mu\in\mathcal{G}(\mathbb{R}^{d})\).
## 3. Model
In this section, we define regression models between Gaussian spaces using the above notion of tangent spaces. We first present our key idea of modeling and then develop two models.
### Idea: Nearly isometry between Gaussian Space and Linear Matrix Space
As our key idea, we give a nearly isometric map from Gaussian space \(\mathcal{G}(\mathbb{R}^{d})\) to a linear matrix space. For \(d\geq 1\), we define a set of symmetric matrices as
\[\Xi_{d}=\{(a,V)\in\mathbb{R}^{d\times(d+1)}:a\in\mathbb{R}^{d},V\in\mathrm{Sym }(d)\},\]
which is obviously a linear space. We will give a map from \(\mathcal{G}(\mathbb{R}^{d})\) to \(\Xi_{d}\) and show that this map has certain isometric properties. This isometry map plays a critical role in our regression model, given in the next subsection. We fix a non-singular Gaussian measure \(\mu_{*}=N(m_{*},\Sigma_{*})\in\mathcal{G}(\mathbb{R}^{d})\) as a reference measure.
Preliminarily, we introduce an inner product on the space \(\Xi_{d}\). For \((a,V),(b,U)\in\Xi_{d}\), we define
\[\langle(a,V),(b,U)\rangle_{m_{*},\Sigma_{*}}=(a+Vm_{*})^{\top}(b+Um_{*})+ \mathrm{tr}(V\Sigma_{*}U).\]
Then we can easily check that \(\langle\cdot,\cdot\rangle_{m_{*},\Sigma_{*}}\) satisfies the conditions of inner product. This design follows an inner product for a space of affine functions. Rigorously, for \(a\in\mathbb{R}^{d}\) and \(V\in\mathrm{Sym}(d)\), we define an affine function \(f_{a,V}(x)=a+Vx\) and its space \(\mathcal{F}_{\mathrm{aff}}=\{f_{a,V}:a\in\mathbb{R}^{d},V\in\mathrm{Sym}(d)\}\). Note that \(\mathcal{TG}_{\mu_{*}}\subset\mathcal{F}_{\mathrm{aff}}\) holds from (8). Then we consider an inner product between \(f_{a,V},f_{b,U}\in\mathcal{F}_{\mathrm{aff}}\) with \((a,V),(b,U)\in\Xi_{d}\) as
\[\langle f_{a,V},f_{b,U}\rangle_{\mu_{*}}=\int_{\mathbb{R}^{d}}(a+Vx)^{\top}(b+ Ux)d\mu_{*}(x)=(a+Vm_{*})^{\top}(b+Um_{*})+\mathrm{tr}(V\Sigma_{*}U).\]
Inspired by the design, we obtain an inner product space \((\Xi_{d},\langle\cdot,\cdot\rangle_{(m_{*},\Sigma_{*})})\). The norm \(\|\cdot\|_{(m_{*},\Sigma_{*})}\) induced by this inner product is specified as
\[\|(a,V)\|_{(m_{*},\Sigma_{*})}=\sqrt{\|a+Vm_{*}\|^{2}+\mathrm{tr}(V\Sigma_{*}V )}. \tag{9}\]
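A tiny sketch of the inner product and the induced norm (9) on \(\Xi_{d}\) is given below; it is not from the paper, and the numerical values of \((a,V)\), \(m_{*}\), and \(\Sigma_{*}\) are arbitrary.

```python
# A tiny sketch (not from the paper) of the inner product and norm on Xi_d defined above,
# for pairs (a, V) with V symmetric.
import numpy as np

def inner(aV1, aV2, m_star, Sigma_star):
    (a, V), (b, U) = aV1, aV2
    return (a + V @ m_star) @ (b + U @ m_star) + np.trace(V @ Sigma_star @ U)

def norm(aV, m_star, Sigma_star):
    return np.sqrt(inner(aV, aV, m_star, Sigma_star))

m_star, Sigma_star = np.array([0.0, 1.0]), np.diag([1.0, 2.0])
a, V = np.array([0.5, -0.2]), np.array([[1.0, 0.1], [0.1, 0.3]])
print(norm((a, V), m_star, Sigma_star))
```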
We construct a nearly isometric map \(\varphi_{\mu_{*}}\) from \((\mathcal{G}(\mathbb{R}^{d}),d_{W})\) to \((\Xi_{d},\|\cdot\|_{(m_{*},\Sigma_{*})})\) as
\[\varphi_{\mu_{*}}=\pi\circ\psi_{\mu_{*}}. \tag{10}\]
We specify the maps \(\psi_{\mu_{*}}:\mathcal{G}(\mathbb{R}^{d})\to\mathcal{TG}_{\mu_{*}}\) and \(\pi:\mathcal{F}_{\mathrm{aff}}\to\Xi_{d}\) as follows. First, \(\psi_{\mu_{*}}\) is the logarithm map \(\mathrm{Log}_{\mu_{*}}(\cdot)\) as (4) with restriction to \(\mathcal{G}(\mathbb{R}^{d})\). That is, for \(\mu=N(m,\Sigma)\in\mathcal{G}(\mathbb{R}^{d})\), \(\psi_{\mu_{*}}\mu\) is the affine function of the form (8). Second, for an affine function \(f_{a,V}\in\mathcal{F}_{\mathrm{aff}}\), we define \(\pi f_{a,V}=(a,V).\) For summary, the map \(\varphi_{\mu_{*}}:\mathcal{G}(\mathbb{R}^{d})\to\Xi_{d}\) in (10) is specified as
\[\varphi_{\mu_{*}}\mu=(m-S(\Sigma_{*},\Sigma)m_{*},S(\Sigma_{*},\Sigma)-I),\quad \mu=N(m,\Sigma)\in\mathcal{G}(\mathbb{R}^{d}). \tag{11}\]
We also define a map \(\xi_{\mu_{*}}:\varphi_{\mu_{*}}\mathcal{G}(\mathbb{R}^{d})\to\mathcal{G}( \mathbb{R}^{d})\) as the left inverse of the map \(\varphi_{\mu_{*}}\) by
\[\xi_{\mu_{*}}(a,V)=N(a+(V+I)m_{*},(V+I)\Sigma_{*}(V+I)),\quad(a,V)\in\varphi_{ \mu_{*}}\mathcal{G}(\mathbb{R}^{d}).\]
Here, a range of the map (11) with the domain \(\mathcal{G}(\mathbb{R}^{d})\) is written as
\[\varphi_{\mu_{*}}\mathcal{G}(\mathbb{R}^{d})=\{(a,V)\in\Xi_{d}:V+I_{d}\,\text{ is positive semidefinite}\},\]
which is obviously a subset of \(\Xi_{d}\).
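The pair \((\varphi_{\mu_{*}},\xi_{\mu_{*}})\) is simple to implement once the matrix \(S(\Sigma_{*},\Sigma)\) is available. The rough sketch below is not the authors' code; `transport_matrix` is the helper sketched earlier for (7), and the reference Gaussian and the input Gaussian are assumed values.

```python
# A rough sketch (not the authors' code) of the map phi_{mu*} in (11) and its left inverse
# xi_{mu*}: a Gaussian N(m, Sigma) is sent to (a, V) = (m - S(Sigma*, Sigma) m*, S(Sigma*, Sigma) - I).
import numpy as np
from scipy.linalg import sqrtm

def transport_matrix(S1, S2):
    """S(Sigma1, Sigma2) = Sigma1^{-1/2}[Sigma1^{1/2} Sigma2 Sigma1^{1/2}]^{1/2} Sigma1^{-1/2}."""
    R = sqrtm(S1).real
    Ri = np.linalg.inv(R)
    return Ri @ sqrtm(R @ S2 @ R).real @ Ri

def phi(m, Sigma, m_star, Sigma_star):
    S = transport_matrix(Sigma_star, Sigma)          # S(Sigma*, Sigma)
    return m - S @ m_star, S - np.eye(len(m))

def xi(a, V, m_star, Sigma_star):
    M = V + np.eye(len(a))
    return a + M @ m_star, M @ Sigma_star @ M        # mean and covariance of xi(a, V)

m_star, Sigma_star = np.zeros(2), np.eye(2)          # reference Gaussian (assumed)
m, Sigma = np.array([1.0, 0.5]), np.array([[2.0, 0.4], [0.4, 1.0]])
a, V = phi(m, Sigma, m_star, Sigma_star)
print(xi(a, V, m_star, Sigma_star))                  # recovers (m, Sigma)
```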
We obtain results on the distance-preserving property of the map \(\varphi_{\mu_{*}}\). As a preparation, for a \(d\times d\) orthogonal matrix \(U\), we define a class of Gaussian measures \(\mathscr{C}_{U}\subset\mathcal{G}(\mathbb{R}^{d})\) as
\[\mathscr{C}_{U}=\{N(m,\Sigma)\in\mathcal{G}(\mathbb{R}^{d}):m\in\mathbb{R}^{d },\,\,\,U\Sigma U^{\top}\text{is diagonal}\}.\]
Here, we give a formal statement.
**Proposition 1**.: _Let \(\mu_{*}\in\mathcal{G}(\mathbb{R}^{d})\) be an arbitrary fixed reference measure. For any \(\mu\in\mathcal{G}(\mathbb{R}^{d})\), we have_
\[d_{W}(\mu,\mu_{*})=\|\varphi_{\mu_{*}}\mu\|_{(m_{*},\Sigma_{*})}.\]
_Moreover, if \(\mu_{*}\in\mathscr{C}_{U}\) holds, we have the following for any \(\mu_{1},\mu_{2}\in\mathscr{C}_{U}\):_
\[d_{W}(\mu_{1},\mu_{2})=\|\varphi_{\mu_{*}}\mu_{1}-\varphi_{\mu_{*}}\mu_{2}\|_{ (m_{*},\Sigma_{*})}.\]
Note that since \(\varphi_{\mu_{*}}\mu_{*}=0\) holds, the first claim shows that the Wasserstein distance between any Gaussian measure \(\mu\) and the reference Gaussian measure \(\mu_{*}\) is equal to the distance between corresponding elements in the space \((\Xi_{d},\|\cdot\|_{(m_{*},\Sigma_{*})})\). The second claim shows that if we choose a class of Gaussian measures appropriately, the map \(\varphi_{\mu_{*}}\) is isometric on that class. This isometric property is essentially illustrated in Section 2.3.2 in [14] for the case of centered Gaussian distributions. Our claim can be understood as its generalization to the non-centered case.
### Regression Model
In this section, we develop our regression models for the Gaussian-to-Gaussian distribution regression. Our strategy is to map Gaussian distributions to the linear matrix spaces using the nearly isometric maps and then conduct linear regression between the matrix spaces. Figure 1 illustrates the strategy. Specifically, we develop the
following two models: (i) a basic model, and (ii) a low-rank model. See Section 1.2 for the notation regarding matrices and tensors.
We review the setup of the regression problem. Let \(d_{1}\) and \(d_{2}\) be positive integers and \(\mathcal{F}\) be a joint distribution on \(\mathcal{G}(\mathbb{R}^{d_{1}})\times\mathcal{G}(\mathbb{R}^{d_{2}})\). Let \((\nu_{1},\nu_{2})\) be a pair of random elements generated by \(\mathcal{F}\), where we write \(\nu_{1}=N(m_{1},\Sigma_{1})\) and \(\nu_{2}=N(m_{2},\Sigma_{2})\). We assume \(\nu_{1}\) and \(\nu_{2}\) are square integrable in the sense that \(\max\{\mathbb{E}[d_{W}^{2}(\mu_{1},\nu_{1})],\mathbb{E}[d_{W}^{2}(\mu_{2},\nu_ {2})]\}<\infty\) for some (and thus for all) \(\mu_{1}\in\mathcal{G}(\mathbb{R}^{d_{1}})\) and \(\mu_{2}\in\mathcal{G}(\mathbb{R}^{d_{2}})\). In the following, we give models for dealing with this joint distribution \(\mathcal{F}\).
#### 3.2.1. Basic model
The first step is to define reference measures to introduce the nearly isometric maps. For \(j\in\{1,2\}\), we define the Frechet mean of the random Gaussian distribution \(\nu_{j}\) as
\[\nu_{j\oplus}=N(m_{j\oplus},\Sigma_{j\oplus})=\operatorname*{arg\,min}_{\mu_{ j}\in\mathcal{G}(\mathbb{R}^{d_{j}})}\mathbb{E}[d_{W}^{2}(\mu_{j},\nu_{j})],\]
with the mean vector \(m_{j\oplus}\in\mathbb{R}^{d_{j}}\) and the covariance matrix \(\Sigma_{j\oplus}\in\mathbb{R}^{d_{j}\times d_{j}}\). Note that the Frechet means \(\nu_{1\oplus}\) and \(\nu_{2\oplus}\) are also Gaussian, and we assume they uniquely exist and are non-singular.
Using the Frechet means \(\nu_{1\oplus}\) and \(\nu_{2\oplus}\) as reference measures, we transform random Gaussian distributions \(\nu_{1}\) and \(\nu_{2}\) to random elements \(X\in\Xi_{d_{1}}\) and \(Y\in\Xi_{d_{2}}\) by
\[X=\varphi_{\nu_{1\oplus}}\nu_{1},\text{ and }Y=\varphi_{\nu_{2\oplus}}\nu_{2},\]
where \(\varphi_{\nu_{1\oplus}}\) and \(\varphi_{\nu_{2\oplus}}\) are the nearly isometric maps in (11).
For the random matrices \(X\) and \(Y\) transformed from the random distributions \(\nu_{1}\) and \(\nu_{2}\) as above, we perform a matrix-to-matrix linear regression. To this aim, we consider a coefficient tensor \(\mathbb{B}\in\mathbb{R}^{d_{1}\times(d_{1}+1)\times d_{2}\times(d_{2}+1)}\) and define its associated linear map
\[\Gamma_{\mathbb{B}}:\mathbb{R}^{d_{1}\times(d_{1}+1)}\to\mathbb{R}^{d_{2} \times(d_{2}+1)},A\mapsto\langle A,\mathbb{B}\rangle_{2}.\]
Recall that \(\langle\cdot,\cdot\rangle_{2}\) is the product for tensors defined in Section 1.2. To deal with the symmetry of matrices in \(\Xi_{d_{1}}\) and \(\Xi_{d_{2}}\), we define the following class of coefficient tensors:
\[\mathcal{B} =\{\mathbb{B}\in\mathbb{R}^{d_{1}\times(d_{1}+1)\times d_{2} \times(d_{2}+1)}\] \[\quad:\mathbb{B}[\cdot,\cdot,r,s]=\mathbb{B}[\cdot,\cdot,s-1,r+1 ]\,\text{ for }\,1\leq r\leq d_{2},2\leq s\leq d_{2}+1\}. \tag{12}\]
This definition guarantees \(\langle A,\mathbb{B}\rangle_{2}\in\Xi_{d_{2}}\) holds for any \(\mathbb{B}\in\mathcal{B}\) and \(A\in\Xi_{d_{1}}\).
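As an illustration, the sketch below assumes that \(\langle A,\mathbb{B}\rangle_{2}\) is the contraction of the first two modes of \(\mathbb{B}\) against \(A\), which is consistent with the element-wise form (14) below; the exact definition is given in Section 1.2 of the paper. It also verifies numerically that the symmetry condition (12) forces \(\langle A,\mathbb{B}\rangle_{2}\in\Xi_{d_{2}}\). Function names are ours.

```python
import numpy as np

def contract2(A, B):
    # <A, B>_2 as assumed here: contraction of the first two modes of B with A,
    # so that the result satisfies the element-wise form (14):
    # result[r, s] = <A, B[:, :, r, s]>.
    return np.tensordot(A, B, axes=([0, 1], [0, 1]))

def symmetrize_B(B):
    # Project a coefficient tensor onto the class (12):
    # B[:, :, r, s] = B[:, :, s - 1, r + 1] (0-based indices, s >= 1).
    d2 = B.shape[2]
    Bs = B.copy()
    for r in range(d2):
        for s in range(1, d2 + 1):
            avg = 0.5 * (B[:, :, r, s] + B[:, :, s - 1, r + 1])
            Bs[:, :, r, s] = avg
            Bs[:, :, s - 1, r + 1] = avg
    return Bs

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    d1, d2 = 3, 2
    # An element of Xi_{d1}: A = (a, V) with V symmetric.
    a = rng.normal(size=(d1, 1))
    V = rng.normal(size=(d1, d1))
    V = 0.5 * (V + V.T)
    A = np.hstack([a, V])
    B = symmetrize_B(rng.normal(size=(d1, d1 + 1, d2, d2 + 1)))
    Y = contract2(A, B)                     # Y has shape d2 x (d2 + 1)
    # The condition (12) guarantees the "V part" of Y is symmetric, i.e. Y in Xi_{d2}.
    print(np.allclose(Y[:, 1:], Y[:, 1:].T))
```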
We now give the linear regression model. We assume that the \((\Xi_{d_{1}}\times\Xi_{d_{2}})\)-valued random element \((X,Y)\), which is obtained by transforming the random pair of distributions \((\nu_{1},\nu_{2})\), follows a linear model with some \(\mathbb{B}_{0}\in\mathcal{B}\):
\[Y=\Gamma_{\mathbb{B}_{0}}(X)+E,\ \mathbb{E}[E|X]=0, \tag{13}\]
where \(E\) is a \(\Xi_{d_{2}}\)-valued random element as an error term. Note that \(\mathbb{B}_{0}\) is not necessarily unique. We can rewrite this model into an element-wise representation such that
\[Y[r,s]=\langle X,\mathbb{B}_{0}[\cdot,\cdot,r,s]\rangle+E[r,s], \quad\mathbb{E}[E[r,s]|X]=0, \tag{14}\]
for \(1\leq r\leq d_{2},2\leq s\leq d_{2}+1\). Furthermore, we impose the following assumption on the data-generating process in this model:
\[\Gamma_{\mathbb{B}_{0}}(X)\in\varphi_{\nu_{2\oplus}}\mathcal{G}( \mathbb{R}^{d_{2}})\quad\text{with probability 1}. \tag{15}\]
For summary, we consider a regression map \(\Gamma_{\mathcal{G},\mathbb{B}_{0}}\) between the Gaussian spaces \(\mathcal{G}(\mathbb{R}^{d_{1}})\) and \(\mathcal{G}(\mathbb{R}^{d_{2}})\) as
\[\Gamma_{\mathcal{G},\mathbb{B}_{0}}=\xi_{\nu_{2\oplus}}\circ \Gamma_{\mathbb{B}_{0}}\circ\varphi_{\nu_{1\oplus}}. \tag{16}\]
Note that our model satisfies \(\Gamma_{\mathcal{G},\mathbb{B}}(\nu_{1\oplus})=\nu_{2\oplus}\) for any \(\mathbb{B}\), since we have \(\varphi_{\nu_{1\oplus}}\nu_{1\oplus}=0\) and \(\xi_{\nu_{2\oplus}}(0)=\nu_{2\oplus}\). In other words, the regression map \(\Gamma_{\mathcal{G},\mathbb{B}_{0}}\) maps the Frechet mean of \(\nu_{1}\) to that of \(\nu_{2}\).
**Remark 1** (Scalar response model).: _A variant of the proposed basic model is the pairing of Gaussian distributions with scalar responses. In this case, the regression comes down to matrix-to-scalar linear regression. Let \((\nu_{1},Z)\) be a pair of random elements with a joint distribution on \(\mathcal{G}(\mathbb{R}^{d_{1}})\times\mathbb{R}\), and let \(\nu_{1\oplus}=(m_{1\oplus},\Sigma_{1\oplus})\) be the Frechet mean of \(\nu_{1}\) in \(\mathcal{G}(\mathbb{R}^{d_{1}})\). A
Gaussian distribution-to-scalar regression model is_
\[Z=\langle X,\mathbb{B}_{0}\rangle+\varepsilon,\quad\mathbb{E}[ \varepsilon|X]=0. \tag{17}\]
_Here, \(X=\varphi_{\nu_{1\oplus}}\nu_{1}\) is an element in \(\Xi_{d_{1}}\), \(\mathbb{B}_{0}\in\mathbb{R}^{d_{1}\times(d_{1}+1)}\) is the regression parameter and \(\varepsilon\) is a real-valued error term._
#### 3.2.2. Low-Rank Model
We consider the case where the coefficient tensor \(\mathbb{B}\) is assumed to have low-rank, as an extension of the basic model. The issue with the basic model (13) is that the number of elements in \(\mathbb{B}\) is \(d_{1}(d_{1}+1)d_{2}(d_{2}+1)\), which is high dimensional and far exceeds the usual sample size when \(d_{1}\) and \(d_{2}\) are not small. A natural way to handle this issue is to approximate \(\mathbb{B}\) with fewer parameters, and we employ the low-rank CP decomposition of tensors for that purpose. This approach was employed by [27] for a tensor regression model for scalar outcome, and by [13] for a tensor-on-tensor regression model.
We define the low-rank coefficient tensor. Let \(K\) be a positive integer such that \(K\leq\min\{d_{1},d_{2}\}\). Then the tensor \(\mathbb{B}\in\mathbb{R}^{d_{1}\times(d_{1}+1)\times d_{2}\times(d_{2}+1)}\) admits a rank-\(K\) decomposition (e.g., [11]), if it holds that
\[\mathbb{B}=\sum_{k=1}^{K}a_{1}^{(k)}\circ a_{2}^{(k)}\circ a_{3} ^{(k)}\circ a_{4}^{(k)}, \tag{18}\]
where \(a_{1}^{(k)}\in\mathbb{R}^{d_{1}},a_{2}^{(k)}\in\mathbb{R}^{d_{1}+1},a_{3}^{(k) }\in\mathbb{R}^{d_{2}},a_{4}^{(k)}\in\mathbb{R}^{d_{2}+1}(k=1,...,K)\) are all column vectors. For convenience, the decomposition (18) is often represented by a shorthand
\[\mathbb{B}=[\![A_{1},A_{2},A_{3},A_{4}]\!], \tag{19}\]
where \(A_{1}=[a_{1}^{(1)},...,a_{1}^{(K)}]\in\mathbb{R}^{d_{1}\times K},A_{2}=[a_{2}^{(1)},...,a_{2}^{(K)}]\in\mathbb{R}^{(d_{1}+1)\times K},A_{3}=[a_{3}^{(1)},...,a_{3}^{(K)}]\in\mathbb{R}^{d_{2}\times K},A_{4}=[a_{4}^{(1)},...,a_{4}^{(K)}]\in\mathbb{R}^{(d_{2}+1)\times K}\). The number of parameters in the decomposition (18) is \(2K(d_{1}+d_{2}+1)\), which is much smaller than the \(d_{1}(d_{1}+1)d_{2}(d_{2}+1)\) elements of \(\mathbb{B}\) when \(d_{1}\) and \(d_{2}\) are large.
Based on this decomposition, we propose a rank-\(K\) model for Gaussian distribution-to-distribution regression, in which the regression parameter \(\mathbb{B}\) in (13) admits the rank-\(K\) decomposition (18). In the rank-\(K\) model, we assume \(a_{3}^{(k)}[\![r]\!]=a_{3}^{(k)}[\![s-1]\!]\) and \(a_{4}^{(k)}[\![s]\!]=a_{4}^{(k)}[\![r+1]\!]\) for \(1\leq r\leq d_{2},2\leq s\leq d_{2}+1\) and \(1\leq k\leq K\). In other words, when \(\mathbb{B}\) is represented as \([\![A_{1},A_{2},A_{3},A_{4}]\!]\), we assume the matrices \(A_{3}\) and \(A_{4}\) have the forms
\[A_{3}=\begin{pmatrix}\alpha_{1}&\alpha_{2}&\cdots&\alpha_{K}\\ \vdots&\vdots&&\vdots\\ \alpha_{1}&\alpha_{2}&\cdots&\alpha_{K}\end{pmatrix},\quad A_{4}=\begin{pmatrix} \beta_{1}&\beta_{2}&\cdots&\beta_{K}\\ \gamma_{1}&\gamma_{2}&\cdots&\gamma_{K}\\ \vdots&\vdots&&\vdots\\ \gamma_{1}&\gamma_{2}&\cdots&\gamma_{K},\end{pmatrix}, \tag{20}\]
where \(\alpha_{k},\beta_{k},\gamma_{k},1\leq k\leq K\) are some scalars. Under this assumption, the symmetric condition in (12) holds, so that we have \(\langle A,\mathbb{B}\rangle_{2}\in\Xi_{d_{2}}\) for any \(A\in\Xi_{d_{1}}\). We denote the resulting
parameter space for the rank-\(K\) model as
\[\mathcal{B}_{\mathrm{low}}=\{\mathbb{B}=\llbracket A_{1},A_{2},A_{3},A_{4}\rrbracket\in\mathbb{R}^{d_{1}\times(d_{1}+1)\times d_{2}\times(d_{2}+1)}:A_{3}\,\text{ and }\,A_{4}\,\text{ satisfy the condition (20)}\}. \tag{21}\]
Finally, we consider the regression model (16) with \(\mathbb{B}_{0}\in\mathcal{B}_{\mathrm{low}}\). In practice, the appropriate rank \(K\) is unknown, and it can be selected via cross-validation.
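A minimal sketch of how a rank-\(K\) coefficient tensor with the constrained factors (20) can be assembled, together with a numerical check that these forms imply the symmetry condition (12). The helper names are ours and the construction is only illustrative.

```python
import numpy as np

def build_low_rank_B(A1, A2, alpha, beta, gamma, d2):
    # Assemble B = [[A1, A2, A3, A4]] as in (18)-(19), with A3 and A4 constrained
    # to the forms (20): every entry of column k of A3 equals alpha[k], and
    # column k of A4 is (beta[k], gamma[k], ..., gamma[k]).
    K = A1.shape[1]
    A3 = np.tile(alpha, (d2, 1))                      # d2 x K
    A4 = np.vstack([beta, np.tile(gamma, (d2, 1))])   # (d2 + 1) x K
    # CP construction: B[p, q, r, s] = sum_k A1[p, k] A2[q, k] A3[r, k] A4[s, k].
    return np.einsum('pk,qk,rk,sk->pqrs', A1, A2, A3, A4)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    d1, d2, K = 4, 3, 2
    A1 = rng.normal(size=(d1, K))
    A2 = rng.normal(size=(d1 + 1, K))
    alpha, beta, gamma = rng.normal(size=K), rng.normal(size=K), rng.normal(size=K)
    B = build_low_rank_B(A1, A2, alpha, beta, gamma, d2)
    # The constrained forms (20) imply the symmetry condition (12).
    ok = all(np.allclose(B[:, :, r, s], B[:, :, s - 1, r + 1])
             for r in range(d2) for s in range(1, d2 + 1))
    print(ok)
    # Factor-parameter count in (18) versus the number of entries of B.
    print(2 * K * (d1 + d2 + 1), d1 * (d1 + 1) * d2 * (d2 + 1))
```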
### Comparison with Existing Models in Terms of Generalization to Multivariate Case
For the univariate case where \(d_{1}=d_{2}=1\), regression models applying the Wasserstein metric to distribution-on-distribution regression were introduced by [4, 26, 9]. [4] and [26] transformed distributions in the Wasserstein space \(\mathcal{W}(\mathbb{R})\) to elements in its tangent space (3) by the logarithmic map (4), and thereby reduced distribution-on-distribution regression to function-on-function linear regression. Because the logarithmic map (4) is isometric in the univariate case, their methods fully utilize the geometric properties of the Wasserstein space. [9] modeled the regression operator from \(\mathcal{W}(\mathbb{R})\) to \(\mathcal{W}(\mathbb{R})\) by using the optimal transport map. This approach makes it possible to interpret the regression effect directly at the level of probability distributions through a rearrangement of probability mass.
Despite the effectiveness of these models for univariate distribution-on-distribution regression, their extension to the multivariate scenario remains non-trivial. This challenge primarily arises for two reasons. The first reason is that the explicit solution of the optimal transport problem for univariate distributions (2) is not available in the multivariate case. This brings numerical difficulties in the computation of optimal transport maps, which is required to transform distributions into unconstrained functions in the model by [4]. The derivation of optimal transport maps also becomes essential when devising estimators for the regression map within [9]'s model. The second reason is that the flatness of the Wasserstein space, that is, the isometric property of the logarithmic map (5), does not hold in the multivariate case in general. This means the transformation method by [4] lacks theoretical support for preserving the geometric properties of the Wasserstein space in the multivariate case. Moreover, the identifiability result for the regression map in the model by [9], which depends on the flatness of the Wasserstein space, is hard to generalize to the multivariate case. Another study [10] analyzes the multivariate case and reveals several theoretical properties such as the sample complexity.
We addressed these challenges by limiting the class of distributions to Gaussian distributions. In our model, we transform Gaussian distributions to unconstrained matrices via the map (11). Consequently, we simplify the regression of Gaussian distribution-on-Gaussian distribution to matrix-on-matrix linear regression. Given the explicit expression of the optimal transport map between Gaussian distributions as (7), our transformation avoids computational difficulties. Although our transformation is not isometric in general, it has certain
isometric properties as shown in Proposition 1. This guarantees that our transformation method partially utilizes the geometric properties of the Gaussian space.
### Generalization to Elliptically Symmetric Distributions
Our proposed regression models extend to scenarios where distributions \(\nu_{1}\) and \(\nu_{2}\) belong to the class of elliptically symmetric distributions, a broader category than Gaussian distributions. This is because, as shown in [8], the closed-form expression of the Wasserstein distance (6) holds if two distributions are in the same class of elliptically symmetric distributions.
We give a more rigorous description. Let \(d\geq 1\) and let \(f:[0,\infty)\to[0,\infty)\) be a measurable function that is not almost everywhere zero and satisfies
\[\int_{-\infty}^{\infty}|t|^{\ell}f(t^{2})dt<\infty,\quad\ell=d-1,d,d+1. \tag{22}\]
Given such a function \(f\), for a positive definite matrix \(A\in\mathbb{R}^{d\times d}\) and a vector \(v\in\mathbb{R}^{d}\), one can consider a density function of the form \(f_{A,v}(x)=(c_{A})^{-1}f((x-v)^{\top}A(x-v)),x\in\mathbb{R}^{d}\). Here, we define \(c_{A}=\int_{\mathbb{R}^{d}}f((x-v)^{\top}A(x-v))dx\) as the normalizing constant. Then, we can consider a class of distributions on \(\mathbb{R}^{d}\) whose elements have a density \(f_{A,v}\) for some positive definite matrix \(A\in\mathbb{R}^{d\times d}\) and vector \(v\in\mathbb{R}^{d}\). We denote such a class as \(\mathcal{P}_{f}(\mathbb{R}^{d})\), and call it as the class of elliptically symmetric distributions with function \(f\). For example, if we set \(f(t)=e^{-t/2}\), we obtain the set of Gaussian distributions with positive definite covariance matrices as \(\mathcal{P}_{f}(\mathbb{R}^{d})\). Furthermore, by setting \(f(t)=I_{[0,1]}(t)\), we obtain the set of uniform distributions on ellipsoids of the forms \(U_{A,v}=\{x\in\mathbb{R}^{d}:(x-v)^{\top}A(x-v)\leq 1\}\) for some positive definite matrix \(A\in\mathbb{R}^{d\times d}\) and vector \(v\in\mathbb{R}^{d}\).
According to Theorem 2.4 of [8], the closed forms of the Wasserstein distance (6) and optimal transport map (7) are valid for any two measures \(\mu_{1},\mu_{2}\) in the same class of elliptically symmetric distributions \(\mathcal{P}_{f}(\mathbb{R}^{d})\). Since our models rely only on the forms (6) and (7), our results can be extended to the case in which \((\nu_{1},\nu_{2})\) are \(\mathcal{P}_{f_{1}}(\mathbb{R}^{d_{1}})\times\mathcal{P}_{f_{2}}(\mathbb{R}^{d_{2}})\)-valued random elements. Note that \(f_{1},f_{2}:[0,\infty)\to[0,\infty)\) should be non-vanishing and satisfy the condition (22) for \(d=d_{1}\) and \(d=d_{2}\), respectively.
## 4. Empirical Risk Minimization Algorithms
In this section, we propose empirical risk minimization procedures for constructing a prediction model following the regression map \(\Gamma_{\mathcal{G},\mathbb{B}_{0}}\) (16) based on observed data. Specifically, we consider two cases: (i) we directly observe random distributions (Section 4.1), and (ii) we observe only samples from the random distributions (Section 4.2). We defer the estimation of the coefficient tensor \(\mathbb{B}_{0}\) itself and related topics to the Appendix.
### Algorithm with Directly Observed Distributions
Suppose that we directly observe \(n\) independent pairs of random Gaussian distributions \((\nu_{i1},\nu_{i2})\sim\mathcal{F}\) for \(i=1,...,n\).
Here, we write \(\nu_{ij}=N(\mu_{ij},\Sigma_{ij})\) for \(j\in\{1,2\}\). Firstly, based on the distributions \(\nu_{ij}(i=1,...,n;j=1,2)\), we compute the empirical Frechet means for \(j\in\{1,2\}\):
\[\widetilde{\nu}_{j\oplus}=\operatorname*{arg\,min}_{\mu_{j}\in\mathcal{G}( \mathbb{R}^{d_{j}})}\frac{1}{n}\sum_{i=1}^{n}d_{W}^{2}(\mu_{j},\nu_{ij}), \tag{23}\]
where we write \(\widetilde{\nu}_{j\oplus}=N(\widetilde{m}_{j\oplus},\widetilde{\Sigma}_{j\oplus})\). For solving optimizations in (23), we can use the steepest descent algorithm (Section 5.4.1 in [14]). Then, we transform Gaussian distributions \(\nu_{ij}\) into matrices by \(\widetilde{X}_{i}=\varphi_{\widetilde{\nu}_{1\oplus}}\nu_{i1}\) and \(\widetilde{Y}_{i}=\varphi_{\widetilde{\nu}_{2\oplus}}\nu_{i2}\). In the basic model, we solve the following least squares problem:
\[\widetilde{\mathbb{B}}\in\operatorname*{arg\,min}_{\mathbb{B}\in\mathcal{B}} \sum_{i=1}^{n}\|\widetilde{Y}_{i}-\Gamma_{\mathbb{B}}(\widetilde{X}_{i})\|_{( \widetilde{m}_{2\oplus},\widetilde{\Sigma}_{2\oplus})}^{2},\]
where \(\mathcal{B}\) is the parameter space defined by (12), and \(\|\cdot\|_{(\widetilde{m}_{2\oplus},\widetilde{\Sigma}_{2\oplus})}\) denotes the norm defined by (9) for \(m_{*}=\widetilde{m}_{2\oplus}\) and \(\Sigma_{*}=\widetilde{\Sigma}_{2\oplus}\). In the rank-\(K\) model, we solve the following least squares problem:
\[\widetilde{\mathbb{B}}\in\operatorname*{arg\,min}_{\mathbb{B}\in\mathcal{B}_{ \text{low}}}\sum_{i=1}^{n}\|\widetilde{Y}_{i}-\Gamma_{\mathbb{B}}(\widetilde{X }_{i})\|_{(\widetilde{m}_{2\oplus},\widetilde{\Sigma}_{2\oplus})}^{2}, \tag{24}\]
where \(\mathcal{B}_{\text{low}}\) is the parameter space defined by (21). In either case, we use \(\Gamma_{\mathcal{G},\widetilde{\mathbb{B}}}=\xi_{\widetilde{\nu}_{2\oplus}}\circ\Gamma_{\widetilde{\mathbb{B}}}\circ\varphi_{\widetilde{\nu}_{1\oplus}}\) as the map for prediction.
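For the empirical Frechet means in (23), the sketch below uses the standard fixed-point iteration for Gaussian Wasserstein barycenters, in which the mean vector is the Euclidean average of the means; this is a stand-in of ours for the steepest descent algorithm cited from [14], not the paper's prescribed routine.

```python
import numpy as np

def sqrtm_sym(A):
    # Symmetric positive semidefinite matrix square root.
    w, U = np.linalg.eigh(A)
    return (U * np.sqrt(np.clip(w, 0.0, None))) @ U.T

def gaussian_frechet_mean(means, covs, n_iter=100, tol=1e-10):
    # Empirical Frechet mean of Gaussians in the Wasserstein geometry, cf. (23).
    # Mean vector: Euclidean average.  Covariance: fixed-point iteration
    # S <- S^{-1/2} ( mean_i (S^{1/2} S_i S^{1/2})^{1/2} )^2 S^{-1/2}.
    m_bar = np.mean(means, axis=0)
    S = np.mean(covs, axis=0)                # any positive definite initializer
    for _ in range(n_iter):
        R = sqrtm_sym(S)
        R_inv = np.linalg.inv(R)
        T = np.mean([sqrtm_sym(R @ Si @ R) for Si in covs], axis=0)
        S_new = R_inv @ T @ T @ R_inv
        if np.linalg.norm(S_new - S) < tol:
            return m_bar, S_new
        S = S_new
    return m_bar, S

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    d, n = 2, 50
    means = rng.normal(size=(n, d))
    covs = np.stack([a @ a.T + 0.1 * np.eye(d)
                     for a in rng.normal(size=(n, d, d))])
    m_oplus, S_oplus = gaussian_frechet_mean(means, covs)
    print(m_oplus)
    print(S_oplus)
```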
We propose an algorithm for solving the optimization problem in (24). We observe that although the tensor \(\mathbb{B}\) with rank \(K\)-decomposition (19) is not linear in \((A_{1},A_{2},A_{3},A_{4})\) jointly, it is linear in \(A_{c}\) individually for \(c=1,2,3,4\). This observation suggests a so-called block relaxation algorithm ([5]), which alternately updates \(A_{c},c=1,2,3,4\), while keeping the other matrices fixed. This algorithm is employed in [27] for parameter estimation in a tensor regression model. Recalling that the matrices \(A_{3},A_{4}\) have the forms (20) so that \(\mathbb{B}\in\mathcal{B}_{\text{low}}\), we denote the objective function in the optimization problem in (24) as
\[\ell(A_{1},A_{2},\alpha,\beta,\gamma)=\sum_{i=1}^{n}\|\widetilde{Y}_{i}-\Gamma _{\mathbb{B}}(\widetilde{X}_{i})\|_{(\widetilde{m}_{2\oplus},\widetilde{ \Sigma}_{2\oplus})}^{2},\]
where \(\alpha=(\alpha_{1},...,\alpha_{K})\in\mathbb{R}^{K},\beta=(\beta_{1},..., \beta_{K})\in\mathbb{R}^{K}\) and \(\gamma=(\gamma_{1},...,\gamma_{K})\in\mathbb{R}^{K}\). Then the procedure for solving the optimization problem in (24) is summarized in Algorithm 1.
As the block relaxation algorithm monotonically decreases the objective function ([5]), the convergence of the objective values \(\ell(A_{1}^{(t)},A_{2}^{(t)},\alpha^{(t)},\beta^{(t)},\gamma^{(t)})\) is guaranteed, since the function \(\ell\) is a sum of squared norms and hence bounded from below.
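Since Algorithm 1 itself is not reproduced here, the following sketch conveys the block relaxation idea under our own conventions: the fit is linear in each of the blocks \(A_{1}\), \(A_{2}\), \(\alpha\), and \((\beta,\gamma)\) separately, so each update reduces to an ordinary least squares problem, which we solve through a numerically assembled Jacobian. The Euclideanization of the objective uses \(\|(a,V)\|_{(m,\Sigma)}^{2}=\|a+Vm\|^{2}+\operatorname{tr}(V\Sigma V)\) for symmetric \(V\). All names are ours and the code is only a sketch of the scheme, not the paper's implementation.

```python
import numpy as np

def sqrtm_sym(A):
    # Symmetric positive semidefinite matrix square root.
    w, U = np.linalg.eigh(A)
    return (U * np.sqrt(np.clip(w, 0.0, None))) @ U.T

def build_B(A1, A2, alpha, beta, gamma, d2):
    # Rank-K tensor [[A1, A2, A3, A4]] with A3, A4 constrained as in (20).
    A3 = np.tile(alpha, (d2, 1))
    A4 = np.vstack([beta, np.tile(gamma, (d2, 1))])
    return np.einsum('pk,qk,rk,sk->pqrs', A1, A2, A3, A4)

def residuals(Xs, Ys, B, m2, S2_half):
    # Euclideanization of ||(a, V)||^2_{(m, Sigma)} = ||a + V m||^2 + tr(V Sigma V),
    # stacked over all observations.
    out = []
    for X, Y in zip(Xs, Ys):
        E = Y - np.tensordot(X, B, axes=([0, 1], [0, 1]))
        a, V = E[:, 0], E[:, 1:]
        out.append(np.concatenate([a + V @ m2, (S2_half @ V).ravel()]))
    return np.concatenate(out)

def block_relaxation(Xs, Ys, d2, K, m2, S2, n_sweeps=20, seed=0):
    # Alternately update A1, A2, alpha and (beta, gamma); each sub-problem is an
    # ordinary least squares problem in that block.
    rng = np.random.default_rng(seed)
    d1 = Xs[0].shape[0]
    A1 = rng.normal(size=(d1, K))
    A2 = rng.normal(size=(d1 + 1, K))
    alpha = rng.normal(size=K)
    beta = rng.normal(size=K)
    gamma = rng.normal(size=K)
    S2_half = sqrtm_sym(S2)

    def get(name):
        return {'A1': A1.ravel(), 'A2': A2.ravel(), 'alpha': alpha,
                'bg': np.concatenate([beta, gamma])}[name]

    def put(name, v):
        nonlocal A1, A2, alpha, beta, gamma
        if name == 'A1':
            A1 = v.reshape(d1, K)
        elif name == 'A2':
            A2 = v.reshape(d1 + 1, K)
        elif name == 'alpha':
            alpha = v.copy()
        else:
            beta, gamma = v[:K].copy(), v[K:].copy()

    def resid():
        return residuals(Xs, Ys, build_B(A1, A2, alpha, beta, gamma, d2), m2, S2_half)

    for _ in range(n_sweeps):
        for name in ['A1', 'A2', 'alpha', 'bg']:
            v0 = get(name)
            put(name, np.zeros_like(v0))
            r0 = resid()
            J = np.empty((r0.size, v0.size))
            for j in range(v0.size):          # residual is affine in this block
                e = np.zeros_like(v0)
                e[j] = 1.0
                put(name, e)
                J[:, j] = resid() - r0
            v_new, *_ = np.linalg.lstsq(J, -r0, rcond=None)
            put(name, v_new)
    return A1, A2, alpha, beta, gamma

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    d1, d2, K, n = 3, 2, 2, 40
    Xs = [rng.normal(size=(d1, d1 + 1)) for _ in range(n)]
    B_true = build_B(rng.normal(size=(d1, K)), rng.normal(size=(d1 + 1, K)),
                     rng.normal(size=K), rng.normal(size=K), rng.normal(size=K), d2)
    Ys = [np.tensordot(X, B_true, axes=([0, 1], [0, 1]))
          + 0.01 * rng.normal(size=(d2, d2 + 1)) for X in Xs]
    block_relaxation(Xs, Ys, d2, K, np.zeros(d2), np.eye(d2))
```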
### Algorithm with Samples of Not Directly Observed Distributions
In this section, suppose that we observe only samples from the random Gaussians \((\nu_{i1},\nu_{i2})\), instead of the direct observation on \((\nu_{i1},\nu_{i2})\) in Section 4.1. Rigorously, we assume the following two-step data generating process. First, \(n\) independent pairs of Gaussian distributions \((\nu_{i1},\nu_{i2})\sim\mathcal{F}\)\((i=1,...,n)\) are generated. Next, the \(N\) sample vectors \(W_{ijm}\sim\nu_{ij}(m=1,...,N)\)
are generated from the distributions, then we observe the sample vectors. For each fixed \((i,j)\), the \(W_{ijm}\) are independent and identically distributed.
At the beginning, we develop a proxy for each Gaussian distribution \(\nu_{ij}=N(\mu_{ij},\Sigma_{ij})\). For \(i=1,...,n\) and \(j\in\{1,2\}\), we consider the empirical mean and covariance of \(W_{ijm}\) as
\[\widehat{\mu}_{ij}=\frac{1}{N}\sum_{m=1}^{N}W_{ijm}\quad\text{and} \quad\widehat{\Sigma}_{ij}=\frac{1}{N}\sum_{m=1}^{N}(W_{ijm}-\widehat{\mu}_{ ij})(W_{ijm}-\widehat{\mu}_{ij})^{\top},\]
for estimators of \(\mu_{ij}\) and \(\Sigma_{ij}\), respectively. We define \(\widehat{\nu}_{ij}=N(\widehat{\mu}_{ij},\widehat{\Sigma}_{ij})\) and use it for a proxy of \(\nu_{ij}=N(\mu_{ij},\Sigma_{ij})\). Based on these proxies, we compute the empirical Frechet means for \(j\in\{1,2\}\):
\[\widehat{\nu}_{j\oplus}=\operatorname*{arg\,min}_{\mu_{j}\in \mathcal{G}(\mathbb{R}^{d_{j}})}\frac{1}{n}\sum_{i=1}^{n}d_{W}^{2}(\mu_{j}, \widehat{\nu}_{ij}),\]
where we write \(\widehat{\nu}_{1\oplus}=N(\widehat{m}_{1\oplus},\widehat{\Sigma}_{1\oplus}),\widehat{\nu}_{2\oplus}=N(\widehat{m}_{2\oplus},\widehat{\Sigma}_{2\oplus})\). As with the directly observed case, we can use the steepest descent algorithm for solving this optimization. Then, we transform Gaussian distributions \(\widehat{\nu}_{ij}\) into matrices by \(\widehat{X}_{i}=\varphi_{\widehat{\nu}_{1\oplus}}\widehat{\nu}_{i1}\) and \(\widehat{Y}_{i}=\varphi_{\widehat{\nu}_{2\oplus}}\widehat{\nu}_{i2}\). In the basic model, we solve the following least squares problem:
\[\widehat{\mathbb{B}}\in\operatorname*{arg\,min}_{\mathbb{B}\in \mathcal{B}}\sum_{i=1}^{n}\|\widehat{Y}_{i}-\Gamma_{\mathbb{B}}(\widehat{X}_{i })\|_{(\widehat{m}_{2\oplus},\widehat{\Sigma}_{2\oplus})}^{2},\]
where \(\|\cdot\|_{(\widehat{m}_{2\oplus},\widehat{\Sigma}_{2\oplus})}\) denotes the norm defined by (9) for \(m_{*}=\widehat{m}_{2\oplus}\) and \(\Sigma_{*}=\widehat{\Sigma}_{2\oplus}\). In the rank-\(K\) model, we solve the following least squares problem:
\[\widehat{\mathbb{B}}\in\operatorname*{arg\,min}_{\mathbb{B}\in \mathcal{B}_{\text{low}}}\sum_{i=1}^{n}\|\widehat{Y}_{i}-\Gamma_{\mathbb{B}}( \widehat{X}_{i})\|_{(\widehat{m}_{2\oplus},\widehat{\Sigma}_{2\oplus})}^{2}. \tag{25}\]
In either case, we use \(\Gamma_{\mathcal{G},\widehat{\mathbb{B}}}=\xi_{\widehat{\nu}_{2\oplus}}\circ\Gamma_{\widehat{\mathbb{B}}}\circ\varphi_{\widehat{\nu}_{1\oplus}}\) as the prediction map. As with the directly observed case, we can use the block relaxation algorithm for solving the optimization (25) in the same manner as Algorithm 1.
## 5. Analysis of In-Sample Prediction Error
In this section, we analyze the prediction error of the proposed models and algorithms. We especially focus on the in-sample prediction error measured on the observations, which is naturally extended to the out-of-sample prediction error. Here, suppose that we directly observe the pairs of Gaussian distributions \((\nu_{1i},\nu_{2i}),i=1,...,n\) from the model (13) as in the case of Section 4.1. For simplicity, we assume that the true values of the Frechet means \(\nu_{1\oplus}\) and \(\nu_{2\oplus}\) are known. In addition, we treat the predictors \(\{\nu_{1i}\}_{i=1}^{n}\) as fixed in this analysis. Based on the sample \((\nu_{1i},\nu_{2i}),i=1,...,n\), we solve the following least squares problem for \(\widetilde{\mathcal{B}}=\mathcal{B}\) or \(\widetilde{\mathcal{B}}=\mathcal{B}_{\text{low}}\):
\[\widetilde{\mathbb{B}}\in\operatorname*{arg\,min}_{\mathbb{B}\in\widetilde{\mathcal{B}}}\sum_{i=1}^{n}\|Y_{i}-\Gamma_{\mathbb{B}}(X_{i})\|_{(m_{2\oplus},\Sigma_{2\oplus})}^{2}, \tag{26}\]
where \(X_{i}=\varphi_{\nu_{1\oplus}}\nu_{1i}\) and \(Y_{i}=\varphi_{\nu_{2\oplus}}\nu_{2i}\). Then, we define the prediction map as \(\Gamma_{\mathcal{G},\widetilde{\mathbb{B}}}=\xi_{\nu_{2\oplus}}\circ\Gamma_{\widetilde{\mathbb{B}}}\circ\varphi_{\nu_{1\oplus}}\). Moreover, under the assumption that \(\Gamma_{\widetilde{\mathbb{B}}}(X_{i})\in\varphi_{\nu_{2\oplus}}\mathcal{G}(\mathbb{R}^{d_{2}})\) \((i=1,...,n)\), we define the in-sample prediction error with the Wasserstein metric in terms of the empirical measure by
\[\mathcal{R}_{n}(\Gamma_{\mathcal{G},\widetilde{\mathbb{B}}},\Gamma_{\mathcal{G},\mathbb{B}_{0}})=\sqrt{\frac{1}{n}\sum_{i=1}^{n}d_{W}^{2}(\Gamma_{\mathcal{G},\widetilde{\mathbb{B}}}(\nu_{1i}),\Gamma_{\mathcal{G},\mathbb{B}_{0}}(\nu_{1i}))},\]
which is an analogue of the empirical \(L^{2}\)-norm. We also assume that the \(\Xi_{d_{2}}\)-valued random element \(E\) in the linear model (13) is Gaussian, that is, for any \(A\in\Xi_{d_{2}}\), \((E,A)_{(m_{*},\Sigma_{*})}\) is a real Gaussian random variable.
In the following, we measure the in-sample prediction error of the basic model in terms of the Wasserstein distance. Note that this is unique to our distribution-on-distribution regression problem, and deriving the convergence rate of in-sample prediction error under this setting is not a trivial problem.
**Theorem 1** (Basic Model).: _Suppose that \((\nu_{1i},\nu_{2i})(i=1,...,n)\) are pairs of Gaussian distributions generated from the basic model (13), and that error matrices \(E_{i}\in\Xi_{d_{2}}\) are Gaussian with mean \(0\) and covariance with trace \(1\), that is, \(\mathbb{E}[E_{i}]=0\) and \(\mathbb{E}[\|E_{i}\|_{(m_{2\oplus},\Sigma_{2\oplus})}^{2}]=1\). Let \(\widetilde{\mathbb{B}}\in\mathcal{B}\) be a solution of the optimization (26), and assume that \(\Gamma_{\widetilde{\mathbb{B}}}(X_{i})\in\varphi_{\nu_{2\oplus}}\mathcal{G}(\mathbb{R}^{d_{2}})\) holds for \(i=1,...,n\). Then, we have_
\[\mathcal{R}_{n}(\Gamma_{\mathcal{G},\widetilde{\mathbb{B}}},\Gamma_{\mathcal{G},\mathbb{B}_{0}})=O_{P}(d_{1}d_{2}/\sqrt{n}),\]
_as \(n\to\infty\)._
This result shows that our method achieves optimal convergence rates. That is, the convergence rate in Theorem 1 achieves the parametric rate \(n^{-1/2}\) with respect to the sample size \(n\). This rate comes from our parametric assumption of Gaussianity on the distributions. In contrast, existing distribution-on-distribution regression models do not impose parametric assumptions, which results in slower convergence rates of estimators for regression parameters. For example, in the regression model proposed by [4], an estimator for the regression operator achieves the same rate as the minimax rate for function-to-function linear regression in a certain case (Theorem 1 in [4]), which is generally slower than the parametric rate. In the regression model proposed by [9], an estimator for the regression map achieves the rate \(n^{-1/3}\) (Theorem 3.8 in [9]), which is slower than the parametric rate.
Next, we study the in-sample prediction error of the rank-\(K\) model. This analysis provides an effect of the number of ranks \(K\), in addition to the results of the basic model in Theorem 1.
**Theorem 2** (Rank-\(K\) Model).: _Suppose \((\nu_{1i},\nu_{2i})(i=1,...,n)\) are pairs of Gaussian distributions generated from the rank-\(K\) model defined in Section 3.2.2, and that error matrices \(E_{i}\in\Xi_{d_{2}}\) are Gaussian with mean \(0\) and covariance with trace \(1\). Let \(\widetilde{\mathbb{B}}\in\mathcal{B}_{\text{low}}\) be a solution of the optimization (26), and assume that \(\Gamma_{\widetilde{\mathbb{B}}}(X_{i})\in\varphi_{\nu_{2\oplus}}\mathcal{G}(\mathbb{R}^{d_{2}})\) holds for \(i=1,...,n\). Then, we have_
\[\mathcal{R}_{n}(\Gamma_{\mathcal{G},\widetilde{\mathbb{B}}},\Gamma_{\mathcal{ G},\mathbb{B}_{0}})=O_{P}(\sqrt{Kd_{1}}/\sqrt{n}),\]
_as \(n\to\infty\)._
Theorem 2 states an advantage of the low-rank model, in addition to the result that the model achieves the optimal parametric rate. The constant part of the rate is \(\sqrt{Kd_{1}}\) in the rank-\(K\) model while \(d_{1}d_{2}\) in the basic model. This implies that when the dimensions of distributions \(\nu_{1},\nu_{2}\) are large, the regression map in the rank-\(K\) model is better approximated than that in the basic model. In the rate of the rank-\(K\) model, the dimension of output distribution \(d_{2}\) does not appear in the constant part. This is due to the specific forms of matrices (20) imposed on tensors in \(\mathcal{B}_{\text{low}}\).
We add some discussion on the observations of distributions. Recall that we assume the true Frechet means \(\nu_{1\oplus},\nu_{2\oplus}\) are known, and distributions \((\nu_{1i},\nu_{2i})\) are directly observed. Relaxing these assumptions presents additional challenges for theoretical analysis. Specifically, if we estimate the Frechet mean \(\nu_{2\oplus}\) with the empirical Frechet mean \(\widetilde{\nu}_{2\oplus}\), we solve the least squares problem (26) by replacing \(Y_{i}=\varphi_{\nu_{2\oplus}}\nu_{2i}\) with \(\widetilde{Y}_{i}=\varphi_{\widetilde{\nu}_{2\oplus}}\nu_{2i}\). Since \(\widetilde{Y}_{1},...,\widetilde{Y}_{n}\) are not independent, the standard theory for analyzing the error of empirical risk minimization is not directly applicable in this setting. Moreover, if distributions are not directly observed and only samples from them are available, we need to tackle the discrepancy between the estimated distributions based on the sample and the actual distributions in the analysis. As for the estimation of the Frechet mean, [12] derive the rates of convergence of the empirical Frechet mean on the Gaussian space (Corollary 17 in [12]), which may be helpful for further theoretical analysis.
Finally, we prove the consistency and asymptotic normality of an estimator for identified regression parameters in the Appendix.
## 6. Simulation Studies
In this section, we investigate the finite-sample performance of the proposed methods together with another method through simulation studies.
### Setting
Setting \(d_{1}=d_{2}=d\), we generate pairs of Gaussian distributions \(\{(\nu_{1i},\nu_{2i})\}_{i=1}^{n}\) from the basic model as follows. Firstly, for each \(i=1,...,n\), we generate independent random variables \(G_{i}^{(1)},...,G_{i}^{(d)}\sim N(0,1)\), \(H_{i}^{(1)},...,H_{i}^{(d)}\sim Exp(1)\) and set a matrix \(X_{i}\in S_{d}\) by
\[X_{i}=\begin{pmatrix}G_{i}^{(1)}&H_{i}^{(1)}&&O\\ \vdots&&\ddots&\\ G_{i}^{(d)}&O&&H_{i}^{(d)}\end{pmatrix}.\]
Here, \(Exp(1)\) is the exponential distribution with the rate parameter \(1\). Then we obtain a Gaussian distribution \(\nu_{1i}=\xi_{\nu_{1\oplus}}X_{i}\in\mathcal{G}(\mathbb{R}^{d})\), where \(\nu_{1\oplus}\) is the \(d\)-dimensional standard Gaussian distribution. Note that under this setting, the random distribution \(\nu_{1i}\) has the Frechet mean \(\nu_{1\oplus}\). Next, we set the coefficient tensor \(\mathbb{B}\in\mathbb{R}^{d\times(d+1)\times d\times(d+1)}\) as
\[\mathbb{B}[\cdot,\cdot,r,1]=\begin{pmatrix}1&0&\cdots&0\\ \vdots&&\ddots&\\ 1&0&\cdots&0\end{pmatrix},\quad\mathbb{B}[\cdot,\cdot,r,r+1]=\begin{pmatrix}0& (2d)^{-1}&&O\\ \vdots&&\ddots&\\ 0&O&&(2d)^{-1}\end{pmatrix},\]
for \(1\leq r\leq d\), and set the other elements to be zero. Additionally, for each \(i=1,...,n\), we generate independent random variables \(U_{i}^{(1)},...,U_{i}^{(d)}\sim N(0,1),V_{i}^{(1)},...,V_{i}^{(d)}\sim U(-1/2,1/2)\) and set the error matrix \(E_{i}\in S_{d}\) by
\[E_{i}=\begin{pmatrix}U_{i}^{(1)}&V_{i}^{(1)}&&O\\ \vdots&&\ddots&\\ U_{i}^{(d)}&O&&V_{i}^{(d)}\end{pmatrix}.\]
Here, \(U(-1/2,1/2)\) is the uniform distribution on the interval \((-1/2,1/2)\). We set \(Y_{i}=\langle X_{i},\mathbb{B}\rangle_{2}+E_{i}\) and obtain a response Gaussian distribution \(\nu_{2i}=\xi_{\nu_{2\oplus}}Y_{i}\in\mathcal{G}(\mathbb{R}^{d})\), where \(\nu_{2\oplus}\) is the \(d\)-dimensional standard Gaussian distribution. Note that under this setting, the condition (15) holds and the random distribution \(\nu_{2i}\) has the Frechet mean \(\nu_{2\oplus}\). From the above procedure, we have obtained pairs of Gaussian distributions \(\{(\nu_{1i},\nu_{2i})\}_{i=1}^{n}\). Finally, we draw \(N\) independent sample vectors from each of the distributions \(\{\nu_{1i}\}_{i=1}^{n}\) and \(\{\nu_{2i}\}_{i=1}^{n}\).
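The data-generating scheme above can be reproduced directly. The sketch below (with our own function names) builds the coefficient tensor \(\mathbb{B}\), the matrices \(X_{i}\) and \(E_{i}\), and returns the Gaussian parameters of \(\nu_{1i}\) and \(\nu_{2i}\), assuming \(\langle\cdot,\cdot\rangle_{2}\) is the contraction of the first two modes and the reference measures are the \(d\)-dimensional standard Gaussians.

```python
import numpy as np

def xi_standard(M):
    # xi with the d-dimensional standard Gaussian as reference:
    # (a, V) -> N(a, (V + I)^2), assuming V is symmetric (here it is diagonal).
    a, V = M[:, 0], M[:, 1:]
    T = V + np.eye(len(a))
    return a, T @ T

def simulate_pairs(n, d, rng):
    # Coefficient tensor B: B[., ., r, 1] is the matrix whose first column is all
    # ones, and B[., ., r, r+1] carries (2d)^{-1} on the diagonal of its last d
    # columns (1-based indexing as in the text; the code below is 0-based).
    B = np.zeros((d, d + 1, d, d + 1))
    for r in range(d):
        B[:, 0, r, 0] = 1.0
        B[:, 1:, r, r + 1] = np.eye(d) / (2.0 * d)
    pairs = []
    for _ in range(n):
        X = np.zeros((d, d + 1))
        X[:, 0] = rng.standard_normal(d)                  # G_i^(1), ..., G_i^(d)
        X[:, 1:] = np.diag(rng.exponential(1.0, size=d))  # H_i^(1), ..., H_i^(d)
        E = np.zeros((d, d + 1))
        E[:, 0] = rng.standard_normal(d)                  # U_i^(1), ..., U_i^(d)
        E[:, 1:] = np.diag(rng.uniform(-0.5, 0.5, size=d))
        Y = np.tensordot(X, B, axes=([0, 1], [0, 1])) + E
        pairs.append((xi_standard(X), xi_standard(Y)))
    return pairs, B

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pairs, B = simulate_pairs(n=20, d=2, rng=rng)
    (m1, S1), (m2, S2) = pairs[0]
    print(m1, S1, m2, S2, sep="\n")
```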
### Methods and Performance Criterion
As an alternative approach, we consider the following model between \(\nu_{1i}\) and \(\nu_{2i}\):
\[W_{i}=\langle Z_{i},\mathbb{A}_{0}\rangle_{2}+E_{i},\quad\mathbb{E}[E_{i}|Z_{i }]=0. \tag{27}\]
Here, \(Z_{i}=(m_{1i},\Sigma_{1i})\in S_{d_{1}}\) and \(W_{i}=(m_{2i},\Sigma_{2i})\in S_{d_{2}}\) are the matrices obtained from Gaussian distributions \(\nu_{1i}=N(m_{1i},\Sigma_{1i})\) and \(\nu_{2i}=N(m_{2i},\Sigma_{2i})\), respectively. \(\mathbb{A}_{0}\in\mathcal{B}\) is the regression parameter and \(E_{i}\in S_{d}\) is the error matrix in this model. Note that this alternative model
does not consider the Wasserstein metric. For the proposed models, we construct estimators \(\widehat{\mathbb{B}}\) as described in Section 4.2. For the alternative model (27), we construct an estimator by solving the least square problem
\[\widehat{\mathbb{A}}\in\operatorname*{arg\,min}_{\mathbb{A}\in\mathcal{B}}\sum_{ i=1}^{n}\|\widehat{W}_{i}-\langle\widehat{Z}_{i},\mathbb{A}\rangle_{2}\|_{F}^{2},\]
where \(\widehat{Z}_{i}=(\widehat{m}_{1i},\widehat{\Sigma}_{1i})\) and \(\widehat{W}_{i}=(\widehat{m}_{2i},\widehat{\Sigma}_{2i})\).
To investigate the performance of the proposed and alternative methods, following simulations in [4], we generate \(200\) new predictors \(\{\nu_{1i}\}_{i=n+1}^{n+200}\) and compute the out-of-sample average Wasserstein discrepancy (AWD). Denoting the true response distributions by \(\nu_{2i}^{*}=\xi_{\nu_{1\oplus}}Y_{i}^{*}\) with \(Y_{i}^{*}=\langle X_{i},\mathbb{B}_{0}\rangle_{2}\), and the fitted response distributions by \(\nu_{2i}^{\#}\), the out-of-sample AWD is given by
\[\text{AWD}=\frac{1}{200}\sum_{i=n+1}^{n+200}d_{W}(\nu_{2i}^{*},\nu_{2i}^{\#}). \tag{28}\]
In the proposed model, when the fit of the response in the space \(\Xi_{d_{2}}\) does not fall in the range of the map \(\varphi_{\widehat{\nu}_{2\oplus}}\), that is,

\[\Gamma_{\widehat{\mathbb{B}}}(X_{i})\notin\varphi_{\widehat{\nu}_{2\oplus}}\mathcal{G}(\mathbb{R}^{d_{2}}), \tag{29}\]
we need to modify the fit to calculate the fitted response distribution. To handle this problem, we use a boundary projection method similar to the one proposed by [4]. Specifically, for \(d\geq 1\), let \(g_{d}:\mathbb{R}^{d\times(d+1)}\to\mathbb{R}^{d\times d}\) be the map such that \(g_{d}((a,V))=V\) for \((a,V)\in\mathbb{R}^{d\times(d+1)}\). If the event (29) happens, we calculate a constant \(\eta_{i}\) such that
\[\eta_{i}=\max\{\eta\in[0,1]:\eta(g_{d_{2}}\circ\Gamma_{\widehat{\mathbb{B}}}(X _{i}))+I_{d_{2}}\,\text{ is positive semidefinite}\},\]
and update the original fit by \(\eta_{i}\Gamma_{\widehat{\mathbb{B}}}(X_{i})\). Conceptually, we update the original fit by a projection onto the boundary of \(\varphi_{\widehat{\nu}_{2\oplus}}\mathcal{G}(\mathbb{R}^{d_{2}})\) along the line segment between the origin \(0\) and the fit \(\Gamma_{\widehat{\mathbb{B}}}(X_{i})\). In the alternative method, if \(g_{d_{2}}(\langle X_{i},\widehat{\mathbb{A}}\rangle_{2})\) is not positive semidefinite, we update \(g_{d_{2}}(\langle X_{i},\widehat{\mathbb{A}}\rangle_{2})\) by \(\operatorname*{arg\,min}_{C\in\operatorname{Sym}^{*}(d_{2})}\|C-g_{d_{2}}(\langle X_{i},\widehat{\mathbb{A}}\rangle_{2})\|_{F}\).
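The boundary projection admits a simple closed form: for a symmetric matrix \(V\), the largest \(\eta\in[0,1]\) with \(\eta V+I\) positive semidefinite equals \(1\) if \(\lambda_{\min}(V)\geq-1\) and \(-1/\lambda_{\min}(V)\) otherwise. The sketch below implements this together with the AWD (28); the function names are ours.

```python
import numpy as np

def sqrtm_sym(A):
    w, U = np.linalg.eigh(A)
    return (U * np.sqrt(np.clip(w, 0.0, None))) @ U.T

def w2_gauss(m1, S1, m2, S2):
    # Closed-form Wasserstein distance between Gaussians, used inside the AWD (28).
    r2 = sqrtm_sym(S2)
    d2 = np.sum((m1 - m2) ** 2) + np.trace(S1) + np.trace(S2) \
         - 2.0 * np.trace(sqrtm_sym(r2 @ S1 @ r2))
    return np.sqrt(max(d2, 0.0))

def awd(true_params, fitted_params):
    # Average Wasserstein discrepancy over (mean, covariance) pairs, cf. (28).
    return np.mean([w2_gauss(m1, S1, m2, S2)
                    for (m1, S1), (m2, S2) in zip(true_params, fitted_params)])

def boundary_eta(V):
    # Largest eta in [0, 1] such that eta * V + I is positive semidefinite,
    # used when an out-of-range fit (29) must be projected back.
    lam_min = np.linalg.eigvalsh(V).min()
    return 1.0 if lam_min >= -1.0 else -1.0 / lam_min

if __name__ == "__main__":
    # A fit whose V part has an eigenvalue below -1 is shrunk toward the origin.
    V = np.diag([-1.6, 0.3])
    eta = boundary_eta(V)
    print(eta, np.linalg.eigvalsh(eta * V + np.eye(2)).min() >= -1e-12)
```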
### Results
Firstly, we set \(d=2\) and consider four scenarios with \(n\in\{20,200\}\) and \(N\in\{50,500\}\). We simulate \(500\) runs for each \((n,N)\) pair, and for each Monte Carlo run, we compute the AWD (28) based on \(200\) new predictors. The results of the proposed and alternative methods are summarized in the boxplots of Figure 2. In all four scenarios, the proposed method outperforms the alternative method. This result comes from the fact that the proposed method takes into account the geometry of the Wasserstein metric, while the alternative method does not. In this setting, the event (29) seldom happened even when the number of distributions \(n\) is small.
Next, we set \(d=6,n=200,N=500\) and fit the proposed and alternative models whose regression tensors have rank \(K\in\{2,3,4\}\). As with the previous experiment, we simulate \(500\) runs, and for each Monte Carlo run, we compute the AWD (28) based on \(200\) new predictors.
The results are summarized in the boxplots of Figure 3. In all cases, the proposed method outperforms the alternative method. In this setting, event (29) happened more frequently than in the previous experiment.
Figure 2. Boxplots of the out-of-sample AWDs defined as (28) for the four scenarios with \(n\in\{20,200\}\) and \(N\in\{50,500\}\). ”proposed” denotes the proposed method and ”alternative” denotes the alternative method. The number in brackets ”\([\ ]\)” below the boxplots for the proposed method indicates in how many runs event (29) happened and boundary projection was needed.

Finally, to see the performance of the methods under model misspecification, we generate pairs of multivariate \(t\)-distributions \(\{(t_{1i},t_{2i})\}_{i=1}^{n}\) and fit the Gaussian-on-Gaussian regression models. Specifically, we first generate pairs of Gaussian distributions \(\{(\nu_{1i},\nu_{2i})\}_{i=1}^{n}\) from the basic model as described in Section 3. Denoting these Gaussian distributions as \(\nu_{1i}=N(m_{1i},\Sigma_{1i}),\nu_{2i}=N(m_{2i},\Sigma_{2i})\), we set multivariate \(t\)-distributions as \(t_{1i}=t_{\ell}(m_{1i},\Sigma_{1i}),t_{2i}=t_{\ell}(m_{2i},\Sigma_{2i})\). Here, \(t_{\ell}(m,\Sigma)\) denotes the multivariate \(t\)-distribution with location \(m\), scale matrix \(\Sigma\) and \(\ell\) degrees of freedom. We draw i.i.d. observations of size \(N\) from each of the distributions \(\{t_{1i}\}_{i=1}^{n}\) and \(\{t_{2i}\}_{i=1}^{n}\), and construct estimators for the proposed and alternative models, respectively. Finally, we generate 200 new predictors \(\{t_{1i}\}_{i=n+1}^{n+200}\) and calculate the out-of-sample AWD \(200^{-1}\sum_{i=n+1}^{n+200}d_{W}(t_{2i}^{*},\nu_{2i}^{\#})\). Here, \(t_{2i}^{*}=t_{\ell}(m_{2i}^{*},\Sigma_{2i}^{*})\) is the true response \(t\)-distribution whose location and scale are given by \(N(m_{2i}^{*},\Sigma_{2i}^{*})=\xi_{\nu_{1\oplus}}Y_{i}^{*}\) with \(Y_{i}^{*}=\langle X_{i},\mathbb{B}_{0}\rangle_{2}\). \(\nu_{2i}^{\#}\) is the fitted response Gaussian distribution. We set \(d=2,n=200,N=500\) and consider three scenarios with the degrees of freedom \(\ell\in\{5,10,15\}\). As with the previous experiments, we simulate 500 runs, and for each Monte Carlo run, we compute the AWD (28) based on 200 new predictors. The results of the proposed and alternative methods are summarized in the boxplots of Figure 4. In all three scenarios, the proposed method outperforms the alternative method. In addition, the prediction performance improves as the degrees of freedom increase. This comes from the fact that as the degrees of freedom increase, the \(t\)-distribution becomes closer to the Gaussian distribution, and thus there is less model misspecification.
Figure 3. Boxplots of the out-of-sample AWDs defined as (28) for the low-rank methods with rank \(K\in\{2,3,4\}\). ”proposed” denotes the proposed method and ”alternative” denotes the alternative method. The number in brackets ”[ ]” below the boxplots for the proposed method indicates in how many runs event (29) happened and boundary projection was needed.
## 7. Applications
In this section, we employ the proposed regression model to grasp the relationship between daily weather in spring (March, April, and May) and that in summer (June, July, and August) in Calgary, Alberta. We obtain data from [https://calgary.weatherstats.ca](https://calgary.weatherstats.ca). This dataset contains the temperature and humidity for each day in Calgary from 1953 to 2021. We consider the joint distribution of the daily average temperature and the daily average relative humidity. We regard each pair of daily values as one observation from a two-dimensional Gaussian distribution. As examples, Figure 5 illustrates the observations and estimated Gaussian densities for spring and summer in each year from 1953 to 1956.
Figure 4. Boxplots of the out-of-sample AWDs defined as (28) for the three scenarios with the degree of the freedom \(\ell\in\{5,10,15\}\). ”proposed” denotes the proposed method and ”alternative” denotes the alternative method. The number in brackets ”[ ]” below the boxplots for the proposed indicates how many runs event (29) happened and boundary projection was needed.
We applied the proposed (13) and alternative (27) regression models with the distributions for spring as the predictor and those for summer as the response. The models are trained on data up to 1988, and predictions are computed for the remaining period, where we predict the distribution for summer from that for spring in each year.
Table 1 shows the fitting and prediction results of the proposed method for training and prediction periods. Additionally, Table 2 shows the result of the alternative method. In these tables, we report the summary of the Wasserstein discrepancies between observed and fitted distributions in training periods, and those between observed and predicted distributions in prediction periods. We also show the prediction results of both methods from 2017 to 2019 in Figure 6. We find that fitting and prediction by the proposed model are generally better than those by the alternative model. This result can be explained by the fact that the proposed model takes into consideration the geometry of the Wasserstein space while the alternative model does not.
Figure 5. Observed data and estimated Gaussian joint densities of the average temperatures and average relative humidity in spring (top row) and summer (bottom row) from 1953 to 1956. Black points are observed data and solid lines are contour lines of estimated densities.

## 8. Conclusion

In this paper, we propose distribution-on-distribution regression models for multivariate Gaussians with the Wasserstein metric. In the proposed regression models, Gaussian
distributions are transformed into elements in linear matrix spaces by the proposed nearly isometric maps, and the regression problem comes down to matrix-on-matrix linear regression. This has the advantage that distribution-on-distribution regression is reduced to a linear regression while respecting the Wasserstein geometry of the distributions. Also, owing to the linear regression structure, we can easily implement and interpret the models. We incorporate a low-rank structure in the parameter tensor to address high-dimensional Gaussian distributions and also discuss the generalization of our models to the class of elliptically symmetric distributions. In the simulation studies, we find that our models perform better than an alternative approach that transforms Gaussian distributions to matrices without considering the Wasserstein metric.
**Appendix**
## Appendix A Proofs
Proof of Proposition 1.: Firstly, we set \(a=m-S(\Sigma_{*},\Sigma)m_{*}\) and \(V=S(\Sigma_{*},\Sigma)-I\). Then, we have \(a+Vm_{*}=m-m_{*}\) and
\[V\Sigma_{*}V=\Sigma+\Sigma_{*}-\Sigma_{*}^{1/2}[\Sigma_{*}^{1/2}\Sigma\Sigma_{* }^{1/2}]^{1/2}\Sigma_{*}^{-1/2}-\Sigma_{*}^{-1/2}[\Sigma_{*}^{1/2}\Sigma\Sigma_{* }^{1/2}]^{1/2}\Sigma_{*}^{1/2}.\]
Therefore, \(\|\varphi_{\mu_{*}}\mu\|_{(m_{*},\Sigma_{*})}^{2}\) is expressed as
\[\|\varphi_{\mu_{*}}\mu\|_{(m_{*},\Sigma_{*})}^{2} =\|a+Vm_{*}\|^{2}+\operatorname{tr}(V\Sigma_{*}V)\] \[=\|m-m_{*}\|^{2}+\operatorname{tr}(\Sigma)+\operatorname{tr}( \Sigma_{*})-\operatorname{tr}(\Sigma_{*}^{1/2}[\Sigma_{*}^{1/2}\Sigma\Sigma_{* }^{1/2}]^{1/2}\Sigma_{*}^{-1/2})\] \[\quad-\operatorname{tr}\bigl{(}\Sigma_{*}^{-1/2}[\Sigma_{*}^{1/2 }\Sigma\Sigma_{*}^{1/2}]^{1/2}\Sigma_{*}^{1/2}\bigr{)}\] \[=\|m-m_{*}\|^{2}+\operatorname{tr}(\Sigma)+\operatorname{tr}( \Sigma_{*})-2\operatorname{tr}\bigl{(}[\Sigma_{*}^{1/2}\Sigma\Sigma_{*}^{1/2 }]^{1/2}\bigr{)}\] \[=d_{W}^{2}(\mu,\mu_{*}).\]
Table 1. Summary of the Wasserstein discrepancies for the proposed method in training and prediction periods.

|            | Min    | \(Q_{0.25}\) | Median | \(Q_{0.75}\) | Max     |
| ---------- | ------ | ------------ | ------ | ------------ | ------- |
| Training   | 0.5725 | 1.7709       | 3.0337 | 4.5545       | 6.4389  |
| Prediction | 1.708  | 2.748        | 3.991  | 5.606        | 12.401  |

Table 2. Summary of the Wasserstein discrepancies for the alternative method in training and prediction periods.

|            | Min    | \(Q_{0.25}\) | Median | \(Q_{0.75}\) | Max     |
| ---------- | ------ | ------------ | ------ | ------------ | ------- |
| Training   | 0.3086 | 2.3041       | 3.2879 | 4.7202       | 6.8268  |
| Prediction | 1.317  | 3.610        | 5.409  | 7.306        | 10.513  |
Next, let \(U\) be a \(d\times d\) orthogonal matrix and suppose \(\mu_{*}=N(m_{*},\Sigma_{*}),\mu_{1}=N(m_{1},\Sigma_{1})\) and \(\mu_{2}=N(m_{2},\Sigma_{2})\) are Gaussian measures in \(\mathscr{C}_{U}\). Because \(\Sigma_{1}^{1/2}\Sigma_{2}^{1/2}=\Sigma_{2}^{1/2}\Sigma_{1}^{1/2}\) holds in this setting, the Wasserstein distance between \(\mu_{1}\) and \(\mu_{2}\) is expressed as
\[d_{W}^{2}(\mu_{1},\mu_{2})=\|m_{1}-m_{2}\|^{2}+\operatorname{tr}((\Sigma_{1}^{ 1/2}-\Sigma_{2}^{1/2})^{2}). \tag{30}\]
Figure 6. Observed and predicted (middle and bottom rows) densities of the average temperatures and average relative humidity in spring (top row) and summer (middle and bottom rows) from 2017 to 2020. Solid lines are contour lines of observed densities, and dashed lines (middle and bottom rows) are contour lines of predicted densities. Predictions in the middle row are by the proposed method, while those in the bottom row are by the alternative method. In the middle and bottom rows, the Wasserstein discrepancies (WDs) between observed and predicted densities are also listed.
On the other hand, because \(\Sigma_{*}^{1/2}\Sigma_{1}^{1/2}=\Sigma_{1}^{1/2}\Sigma_{*}^{1/2}\) and \(\Sigma_{*}^{1/2}\Sigma_{2}^{1/2}=\Sigma_{2}^{1/2}\Sigma_{*}^{1/2}\) also hold in this setting, we have
\[\varphi_{\mu_{*}}\mu_{1}=(m_{1}-\Sigma_{1}^{1/2}\Sigma_{*}^{-1/2} m_{*},\Sigma_{1}^{1/2}\Sigma_{*}^{-1/2}-I),\] \[\varphi_{\mu_{*}}\mu_{2}=(m_{2}-\Sigma_{2}^{1/2}\Sigma_{*}^{-1/2} m_{*},\Sigma_{2}^{1/2}\Sigma_{*}^{-1/2}-I).\]
This implies
\[\varphi_{\mu_{*}}\mu_{1}-\varphi_{\mu_{*}}\mu_{2}=(m_{1}-m_{2}-( \Sigma_{1}^{1/2}-\Sigma_{2}^{1/2})\Sigma_{*}^{-1/2}m_{*},(\Sigma_{1}^{1/2}- \Sigma_{2}^{1/2})\Sigma_{*}^{-1/2}),\]
and we have
\[\|\varphi_{\mu_{*}}\mu_{1}-\varphi_{\mu_{*}}\mu_{2}\|_{(m_{*}, \Sigma_{*})}^{2}=\|m_{1}-m_{2}\|^{2}+\operatorname{tr}\bigl{(}(\Sigma_{1}^{1/ 2}-\Sigma_{2}^{1/2})^{2}\bigr{)}. \tag{31}\]
From (30) and (31), we obtain \(d_{W}(\mu_{1},\mu_{2})=\|\varphi_{\mu_{*}}\mu_{1}-\varphi_{\mu_{*}}\mu_{2}\|_ {(m_{*},\Sigma_{*})}\).
To prove Theorems 1 and 2, we employ the following general result regarding the in-sample prediction error of least squares regression, which is shown by [15]. For Gaussian random variables in Hilbert spaces, we refer to Section A.2 in [15].
**Theorem 3** ([15], Section 4.1).: _Let \(x_{1},...,x_{n}\) be fixed covariates taking values in a set \(\mathcal{X}\), and let \(Y_{1},...,Y_{n}\) be random variables taking values in a separable Hilbert space \((\mathcal{Y},\|\cdot\|_{\mathcal{Y}})\) satisfying \(Y_{i}=g_{0}(x_{i})+\varepsilon_{i},i=1,...,n.\) Here, \(\varepsilon_{i}\) are independent Gaussian noise terms with zero mean and covariance trace \(1\), and \(g_{0}:\mathcal{X}\rightarrow\mathcal{Y}\) is an unknown function in a class \(\mathcal{G}\). Define the empirical norm \(\|g\|_{n}=\sqrt{n^{-1}\sum_{i=1}^{n}\|g(x_{i})\|_{\mathcal{Y}}^{2}}\) for \(g\in\mathcal{G}\), and define \(J(\delta)=\int_{0}^{\delta}\sqrt{\log N_{n}(t,\mathcal{B}_{n}(\delta;\mathcal{G}),\|\cdot\|_{n})}dt\) for \(\delta>0\), where \(N_{n}(t,\mathcal{B}_{n}(\delta;\mathcal{G}),\|\cdot\|_{n})\) is the \(t\)-covering number of the ball \(\mathcal{B}_{n}(\delta;\mathcal{G})=\{g\in\mathcal{G}:\|g\|_{n}\leq\delta\}\). Then, if there exist a real sequence \(\{\delta_{n}\}\) and a constant \(C>0\) such that \(J(\delta_{n})\leq C\sqrt{n}\delta_{n}^{2}\), the least squares estimator \(\widehat{g}_{n}=\arg\min_{g\in\mathcal{G}}n^{-1}\sum_{i=1}^{n}\|Y_{i}-g(x_{i})\|_{\mathcal{Y}}^{2}\) satisfies \(\|\widehat{g}_{n}-g_{0}\|_{n}=O_{P}(\delta_{n})\)._
Using this result, we prove Theorems 1 and 2. Throughout the proofs, we denote \(a\lesssim b\) when there exists a constant \(C>0\) not depending on \(n,d_{1},d_{2},K\) such that \(a\leq Cb\).
Proof of Theorem 1.: Firstly we bound the in-sample prediction error regarding the map \(\Gamma_{\mathbb{B}_{0}}\), which is defined by
\[\|\Gamma_{\mathbb{B}}-\Gamma_{\mathbb{B}_{0}}\|_{n}=\sqrt{n^{-1}\sum_{i=1}^{n}\|\Gamma_{\mathbb{B}}(X_{i})-\Gamma_{\mathbb{B}_{0}}(X_{i})\|_{(m_{2\oplus},\Sigma_{2\oplus})}^{2}}.\]
Our strategy is to bound the metric entropy of the function space \(\mathscr{F}=\{\Gamma_{\mathbb{B}}:\mathbb{B}\in\mathcal{B}\}\) and employ Theorem 3. We define the \(\delta\)-ball of space \(\mathscr{F}\) as \(\mathcal{B}_{n}(\delta;\mathscr{F})=\{\Gamma_{\mathbb{B}}\in\mathscr{F}:\| \Gamma_{\mathbb{B}}\|_{n}\leq\delta\}\) and denote its \(t\)-covering number as \(N_{n}(t,\mathcal{B}_{n}(\delta;\mathscr{F}),\|\cdot\|_{n})\). By defining \(\|\mathbb{B}\|^{\prime}=\|\Gamma_{\mathbb{B}}\|_{n}\) for \(\mathbb{B}\in\mathcal{B}\), the set \(\mathcal{B}_{n}(\delta;\mathscr{F})\) is isometric to the \(\delta\)-ball within the space \((\mathcal{B},\|\cdot\|^{\prime})\). Since the space \((\mathcal{B},\|\cdot\|^{\prime})\) has dimension \(d_{1}(d_{1}+1)d_{2}(d_{2}+3)/2\), by a volume ratio argument (Example 5.8 in [24]), we
have
\[\log N_{n}(t,\mathcal{B}_{n}(\delta;\mathscr{F}),\|\cdot\|_{n})\lesssim d_{1}^{2}d _{2}^{2}\log\left(1+\frac{2\delta}{t}\right).\]
Using this upper bound, we have
\[\int_{0}^{\delta}\sqrt{\log N_{n}(t,\mathcal{B}_{n}(\delta;\mathscr{ F}),\|\cdot\|_{n})}dt \lesssim d_{1}d_{2}\int_{0}^{\delta}\sqrt{\log\left(1+\frac{2 \delta}{t}\right)}dt\] \[=\delta d_{1}d_{2}\int_{0}^{1}\sqrt{\log\left(1+\frac{2}{u} \right)}du\quad(u=t/\delta)\] \[\lesssim\delta d_{1}d_{2}.\]
This implies we can apply Theorem 3 with \(\delta_{n}=d_{1}d_{2}/\sqrt{n}\) and obtain \(\|\Gamma_{\widetilde{\mathbb{B}}}-\Gamma_{\mathbb{B}_{0}}\|_{n}=O_{P}(d_{1}d_{2}/\sqrt{n})\).
Next, we bound the in-sample prediction error \(\mathcal{R}_{n}(\Gamma_{\mathcal{G},\widetilde{\mathbb{B}}},\Gamma_{\mathcal{G},\mathbb{B}_{0}})\). Because the Wasserstein space has nonnegative sectional curvature at any reference measure (e.g., Section 2.3.2 in [14]), the Gaussian space, which is the restriction of the Wasserstein space to Gaussian measures, also has this property. In other words, the inequality
\[d_{W}(\mu_{1},\mu_{2})\leq\|\varphi_{\nu_{2\oplus}}\mu_{1}-\varphi_{\nu_{2\oplus}}\mu_{2}\|_{(m_{2\oplus},\Sigma_{2\oplus})}\]
holds for any \(\mu_{1},\mu_{2}\in\mathcal{G}(\mathbb{R}^{d_{2}})\). This implies \(\mathcal{R}_{n}(\Gamma_{\mathcal{G},\widetilde{\mathbb{B}}},\Gamma_{\mathcal{G},\mathbb{B}_{0}})\leq\|\Gamma_{\widetilde{\mathbb{B}}}-\Gamma_{\mathbb{B}_{0}}\|_{n}\) holds, and combining this fact with \(\|\Gamma_{\widetilde{\mathbb{B}}}-\Gamma_{\mathbb{B}_{0}}\|_{n}=O_{P}(d_{1}d_{2}/\sqrt{n})\), we have \(\mathcal{R}_{n}(\Gamma_{\mathcal{G},\widetilde{\mathbb{B}}},\Gamma_{\mathcal{G},\mathbb{B}_{0}})=O_{P}(d_{1}d_{2}/\sqrt{n})\).
Proof of Theorem 2.: As with the proof of Theorem 1, we firstly bound the in-sample prediction error regarding the map \(\Gamma_{\mathbb{B}_{0}}\). We define the function space as \(\mathscr{F}_{\text{low}}=\{\Gamma_{\mathbb{B}}:\mathbb{B}\in\mathcal{B}_{\text{low}}\}\), define its \(\delta\)-ball as \(\mathcal{B}_{n}(\delta;\mathscr{F}_{\text{low}})=\{\Gamma_{\mathbb{B}}\in\mathscr{F}_{\text{low}}:\|\Gamma_{\mathbb{B}}\|_{n}\leq\delta\}\), and denote its \(t\)-covering number as \(N_{n}(t,\mathcal{B}_{n}(\delta;\mathscr{F}_{\text{low}}),\|\cdot\|_{n})\). By defining \(\|\mathbb{B}\|^{\prime\prime}=\|\Gamma_{\mathbb{B}}\|_{n}\) for \(\mathbb{B}\in\mathcal{B}_{\text{low}}\), the set \(\mathcal{B}_{n}(\delta;\mathscr{F}_{\text{low}})\) is isometric to the \(\delta\)-ball within the space \((\mathcal{B}_{\text{low}},\|\cdot\|^{\prime\prime})\). Recall that if a tensor \(\mathbb{B}=\llbracket A_{1},A_{2},A_{3},A_{4}\rrbracket\) is in \(\mathcal{B}_{\text{low}}\), the matrices \(A_{3}\) and \(A_{4}\) have the forms (20). Based on this fact, denoting \(\alpha=(\alpha_{1},...,\alpha_{K}),\beta=(\beta_{1},...,\beta_{K})\), and \(\gamma=(\gamma_{1},...,\gamma_{K})\), we consider a correspondence from \(\mathbb{R}^{2Kd_{1}+4K}\) to \(\mathcal{B}_{\text{low}}\) such that
\[(\operatorname{vec}(A_{1}),\operatorname{vec}(A_{2}),\alpha,\beta,\gamma) \mapsto\llbracket A_{1},A_{2},A_{3},A_{4}\rrbracket,\]
and define
\[\|(\operatorname{vec}(A_{1}),\operatorname{vec}(A_{2}),\alpha,\beta,\gamma) \|^{\prime\prime\prime}=\|\llbracket A_{1},A_{2},A_{3},A_{4}\rrbracket \|^{\prime\prime}.\]
Since the \(\delta\)-ball within the space \((\mathcal{B}_{\text{low}},\|\cdot\|^{\prime\prime})\) is isometric to the \(\delta\)-ball within \((\mathbb{R}^{2Kd_{1}+4K},\|\cdot\|^{\prime\prime\prime})\), we eventually have that the set \(\mathcal{B}_{n}(\delta;\mathscr{F}_{\text{low}})\) is isometric to the \(\delta\)-ball within the space \((\mathbb{R}^{2Kd_{1}+4K},\|\cdot\|^{\prime\prime\prime})\). Therefore, by a volume ratio argument, we have
\[\log N_{n}(t,\mathcal{B}_{n}(\delta;\mathscr{F}_{\text{low}}),\|\cdot\|_{n}) \lesssim Kd_{1}\log\left(1+\frac{2\delta}{t}\right).\]
Using this upper bound, as with the proof of Theorem 1, we have
\[\int_{0}^{\delta}\sqrt{\log N_{n}(t,\mathcal{B}_{n}(\delta;\mathscr{F}_{\text{low}}),\|\cdot\|_{n})}dt\lesssim\delta\sqrt{Kd_{1}}.\]
This implies we can apply Theorem 3 with \(\delta_{n}=\sqrt{Kd_{1}}/\sqrt{n}\) and obtain \(\|\Gamma_{\widetilde{\mathbb{B}}}-\Gamma_{\mathbb{B}_{0}}\|_{n}=O_{P}(\sqrt{Kd_{1}}/\sqrt{n})\).
As with the proof of Theorem 1, the nonnegativity of the sectional curvature of the Wasserstein space implies \(\mathcal{R}_{n}(\Gamma_{\mathcal{G},\widetilde{\mathbb{B}}},\Gamma_{\mathcal{G},\mathbb{B}_{0}})\leq\|\Gamma_{\widetilde{\mathbb{B}}}-\Gamma_{\mathbb{B}_{0}}\|_{n}\). Combining this fact with \(\|\Gamma_{\widetilde{\mathbb{B}}}-\Gamma_{\mathbb{B}_{0}}\|_{n}=O_{P}(\sqrt{Kd_{1}}/\sqrt{n})\), we obtain \(\mathcal{R}_{n}(\Gamma_{\mathcal{G},\widetilde{\mathbb{B}}},\Gamma_{\mathcal{G},\mathbb{B}_{0}})=O_{P}(\sqrt{Kd_{1}}/\sqrt{n})\).
## Appendix B Parameter Identification
In this section, we deal with the identification of regression parameter \(\mathbb{B}\) in our proposed models. Although the parameter \(\mathbb{B}\) does not need to be identified in the empirical risk minimization problems in the main article, it must be identified when we consider estimation or inference for the regression parameter.
### Basic Model
Recall that assuming the linear regression model (13) is equivalent to assuming the model (14) for each \(1\leq r\leq d_{2}\) and \(1\leq s\leq d_{2}+1\). Let us fix indices \(1\leq r\leq d_{2}\) and \(1\leq s\leq d_{2}+1\) and consider the identification of the parameter \(\mathbb{B}[\cdot,\cdot,r,s]\in\mathbb{R}^{d_{1}\times(d_{1}+1)}\) in (14). In order to deal with the identifiability issue coming from the symmetry in the matrix \(X\in\Xi_{d_{1}}\), we impose the following condition on the parameter \(\mathbb{B}[\cdot,\cdot,r,s]\):
\[\mathbb{B}[p,q,r,s]=0,\quad\text{for}\ \,1\leq p\leq d_{1},p+2\leq q\leq d_{1}+1. \tag{32}\]
In other words, the matrix \(\mathbb{B}[\cdot,\cdot,r,s]\) has a lower triangular form
\[\begin{pmatrix}*&*&&&O\\ *&*&*&&\\ \vdots&&\ddots&\ddots&\\ *&*&\cdots&*&*\end{pmatrix},\]
where \(*\) is some real number. If two matrices \(\mathbb{B}[\cdot,\cdot,r,s]\) and \(\mathbb{B}^{\prime}[\cdot,\cdot,r,s]\) satisfy the condition (32), we have
\[\langle X,\mathbb{B}[\cdot,\cdot,r,s]\rangle=\langle X,\mathbb{B}^{\prime}[ \cdot,\cdot,r,s]\rangle\,\text{ for any }X\in\Xi_{d_{1}}\implies\mathbb{B}[\cdot,\cdot,r,s]=\mathbb{B}^{\prime}[ \cdot,\cdot,r,s],\]
which guarantees the identifiability of the parameter \(\mathbb{B}[\cdot,\cdot,r,s]\).
In summary, by adding condition (32) to the existing parameter space, we define the following modified parameter space for the basic model:
\[\mathcal{B}^{*}=\{\mathbb{B}\in\mathcal{B}:\text{the condition (32) holds for each }1\leq r\leq d_{2}\text{ and }1\leq s\leq d_{2}+1\}. \tag{33}\]
Then, the parameter \(\mathbb{B}\) is uniquely identified in \(\mathcal{B}^{*}\).
### Low-Rank Model
Next, we consider the identification of regression parameters in the low-rank model. Let \(\mathbb{B}\) admit the rank-\(K\) decomposition \(\mathbb{B}=\llbracket A_{1},A_{2},A_{3},A_{4}\rrbracket\). Following an identification strategy used in [27] for tensor regression models, we adopt the following specific constrained parametrization to fix the scaling and permutation indeterminacy of the tensor decomposition.
* To fix the scaling indeterminacy, we assume \(A_{1},A_{2},A_{3}\) are scaled such that \[a_{1}^{(k)}\llbracket 1\rrbracket=a_{2}^{(k)}\llbracket 1\rrbracket=a_{3}^{(k)} \llbracket 1\rrbracket=1,\quad 1\leq k\leq K\] (34) In other words, the first rows of \(A_{1},A_{2},A_{3}\) are ones. Since \(A_{3}\) is assumed to have the form in (20), this implies that all elements of \(A_{3}\) are ones. This scaling of \(A_{1},A_{2},A_{3}\) determines the first row of \(A_{4}\) and fixes scaling indeterminacy (Section 4.2 in [27]).
* To fix the permutation indeterminacy, we assume that the first row elements of \(A_{4}\) are distinct and arranged in the descending order \[a_{4}^{(1)}\llbracket 1\rrbracket>a_{4}^{(2)}\llbracket 1\rrbracket>\cdots>a_{4}^{(K)}\llbracket 1\rrbracket.\] (35) This fixes the permutation indeterminacy (Section 4.2 in [27]).
Adding these constraints to the existing parameter space, we define the modified parameter space for the rank-\(K\) model as
\[\mathcal{B}_{\mathrm{low}}^{*}=\{\mathbb{B}=\llbracket A_{1},A_{2},A_{3},A_{4}\rrbracket\in\mathcal{B}_{\mathrm{low}}:A_{1},A_{2},A_{3},A_{4}\;\;\text{satisfy the conditions (34) and (35)}\}.\]
If the tensor \(\mathbb{B}=\llbracket A_{1},A_{2},A_{3},A_{4}\rrbracket\in\mathcal{B}_{\mathrm{ low}}^{*}\) satisfies the condition
\[\mathrm{rank}A_{1}+\mathrm{rank}A_{2}+\mathrm{rank}A_{3}+\mathrm{rank}A_{4} \geq 2K+3,\]
then Proposition 3 in [27] implies that \(\mathbb{B}\) is uniquely identified in \(\mathcal{B}_{\mathrm{low}}^{*}\).
## Appendix C Consistency and Asymptotic Normality of Estimators
In this section, we study the asymptotic properties of estimators for the regression parameter in the basic model. Let \(\{(\nu_{1i},\nu_{2i})\}_{i=1}^{n}\) be independent realizations of the pair of Gaussian distributions \((\nu_{1},\nu_{2})\) from the basic model. For simplicity, we assume the true Frechet means \(\nu_{1\oplus},\nu_{2\oplus}\) are known and the distributions \(\{(\nu_{1i},\nu_{2i})\}_{i=1}^{n}\) are fully observed.
We set \(X_{i}=\varphi_{\nu_{1\oplus}}\nu_{1i},Y_{i}=\varphi_{\nu_{2\oplus}}\nu_{2i}\) and define an estimator as \(\widetilde{\mathbb{B}}_{n}=\operatorname*{arg\,min}_{\mathbb{B}\in\mathcal{B}^{*}}\sum_{i=1}^{n}\|Y_{i}-\langle X_{i},\mathbb{B}\rangle_{2}\|_{(m_{2\oplus},\Sigma_{2\oplus})}^{2}\). Here, \(\mathcal{B}^{*}\) is the modified parameter space defined by (33).
In order to state our results, we introduce a half-vectorization of tensor \(\mathbb{B}\) in \(\mathcal{B}^{*}\). For a matrix \(A\in\mathbb{R}^{d\times(d+1)}\), we define its vectorization \(\operatorname{vech}^{*}(A)\in\mathbb{R}^{d(d+3)/2}\) as
\[\operatorname{vech}^{*}(A)= (A[1,1],A[2,1],\cdots,A[d,1],A[1,2],A[2,2],\cdots,A[d,2],\] \[A[2,3],\cdots,A[d,3],A[3,4],\cdots,A[d,4],\cdots,A[d-1,d],A[d,d],A[d,d+1]).\]
Furthermore, for a tensor \(\mathbb{B}\in\mathcal{B}^{*}\), we define its vectorization \(\operatorname{vec}^{*}(\mathbb{B})\in\mathbb{R}^{d_{1}(d_{1}+1)d_{2}(d_{2}+1)/4}\) as
\[\operatorname{vec}^{*}(\mathbb{B})=((\operatorname{vech}^{*}(\mathbb{B}[\cdot,\cdot,r,s])^{\top})_{1\leq r\leq d_{2},r+2\leq s\leq d_{2}+1})^{\top}.\]
Note that the \(\operatorname{vec}^{*}(\cdot)\) operator is a one-to-one correspondence between \(\mathcal{B}^{*}\) and \(\mathbb{R}^{d_{1}(d_{1}+1)d_{2}(d_{2}+1)/4}\). Therefore, for any \(\theta\in\mathbb{R}^{d_{1}(d_{1}+1)d_{2}(d_{2}+1)/4}\), there uniquely exists a tensor \(\mathbb{B}\in\mathcal{B}^{*}\) such that \(\operatorname{vec}^{*}(\mathbb{B})=\theta\). We denote this tensor \(\mathbb{B}\) as \(\mathbb{B}(\theta)\).
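To make the ordering used by \(\operatorname{vech}^{*}\) concrete, the following is a small illustrative sketch (not part of the original text); it assumes the matrix is stored as a plain `double[d][d+1]` array and simply walks the columns in the order given by the definition above.

```
// Illustrative sketch of the vech* ordering defined above (our own code, not
// from the source): A is a d x (d+1) matrix stored as double[d][d+1], 0-based.
final class VechStarSketch {
    static double[] vechStar(double[][] A) {
        int d = A.length;
        double[] out = new double[d * (d + 3) / 2];
        int idx = 0;
        for (int col = 0; col < d + 1; col++) {
            // Columns 1 and 2 are kept in full; column j >= 3 keeps rows j-1,...,d
            // (1-based indices in the text, translated to 0-based here).
            int firstRow = (col <= 1) ? 0 : col - 1;
            for (int row = firstRow; row < d; row++) {
                out[idx++] = A[row][col];
            }
        }
        return out; // length d(d+3)/2, matching vech*(A)
    }
}
```

Stacking these vectors over the admissible slices \((r,s)\) then gives \(\operatorname{vec}^{*}(\mathbb{B})\).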
Under this vectorization, we denote \(\widetilde{\theta}_{n}=\operatorname{vec}^{*}(\widetilde{\mathbb{B}}_{n})\) and \(\theta_{0}=\operatorname{vec}^{*}(\mathbb{B}_{0})\), and analyze the asymptotic property of the estimator \(\widetilde{\theta}_{n}\) with the standard theory for M-estimation. For vector \(\theta\in\mathbb{R}^{d_{1}(d_{1}+1)d_{2}(d_{2}+1)/4}\) and matrices \(X\in\Xi_{d_{1}},Y\in\Xi_{d_{2}}\), we define
\[m_{\theta}(\operatorname{vech}^{*}(X),\operatorname{vech}^{*}(Y))=\|\operatorname{vech}^{*}(Y)-\operatorname{vech}^{*}(\langle X,\mathbb{B}(\theta)\rangle_{2})\|_{(m_{2\oplus},\Sigma_{2\oplus})}^{2}.\]
Here, for a vector \(z\in\mathbb{R}^{d_{2}(d_{2}+3)/2}\) represented as \(z=\operatorname{vech}^{*}(A)\) with a matrix \(A\in\mathbb{R}^{d_{2}\times(d_{2}+1)}\), we define its norm as \(\|z\|_{(m_{2\oplus},\Sigma_{2\oplus})}=\|A\|_{(m_{2\oplus},\Sigma_{2\oplus})}\). Then, the estimator \(\widetilde{\theta}_{n}\) is characterized as the minimizer of the criterion function \(\theta\mapsto n^{-1}\sum_{i=1}^{n}m_{\theta}(\operatorname{vech}^{*}(X_{i}),\operatorname{vech}^{*}(Y_{i}))\). Note that the vector \(\operatorname{vech}^{*}(\langle X,\mathbb{B}(\theta)\rangle_{2})\in\mathbb{R}^{d_{2}(d_{2}+3)/2}\) has the form
\[\operatorname{vech}^{*}(\langle X,\mathbb{B}(\theta)\rangle_{2})=(\langle \operatorname{vech}^{*}(X),\operatorname{vech}^{*}(\mathbb{B}(\theta)[\cdot,\cdot,r,s])\rangle)_{1\leq r\leq d_{2},r+2\leq s\leq d_{2}+1},\]
which implies \(\widetilde{\theta}_{n}\) is the least-square estimator in the linear regression model between vectors \(\operatorname{vech}^{*}(X)\) and \(\operatorname{vech}^{*}(Y)\).
Then, we obtain the following results. We denote the partial derivative of the function \(m_{\theta}\) in terms of \(\theta\) as \(\nabla_{\theta}m_{\theta}\).
**Theorem 4** (Consistency of Estimator).: _Assume \(\theta_{0}\) is in a compact parameter space \(\Theta_{0}\subset\mathbb{R}^{d_{1}(d_{1}+1)d_{2}(d_{2}+1)/4}\) and the pair of vectors \((\operatorname{vech}^{*}(X_{i}),\operatorname{vech}^{*}(Y_{i}))\) is supported on a bounded set. Then, \(\widetilde{\theta}_{n}\) is a consistent estimator for \(\theta_{0}\)._
Proof.: We show that the set of functions \(\{m_{\theta}:\theta\in\Theta_{0}\}\) is a Glivenko-Cantelli class (Section 19 in [22]). If this holds, the consistency of the estimator \(\widetilde{\theta}_{n}\) follows from Theorem 5.7 in [22]. Note that for a vector \(z=(z_{1},...,z_{d_{2}(d_{2}+3)/2})\in\mathbb{R}^{d_{2}(d_{2}+3)/2}\), the norm \(\|z\|_{(m_{2\oplus},\Sigma_{2\oplus})}\) has the form
\[\|z\|_{(m_{2\oplus},\Sigma_{2\oplus})}^{2}=\sum_{1\leq i\leq j\leq d_{2}(d_{2}+3)/2}c_{ij}z_{i}z_{j}, \tag{36}\]
where \(c_{ij}\) are constants determined by the values of \(m_{2\oplus}\) and \(\Sigma_{2\oplus}\). This implies that the map \(\theta\mapsto m_{\theta}(\operatorname{vech}^{*}(X),\operatorname{vech}^{*}(Y))\) is continuous for each fixed \(\operatorname{vech}^{*}(X)\) and \(\operatorname{vech}^{*}(Y)\). Moreover, because the parameter \(\theta\) and vectors \(\operatorname{vech}^{*}(X)\) and \(\operatorname{vech}^{*}(Y)\) are in bounded regions, the map \(m_{\theta}\) is also uniformly bounded. That is, there exists a constant \(C>0\) such that \(m_{\theta}(\operatorname{vech}^{*}(X),\operatorname{vech}^{*}(Y))\leq C\) for all \(\theta\in\Theta_{0},\operatorname{vech}^{*}(X),\operatorname{vech}^{*}(Y)\). This implies the set of functions \(\{m_{\theta}:\theta\in\Theta_{0}\}\) is dominated by the integrable constant function \(C\). Combining these facts with the assumption of compactness of \(\Theta_{0}\), Example 19.8 in [22] implies that \(\{m_{\theta}:\theta\in\Theta_{0}\}\) is a Glivenko-Cantelli class.
**Theorem 5** (Asymptotic Normality of Estimator).: _In addition to the assumptions in Theorem 4, suppose \(\theta_{0}\) is an interior point of \(\Theta_{0}\) and the map \(\theta\mapsto\mathbb{E}[m_{\theta}(\operatorname{vech}^{*}(X_{i}), \operatorname{vech}^{*}(Y_{i}))]\) has nonsingular Hessian matrix \(V_{\theta_{0}}\) at \(\theta_{0}\). Then, \(\sqrt{n}(\widetilde{\theta}_{n}-\theta_{0})\) converges in distribution to a normal distribution with mean zero and covariance matrix_
\[V_{\theta_{0}}^{-1}\mathbb{E}[\nabla_{\theta}m_{\theta_{0}}( \operatorname{vech}^{*}(X_{i}),\operatorname{vech}^{*}(Y_{i}))\nabla_{\theta}m _{\theta_{0}}(\operatorname{vech}^{*}(X_{i}),\operatorname{vech}^{*}(Y_{i}))^{ \top}]V_{\theta_{0}}^{-1}.\]
**Remark 2**.: _When the norm \(\|\cdot\|_{(m_{2\oplus},\Sigma_{2\oplus})}\) is equal to the Frobenius norm, that is, \(m_{2\oplus}=0\) and \(\Sigma_{2\oplus}=I\), the second-derivative matrix \(V_{\theta_{0}}\) has the form_
\[V_{\theta_{0}}=\begin{pmatrix}\mathbb{E}[\operatorname{vech}^{*}(X_{i}) \operatorname{vech}^{*}(X_{i})^{\top}]&&O\\ &\ddots&\\ O&&\mathbb{E}[\operatorname{vech}^{*}(X_{i})\operatorname{vech}^{*}(X_{i})^{ \top}]\end{pmatrix}.\]
_Therefore, \(V_{\theta_{0}}\) is nonsingular if and only if the matrix \(\mathbb{E}[\operatorname{vech}^{*}(X_{i})\operatorname{vech}^{*}(X_{i})^{ \top}]\) is nonsingular._
Proof.: We check the conditions of Theorem 5.23 in [22], which is a standard result for the asymptotic normality of the M-estimator. Noting that the norm \(\|z\|_{(m_{2\oplus},\Sigma_{2\oplus})}\) has the form (36) for a vector \(z=(z_{1},...,z_{d_{2}(d_{2}+3)/2})\in\mathbb{R}^{d_{2}(d_{2}+3)/2}\), the function \(\theta\mapsto m_{\theta}(\operatorname{vech}^{*}(X),\operatorname{vech}^{*}(Y))\) is differentiable on the interior of \(\Theta_{0}\) for each fixed \(\operatorname{vech}^{*}(X)\) and \(\operatorname{vech}^{*}(Y)\). Moreover, because the parameter \(\theta\) and vectors \(\operatorname{vech}^{*}(X)\) and \(\operatorname{vech}^{*}(Y)\) are in bounded regions, the partial derivative \(\nabla_{\theta}m_{\theta}\) is also bounded. That is, there exists a constant \(M>0\) such that \(\|\nabla_{\theta}m_{\theta}(\operatorname{vech}^{*}(X),\operatorname{vech}^{*}(Y))\|\leq M\) for all \(\theta\in\Theta_{0},\operatorname{vech}^{*}(X)\) and \(\operatorname{vech}^{*}(Y)\). Combining this fact with the multi-dimensional mean value theorem, for every \(\theta_{1}\) and \(\theta_{2}\) in a neighborhood of \(\theta_{0}\), we have
\[|m_{\theta_{1}}(\operatorname{vech}^{*}(X),\operatorname{vech}^{*}(Y))-m_{ \theta_{2}}(\operatorname{vech}^{*}(X),\operatorname{vech}^{*}(Y))|\leq M\| \theta_{1}-\theta_{2}\|.\]
Finally, the map \(\theta\mapsto\mathbb{E}[m_{\theta}(\operatorname{vech}^{*}(X_{i}), \operatorname{vech}^{*}(Y_{i}))]\) is assumed to have nonsingular Hessian matrix \(V_{\theta_{0}}\) at \(\theta_{0}\). Then, the conditions of Theorem 5.23 in [22] are fulfilled, and we have the conclusion from the theorem.
|
2310.03480 | The ICASSP SP Cadenza Challenge: Music Demixing/Remixing for Hearing
Aids | This paper reports on the design and results of the 2024 ICASSP SP Cadenza
Challenge: Music Demixing/Remixing for Hearing Aids. The Cadenza project is
working to enhance the audio quality of music for those with a hearing loss.
The scenario for the challenge was listening to stereo reproduction over
loudspeakers via hearing aids. The task was to: decompose pop/rock music into
vocal, drums, bass and other (VDBO); rebalance the different tracks with
specified gains and then remixing back to stereo. End-to-end approaches were
also accepted. 17 systems were submitted by 11 teams. Causal systems performed
worse than non-causal approaches. 9 systems beat the baseline. A common
approach was to fine-tune pretrained demixing models. The best approach used
an ensemble of models. | Gerardo Roa Dabike, Michael A. Akeroyd, Scott Bannister, Jon Barker, Trevor J. Cox, Bruno Fazenda, Jennifer Firth, Simone Graetzer, Alinka Greasley, Rebecca R. Vos, William M. Whitmer | 2023-10-05T11:46:32Z | http://arxiv.org/abs/2310.03480v2 | # The Cadenza ICASSP 2024 Grand Challenge
###### Abstract
The Cadenza project aims to enhance the audio quality of music for individuals with hearing loss. As part of this, the project is organizing the ICASSP SP Cadenza Challenge: Music Demixing/Remixing for Hearing Aids. The challenge can be tackled by decomposing the music at the hearing aid microphones into vocals, bass, drums, and other components. These can then be intelligently remixed in a personalized manner to improve audio quality. Alternatively, an end-to-end approach could be used. Processes need to consider the music itself, the gain applied to each component, and the listener's hearing loss. The submitted entries will be evaluated using the intrusive objective metric, the Hearing Aid Audio Quality Index (HAAQI). This paper outlines the challenge.
Gerardo Roa Dabike\({}^{1}\), Michael A. Akeroyd\({}^{2}\), Scott Bannister\({}^{3}\), Jon Barker\({}^{4}\), Trevor J. Cox\({}^{1}\), Bruno Fazenda\({}^{1}\), Jennifer Firth\({}^{2}\), Simone Graetzer\({}^{1}\), Alinka Greasley\({}^{3}\), Rebecca Vos\({}^{1}\), William Whitmer\({}^{2}\) University of Salford\({}^{1}\), University of Nottingham\({}^{2}\), University of Leeds\({}^{3}\), University of Sheffield\({}^{4}\) [email protected]
## 1 Introduction
According to the World Health Organization (WHO) [1], 430 million people worldwide experience disabling hearing loss, and this number is projected to increase to 1 in 10 people by 2050.
Hearing loss tends to make sounds duller due to losses at high frequencies. It also makes it harder to pick out sounds from a mixture, such as the melody line from a band. This can make music less enjoyable and risks people disengaging from listening and creating music.
68% of users report difficulties when listening to music through their hearing aids [2]. The signal statistics of speech and music are different and the default settings on hearing aids are optimized for speech [3]. Hearing aids can have programs for music, but their effectiveness varies. This highlights the urgent need for improved music processing.
The 'ICASSP 2024 SP Cadenza Grand Challenge' (CADICASSP24)1 is being run to help address this need.
Footnote 1: [https://cadenzachallenge.org/](https://cadenzachallenge.org/)
## 2 Challenge Description
Someone with a hearing loss is listening to music via their hearing aids. The task is to develop signal processing that enables personalized rebalancing of the music to enhance this experience. For instance, one can amplify the vocals relative to the rest of the band to make lyrics more intelligible. One approach to achieving this is by demixing the music and then applying gains to the separated tracks to adjust the balance when the music is downmixed to stereo. The challenge also welcomes other end-to-end approaches.
Unlike traditional demixing challenges [4, 5, 6], the signals to be processed are those captured by the hearing aid microphones at each ear when the music is replayed over stereo loudspeakers. This means the microphone signals are a combination of both the original right and left stereo signals due to cross-talk - see Figure 1. Cross-talk occurs when a signal transmitted on one channel interferes with a signal transmitted on another in close proximity. In our scenario, cross-talk is strongest at low frequencies when the wavelength is largest. Consequently, the spatial distribution of an instrument will differ in the microphone signals at the ear compared to the original left-right music signals. Therefore, stereo demixing algorithms will need to be adapted to account for this frequency-dependent alteration.
The challenge welcomes both causal and non-causal approaches. Causal, low-latency systems are needed for hearing aids, but are currently not as common as non-causal techniques.
## 3 Datasets
The music dataset is based on the MUSDB18-HQ [7] and MoisesDB [8]. MUSDB18-HQ has multitrack recordings and has been the benchmark for music source separation since 2019. It has 150 recordings spanning various styles (Table 1). For each song, MUSDB18-HQ provides the stereo mixture and isolated stems for vocals, bass, drums, and other (VDBO). The dataset is divided into 100 songs for training and 50 for evaluation.
MoisesDB is a new dataset that includes 240 previously unreleased multitrack audio recordings for source separation, expanding beyond the four VDBO stems. However, it can be adapted to the VDBO structure. Currently, it does not have a predefined split between training, validation, and testing. 30 songs in the dataset had issues, such as stems from one song having varying durations. From the remaining 210 songs, we randomly selected 50 to create a validation set, ensuring that it maintained the same genre distribution as the test set in the MUSDB18-HQ. Table 1 shows a summary of the songs per genre on each set. The genres are assigned differently between the datasets. In MoisesDB rock and pop are separated, whereas in MUSDB18-HQ there is a compound genre called _Pop/Rock_.
Head-Related Transfer Functions (HRTF) model the sound propagation from the stereo loudspeakers in an anechoic chamber to the hearing aid microphones. It is important to note that the time it takes for a signal to reach the left and right microphones is not the same, resulting in some delay, especially for the microphone farther from the source. In Figure 1, this delay is depicted by the varying lengths of the arrows originating from one speaker to each microphone.
The HRTFs are a subset of the Universitat Oldenburg Hearing Device Head-related Transfer Functions dataset (OIHeaD-HRTF) [9]. We use HRTFs for six speaker locations positioned at \(\pm 22.5^{\circ}\), \(\pm 30.0^{\circ}\), and \(\pm 37.5^{\circ}\) from the listener, resulting in nine possible combinations allowing for some asymmetry. For instance, if the left speaker is at \(-22.5^{\circ}\), the right speaker can be positioned at either \(22.5^{\circ}\), \(30.0^{\circ}\), or \(37.5^{\circ}\). We use HRTFs for all 16 human subjects to introduce variability due to differences in head shapes.
Figure 1: Cross-talk music signal
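As a rough, self-contained illustration of the cross-talk described above (and not the challenge's official signal-generation code), the sketch below forms one hearing-aid microphone signal by convolving each loudspeaker feed with a head-related impulse response and summing; the HRIR arrays are placeholders.

```
// Schematic sketch of cross-talk at one hearing-aid microphone (illustrative only;
// in the challenge the impulse responses come from the OIHeaD-HRTF subset).
final class CrossTalkSketch {
    // Naive time-domain convolution, truncated to the input length.
    static float[] convolve(float[] x, float[] h) {
        float[] y = new float[x.length];
        for (int n = 0; n < x.length; n++) {
            float acc = 0f;
            for (int k = 0; k < h.length && k <= n; k++) {
                acc += h[k] * x[n - k];
            }
            y[n] = acc;
        }
        return y;
    }

    // Left-ear microphone = left speaker through hLeftToLeftEar
    //                      + right speaker through hRightToLeftEar (the cross-talk path).
    static float[] leftEarMic(float[] left, float[] right,
                              float[] hLeftToLeftEar, float[] hRightToLeftEar) {
        float[] direct = convolve(left, hLeftToLeftEar);
        float[] cross = convolve(right, hRightToLeftEar);
        float[] out = new float[left.length];
        for (int n = 0; n < out.length; n++) {
            out[n] = direct[n] + cross[n];
        }
        return out;
    }
}
```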
Listeners are characterised based on bilateral pure-tone audiograms that provide hearing thresholds at standardized frequencies ([250, 500, 1000, 2000, 3000, 4000, 6000, 8000] Hz) [10]. Although a broader frequency range could have been advantageous for music-related applications, we were limited to standard frequencies by the available databases and non-proprietary gain rules.
For training, we utilized 83 audiograms from the 2nd Clarity Enhancement Challenge [11] from the Clarity Project2. For validation, we selected 50 audiograms from the Jade University of Applied Sciences dataset [12]. We randomly chose the required number of audiograms to maintain the same distribution per frequency band as in the original Clarity dataset. In the evaluation phase, we used the same 52 audiograms that were employed in the First Cadenza Challenge (CAD1) [13].
Footnote 2: [https://claritychallenge.org](https://claritychallenge.org)
## 4 Baseline
Figure 2 is a schematic of the baseline. A scene generator (blue box) randomly generates scene characteristics, which include selecting the music track, choosing HRTFs characterized by the loudspeaker locations, and selecting one of the 16 subjects from the OIHeaD-HRTF dataset. Additionally, it determines the gains to be applied to each stem. The listener characteristics are provided as metadata by the green oval.
The music enhancement stage (pink box) takes the music captured by the hearing aids' microphones as inputs. An out-of-the-box audio source separation system is employed to estimate the VDBO components. Subsequently, target gains are applied to each component before downmixing to stereo. The resulting signal is the 'processed signal', which is then evaluated for audio quality using the intrusive metric Hearing-Aid Audio Quality Index (HAAQI) [14]. The reference signal for HAAQI corresponds to the rebalanced signal using the ground truth VDBO, along with the same HRTFs and gains applied during the enhancement stage.
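The rebalancing step itself reduces to a gain-and-sum operation. The following is a minimal sketch under our own assumptions about the data layout (stereo stems as `float[stem][channel][sample]`); it is not the official baseline code.

```
// Minimal sketch (not the official baseline): apply target gains to the separated
// VDBO stems and downmix back to stereo to obtain the processed signal.
final class RebalanceSketch {
    static float[][] rebalance(float[][][] stems, float[] gains) {
        int channels = stems[0].length;
        int samples = stems[0][0].length;
        float[][] mix = new float[channels][samples];
        for (int s = 0; s < stems.length; s++) {        // vocals, drums, bass, other
            for (int c = 0; c < channels; c++) {
                for (int n = 0; n < samples; n++) {
                    mix[c][n] += gains[s] * stems[s][c][n];
                }
            }
        }
        return mix;
    }
}
```

The same operation applied to the ground-truth stems yields the HAAQI reference signal.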
Two baseline systems are proposed. The first one uses the Hybrid Demucs model [15], which employs a U-Net architecture that combines the advantages of both time-domain and spectrogram-based audio source separation. The second baseline uses the OpenUnmix model [16], which is a purely spectrogram-based approach and served as the baseline for the SiSEC 2018 challenge [4].
### Baseline Results and Analysis
Results on the validation set for both baselines are presented in Table 2. The Hybrid Demucs system achieved slightly higher HAAQI scores of 0.6677 \(\pm\) 0.1600 compared to 0.5963 \(\pm\) 0.1429 obtained by OpenUnmix. The presence of cross-talk likely contributed to both systems performing poorly in separating low-frequency components, such as the bass, which in turn affected the rebalancing of the music.
## 5 Conclusions and Future Work
The Cadenza project is hosting the ICASSP SP Cadenza Challenge about music for hearing aids. This challenge involves a music rebalancing scenario that can be addressed using demixing/remixing techniques or end-to-end approaches. Both causal and non-causal approaches are encouraged. For those using demixing, the challenge differs from past competitions by processing the signals picked up on hearing aid microphones, which have cross-talk between the original left and right music signals.
## 6 Acknowledgements
The Cadenza project is supported by the Engineering and Physical Sciences Research Council (EPSRC) [grant number: EP/W019434/1].
We thank our partners: BBC, Google, Logitech, RNID, Sonova, Universitat Oldenburg.
|
2305.07450 | Accelerating Java Ray Tracing Applications on Heterogeneous Hardware | Ray tracing has been typically known as a graphics rendering method capable
of producing highly realistic imagery and visual effects generated by
computers. More recently the performance improvements in Graphics Processing
Units (GPUs) have enabled developers to exploit sufficient computing power to
build a fair amount of ray tracing applications with the ability to run in
real-time. Typically, real-time ray tracing is achieved by utilizing high
performance kernels written in CUDA, OpenCL, and Vulkan which can be invoked by
high-level languages via native bindings; a technique that fragments
application code bases as well as limits portability.
This paper presents a hardware-accelerated ray tracing rendering engine,
fully written in Java, that can seamlessly harness the performance of
underlying GPUs via the TornadoVM framework. Through this paper, we show the
potential of Java and acceleration frameworks to process in real time a compute
intensive application. Our results indicate that it is possible to enable real
time ray tracing from Java by achieving up to 234, 152, 45 frames-per-second in
720p, 1080p, and 4K resolutions, respectively. | Vinh Pham Van, Juan Fumero, Athanasios Stratikopoulos, Florin Blanaru, Christos Kotselidis | 2023-05-01T13:10:03Z | http://arxiv.org/abs/2305.07450v1 | # Accelerating Java Ray Tracing Applications on Heterogeneous Hardware
###### Abstract.
Ray tracing has been typically known as a graphics rendering method capable of producing highly realistic imagery and visual effects generated by computers. More recently the performance improvements in Graphics Processing Units (GPUs) have enabled developers to exploit sufficient computing power to build a fair amount of ray tracing applications with the ability to run in real-time. Typically, real-time ray tracing is achieved by utilizing high performance kernels written in CUDA, OpenCL, and Vulkan which can be invoked by high-level languages via native bindings; a technique that fragments application code bases as well as limits portability.
This paper presents a hardware-accelerated ray tracing rendering engine, fully written in Java, that can seamlessly harness the performance of underlying GPUs via the TornadoVM framework. Through this paper, we show the potential of Java and acceleration frameworks to process in real time a compute intensive application. Our results indicate that it is possible to enable real time ray tracing from Java by achieving up to 234, 152, 45 frames-per-second in 720p, 1080p, and 4K resolutions, respectively.
FPGAs, High-Performance, Java, JIT compilation, Optimizations
© 2023 Association for Computing Machinery.
programmability require adoption of specialized low-level programming languages, such as OpenCL (Peters et al., 2017) or CUDA (Peters et al., 2018), in which deep understanding of architectural details in relation to the parallel programming models can result in increasing development expenses and time-to-market (Beng et al., 2019). Besides, the portability of accelerated, high-performance code is limited due to the re-factorizations needed whenever newer generations of devices are introduced, leading to additional maintenance costs (Steiner et al., 2019).
High-level programming languages, such as Java and Python, were designed to abstract low-level concepts, but have been generally avoided for heterogeneous programming due to their lack of native support for hardware acceleration. State-of-the-art attempts at high-level, GPU-accelerated ray tracing, such as Mambo Tracer (Mambo et al., 2017) and Python RTX (Peters et al., 2018), only work through invocations of manually implemented OpenCL or CUDA code to perform heavy computations. As alternatives, Aparapi (Aparapi, 2017), Marawacc (Mambo et al., 2017; Peters et al., 2018), and TornadoVM (TornadoVM, 2018) are parallel programming frameworks designed to exploit the computational power of heterogeneous hardware (e.g., GPUs) in a transparent manner.
In this work, we explore a high-level approach to implement hardware accelerated ray tracing with the potential to massively outperform CPU-based implementations and run in real-time, while written entirely in Java. Our open-source implementation1 does not contain any low-level programming or platform-specific optimizations, and it is fully implemented in Java using the TornadoVM APIs. More specifically, we develop a ray tracing application from the ground up, implementing the entire rendering pipeline and synthesizing a scene containing primitive shapes with displays of ray traced reflections and shadows. In detail, this paper makes the following contributions:
Footnote 1: [https://github.com/Vinhixus/TornadoVM-Ray-Tracer](https://github.com/Vinhixus/TornadoVM-Ray-Tracer)
1. It develops an interactive rendering engine that produces scenes with ray traced optical effects in real-time fully implemented in Java.
2. It analyzes how TornadoVM allows users to exploit the high-performance heterogeneous hardware at high-level, and accelerates the computations of ray tracing algorithms by transparently using commodity GPUs.
3. It performs a performance evaluation of the proposed ray tracing engine across different types of accelerators and frame sizes, showcasing real-time rendering of 234, 152, 45 frames-per-second in 720p, 1080p, and 4K resolutions respectively.
The remainder of the paper is structured as follows: Section 2 introduces the concept of ray tracing and hardware acceleration focusing on TornadoVM. Section 3 describes the design of our framework and explains the operations used to process, in real time, shadows, textures and reflections. Section 4 presents the performance evaluation and, finally, Sections 5 and 6 present the related work and conclusions, respectively.
## 2. Background
In computer graphics, _rendering_ is the process responsible for taking a three dimensional _scene_ containing various geometric objects and projecting them based on a viewing perspective onto a 2-dimensional representation to be displayed on monitor screens (Peters et al., 2018). The main objective of rendering is to produce an accurate recreation of how these 3D objects would appear within the same environment in real life.
### Rendering techniques
As follows, we describe the main components of a ray tracer application and common techniques.
#### 2.1.1. The appearance of the real physical world
Light is probably the most important aspect of the physical appearance in the real world. Light consists of a set of light rays, a wave-like stream of photon particles that carry color information in wavelengths, that originate from sources such as a lamp or the sun. These light rays bounce around the environment, colliding with objects, where a number of interactions may occur depending on the shape and material of the object surface (Steiner et al., 2019):
_Absorption._ The object may absorb the light, terminating its progress; 100% absorption, however, does not exist in nature; realistically, only a percentage of the light is absorbed, depending on the material.
_Reflection._ The light may get reflected in a number of directions. A fully reflective surface, such as a mirror, reflects all of the light in one direction symmetrical to the incoming light ray, whereas a diffuse surface such as plastic could disperse the light in many directions with varying intensities.
_Refraction._ In the case that the object is translucent, such as a glass or a body of water, a percentage of the incoming light ray may pass through the surface, allowing for the object to produce a see-through effect.
_Fluorescence._ Some objects, such as the mineral rock ruby, may absorb the energy of the light ray and re-emit it with different properties, such as a different color.
Eventually, after a number of interactions, a light ray may end up at our eyes, where a set of photo-receptors detect the wavelength of the ray, which our brain processes as a color, producing an image which results in a perception of the world (Steiner et al., 2019).
#### 2.1.2. An overview of ray tracing
While modelling the exact real world behavior of light would be the key to rendering photo-realistic images, it quickly becomes apparent that simulating and tracing every light ray from those starting off from the light sources to the ones generated at every interaction with objects produces an overwhelming amount of rays do not reach an observer and consequently do not contribute to the final image. To eliminate the redundant computations, researchers over the years have made efforts to identify the rays within a scene that contribute to the final picture and separate them from the ones that can be omitted. The first idea that built the basis of ray tracing was proposed by Arthur Appel in 1968 (Arthur Appel, 1968). Since the only light rays of interest are the ones reaching the eye of the observer, Appel suggested to generate rays starting from the view point and trace its path and behavior backwards, essentially reversing the entire real-life process. Appel used ray tracing to determine primary visibility, in other words to find the closest surface to the camera at each pixel, and trace secondary _shadow rays_ to the light source to define whether the point was in a cast shadow or not (Figure 2).
Not long after, in 1971 Robert A. Goldstein and Roger Nagel (Goldstein and Roger Nagel, 1972) extended the works of Appel by not only producing cast shadows,
but also computing the surface normal (the normal vector at a given surface point) at an intersection point to determine the direction the surface is facing. Knowing the position of the light source, it could then be established whether the given surface point faces towards or away from the light, according to which the brightness of the color could be adjusted to produce a shaded effect. In 1980, Turner Whitted (2007) proposed the modelling of light bounces from the interactions outlined in the previous section (2.1.1). Whitted continued the works of Appel, Goldstein and Nagel by generating secondary _reflection and refraction rays_ at intersection points, which were traced throughout the scene to compute colors, producing the effects of reflections and translucent objects. This process is recursively continued until the light ray exits the scene or runs out of energy.
Turner Whitted's work became the baseline form of modern ray tracing techniques, known as the _Classical Whitted-style Ray-Tracer_(Miller, 1998), which modelled enough of the real-world behavior of light to produce extremely realistic visuals.
#### 2.1.3. The parallel nature of ray tracing
One extremely advantageous property of ray tracing is that it is highly parallelizable. This is due to the fact that the entire process that obtains the color for one pixel is completely independent from another pixel. These kinds of algorithms where there is little to no dependency or requirement for communication between parallel tasks are called _embarrassingly parallel_; a property that makes ray tracing an ideal candidate for execution on hardware accelerators such as GPUs.
### GPU Acceleration of Ray Tracing
The increase in computational capacity of commodity GPUs has resulted in real time ray tracing to become pervasive across modern GPUs. Both major GPU manufacturers, NVIDIA and AMD, offer ray tracing capabilities in off-the-shelf GPUs that typically utilize dedicated ray tracing cores in GPUs. The main target market is gaming which has a baseline requirement the achievement of over 30 frames per second while ray tracing at 1080p or 4K resolutions. Both AMD and NVIDIA offer Software Development Kits (SDKs) and libraries that developers can use to access the ray tracing capabilities of supporting GPUs. These libraries and SDKs are typically implemented in CUDA (Krishnan et al., 2017), Vulkan (2017), OpenCL (Krishnan et al., 2017), and OpenGL (Krishnan et al., 2017) and programmers can utilize those via native bindings to their programming languages of choice (Krishnan et al., 2017; Krishnan et al., 2017).
#### 2.2.1. Ray Tracing in Java
The Java programming language, although not typically associated with game development, has found tremendous success in the form of the Java-based Minecraft game. To improve graphics and utilize ray tracing, independent developers and companies have started providing "mods" that enable high-fidelity ray tracing. These additions are typically developed in heterogeneous programming languages (e.g. CUDA and OpenGL), and are integrated inside the JVM via native bindings through the Java Native Interface (JNI) sacrificing performance in the process. Since, the Java Virtual Machine, does not natively support GPU (or any from of) acceleration, the only way to enable such high-performance graphics functionality is to fragment the code base by mixing Java code with other programming languages; a fact that contradicts the "write-once-run-everywhere" premise of Java.
Recently, however, significant efforts have been made in order to augment the JVM with capabilities to automatically and transparently accelerate Java code on heterogeneous hardware accelerators such as GPUs, FPGAs, and others. Prime examples of such frameworks are TornadoVM (Krishnan et al., 2017), Aparapi (2017), and IBM J9 (2017) which offer different degrees of functionality and JVM interoperability. Since these frameworks enable the automatic acceleration of Java code on GPUs, a natural question that may arise is "_Can we implement a complex ray tracer fully in Java and achieve real-time performance?_"; a question that this paper endeavors to answer.
### TornadoVM Background
In this work, we utilize the TornadoVM framework (Krishnan et al., 2017; Krishnan et al., 2017) to design and build a real time GPU-accelerated ray tracer fully in Java; mainly due to its proven capabilities to achieve high performance graphics applications (Krishnan et al., 2017). TornadoVM offers an API that allows developers to identify which methods and loops to parallelize through a set of annotations, such as the _@Parallel_ annotation. As an example, lines 1-4 of the code snippet shown in Listing 1 illustrate the use of the annotation on a simple method performing an element-wise addition of integer arrays \(a\) and \(b\) (adapted from (Krishnan et al., 2017)).
```
public void add(int[] a, int[] b, int[] c) {
  for (@Parallel int i = 0; i < c.length; i++) {
    c[i] = a[i] + b[i];
  }
}

public void compute(int[] a, int[] b, int[] c) {
  TaskSchedule ts = new TaskSchedule("s0");
  ts.streamIn(a, b);
  ts.task("t0", this::add, a, b, c);
  ts.streamOut(c);
  ts.execute();
}
```
Listing 1: Example in TornadoVM.
Listing 2: Java snippet used to illustrate the FPGA code generation.
Figure 2. Arthur Appel’s ray tracing basics.
for (@Parallel int i = 0; i < n; i++) {
  for (int j = 0; j < m; j++) {
    // computation
  }
}
To execute such a method with TornadoVM, instead of simply invoking the method, the user should instantiate a TaskSchedule object, which acts as the executor service for the routine. A task can then be defined using a name, a reference to the target method as well as the data it operates on (essentially the arguments of the function). Additionally, as hardware accelerators usually contain their own memory, the user may set up which data is transferred between the host and the hardware accelerators:
* The streamIn() function allows the user to indicate the memory region that gets copied to the accelerator at every invocation of the task. Data that is required by the method but is not marked with streamIn() is copied to the target device at the very first call, where it persists for subsequent executions of the method.
* The streamOut() function allows the user to indicate the memory region that gets copied from the accelerator back to the host device (the main CPU).
Lines 6-10 of Listing 1 show how a task may be composed for the previously mentioned add method. Once the task, alongside its TaskSchedule is set up, the method can be invoked using an execute() call. For each task, TornadoVM generates low-level code with device-specific optimizations for hardware accelerators, that perform identical computations to the generic high-level method, which is then executed on heterogeneous hardware to exploit parallel execution. In the next section, we describe the design and implementation of a Java ray tracer using TornadoVM.
## 3. Design of the Java ray tracer
This section shows the design of the proposed Java framework for real-time ray tracing processing. We outline the steps and algorithms involved to compute a scene using ray tracing techniques, highlighting the most compute-intensive parts of the application which are accelerated by TornadoVM.
### Camera & perspective
The _camera_ is the view point from which the scene is perceived, often referred to as the _eye_. There are four major parameters that describe the camera:
**Position**: The location defined by x, y, and z coordinates
**Yaw**: Angle describing horizontal rotation
**Pitch**: Angle describing vertical rotation
**FOV**: Angular extent of a given scene that is imaged
The position of the camera defines where the user is looking from, while the yaw and pitch values define the direction the user is looking towards. The field-of-view (FOV) is the angle that defines how wide the view spectrum is. Having a single-point camera allows for a _perspective_ view of the scene similarly to how humans see the physical world, where objects that are further away from the observer appear smaller than those closer. This provides an illusion of _depth_, which allows for a 2-dimensional image to gain the additional information for representing 3-dimensional objects.
### Window & viewport
The _window_ is the rectangular area on the screen that displays the results of the rendering process. This is a physical array of pixels defined by a resolution denoting the width and the height of the window. A pixel on a window can be defined using _screen coordinates_ (_x_, _y_) (for instance, in a window of resolution \(1280x720\), the center-most pixel has coordinates of \(640x360\)).
The _viewport_ is the rectangular region placed in front of the camera within the Cartesian coordinate system of the scene that represents the range or area currently being viewed. While the pixels on the window are represented by screen coordinates, the equivalent points on the viewport takes the form of _normalized device coordinates_ (_NDC_) (Steintein et al., 2009) in the range of [-1, 1], [-1, 1]. Thus, the viewport is placed inside of a square area of which each side is of size of 2 with the middle-point located at exactly \((0,0)\) (See Figure 3). The window and the viewport relate to each other with a one-to-one mapping of pixel coordinates to NDC coordinates, computed using a simple normalization function (Listing 3).
#### 3.2.1. Relative camera placement
To ensure that the camera view encapsulates the entire viewport, the camera is placed at a fixed distance from the viewport according to the field-of-view angle. Figure 4 shows a graphical representation of the geometrical computation that allows us to define the relative position of the camera. Note that the green line denotes the viewport. Besides, the width of the viewport is of 2 unit lengths, which makes half of the width, denoted by \(a\), equal to 1. Given the camera field-of-view angle \(\alpha\), it can be observed that \(a\) and \(b\) are the opposite and adjacent sides of a right-angled triangle with an angle of \(\frac{\alpha}{2}\). The distance of the
```
getNormScreenCoods(x, y):
  if width > height:
    u = (x - width/2 + height/2) / height * 2 - 1
    v = -(y / height * 2 - 1)
  else:
    u = x / width * 2 - 1
    v = -((y - height/2 + width/2) / width * 2 - 1)
  return u, v
```
Listing 3: Computation of a Normalized Device Coordinates for each pixel.
Figure 3. Mapping pixels from the window to the viewport.
camera from the viewport (denoted by \(b\)) can thus be computed as the quotient of \(a\) and the tangent of \(\frac{\alpha}{2}\):
\[b=\frac{a}{\tan(\frac{\alpha}{2})}=\frac{1}{\tan(\frac{\alpha}{2})}\]
### The primary view rays
Once the camera and the viewport are defined, then we define the initial view rays which are shot from the camera through every pixel on the viewport into the scene. This will produce a total number of width * height rays, which are independently traced around the environment to obtain a final color for the respective pixel.
#### 3.3.1. Defining the rays
Each view ray is constructed with its origin set as the position of the camera, while the direction is obtained as follows (a short code sketch is given after the list):
* The viewport is placed into the center of the coordinate system in parallel to the _xy plane_, with the center pixel lined up with the origin point \(O(0,0,0)\).
* The normalized location of the pixel in discussion is obtained as described in section 3.2. For instance, the center pixel at (640, 360) in a viewport of resolution (1280, 720) will have a location of \(P(0,0,0)\).
* The camera is placed at the location \(C(0,0,-1/tan(\frac{a}{2}))\), where \(a\) is the field of view of the camera as described in section 3.2.1.
* The relative direction vector from the camera to the pixel is acquired by subtracting the location of pixel \(P\) from the camera position \(C\) and normalizing the result.
* The resulting vector is rotated around the yaw and the pitch of the camera to obtain the final direction of the ray.
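The sketch below puts the steps of the list into code; the vector math is inlined, and the yaw/pitch rotation order and axis conventions are our own assumptions rather than the project's exact implementation.

```
// Illustrative construction of a primary view ray direction (assumed conventions:
// pitch rotates around the x-axis, yaw around the y-axis; angles in radians).
final class PrimaryRaySketch {
    static float[] rayDirection(float u, float v, float fov, float yaw, float pitch) {
        float camZ = (float) (-1.0 / Math.tan(fov / 2.0));   // camera at (0, 0, camZ)
        // Vector from the camera to the viewport point (u, v, 0), then normalized.
        float dx = u, dy = v, dz = -camZ;
        float len = (float) Math.sqrt(dx * dx + dy * dy + dz * dz);
        dx /= len; dy /= len; dz /= len;
        // Rotate around the x-axis by pitch...
        float ry = (float) (dy * Math.cos(pitch) - dz * Math.sin(pitch));
        float rzPitch = (float) (dy * Math.sin(pitch) + dz * Math.cos(pitch));
        // ...then around the y-axis by yaw.
        float rx = (float) (dx * Math.cos(yaw) + rzPitch * Math.sin(yaw));
        float rz = (float) (-dx * Math.sin(yaw) + rzPitch * Math.cos(yaw));
        return new float[]{rx, ry, rz};
    }
}
```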
### Objects in the scene
Objects in the scene within this application take the form of primitive shapes (e.g., spheres and planes) and are defined by four properties:
**Position**: The location of the object within the scene
**Size**: The spatial extent of the shape (e.g., the radius of a sphere)
**Color**: The base color of the object's surface
**Reflectivity**: An exponent value defining how shiny/reflective the object is (see Section 3.5.1)
To illustrate the effects resulting from implementing the different features of the rendering engine (outlined in the following sections), we provide a plane and three spheres for explaining the ray tracing algorithms.
#### 3.4.1. Acquiring the closest intersected object
Once a view ray is generated, it is then checked for intersections with every object in the scene and returns the index and the intersection point on the object that is the closest to the origin of the ray. In cases where no intersections are found, a plain black background color is returned, which is later replaced by a skybox image (Section 3.8). By defining and tracing the primary view rays to acquire the closest intersected object and simply returning the base color, the output shown in Figure 5 is obtained.
### Basic Shading
The following step in the implementation of the rendering engine entails an addition of a light source which provides some local illumination to the scene. Knowing the location of the light source, the direction from which light rays come from can be defined, allowing for shading effects to be computed, providing a 3-dimensional look to objects instead of the flat appearances we observe in Figure 5.
#### 3.5.1. The Phong illumination model
To achieve high realism in our scene, we employ empirical models of illumination to perform shading, which are based on real-life observations of light behavior. An efficient and realistic model widely used today in computer graphics was described by Bui Tuong Phong in 1975 (Bui Tuong Phong, 1975), which was also utilized in the paper of Turner Whitted (Thutcher, 1975). The Phong illumination model combines three main components that contribute to a final color: ambient, diffuse, and specular, as illustrated in Figure 6.
_Ambient lighting._ In a real-life scenario, light would not originate from one single source, as there might be additional illumination such as distant sunlight or moonlight that contribute to the image even if they are not directly in sight. Shadows are thus never entirely pitch black. To simulate the resulting effect, a constant ambient strength value in the range of \([0,1]\) is described as a small percentage of the object color that is always visible, regardless of
Figure 4. Camera placement relative to the viewport.
Figure 5. The result of acquiring the closest hit.
Figure 6. The components of the Phong illumination model.
whether the point is on a shaded side or not. In ray tracing, this component is especially crucial as rays would have to be traced towards every single point that emits light within an environment to compute colors otherwise.
Diffuse lightingThe diffuse component models the directional impact of light on an object, based on the simple principle that the more a surface faces away from a light source, the darker it appears. This is a result of the object itself blocking light from arriving at these surfaces. The component is defined through the dot product (the angular distance) between the normal vector of a surface point (the direction the surface is facing), and the direction vector traced from the surface point to the light (the direction the light rays are coming from). Diffuse lighting (Dubnik et al., 2017) is the most significant component of the Phong illumination model that gives objects a 3-dimensional shaded appearance by producing shadows on surfaces facing away from the light, commonly referred to as _form shadows_.
Specular lightingThe specular component displays a bright spot on the surfaces on shiny objects as a result of looking at reflected light beams. The more reflective a surface, the smaller and more concentrated the spot is, whereas on dull surfaces, it tends to spread over a larger area. This is where a _reflectivity_ value is defined for each object: an exponent value which defines how shiny/reflective the given object is and thus determines the spread of the specular highlight. The appearance of the spot at different levels of the reflectivity value is shown in Figure 7. The specular highlights are calculated using a reflection vector, which is obtained through reflecting the light direction around the normal vector of the respective surface point. The angle between that reflection vector and the direction to the camera defines a specular factor which when combined with the reflectivity value of the object yields the strength of the highlight. Listing 4 shows the corresponding pseudocode. The result is a value obtained per pixel to enable an effect similar to the right-hand side of Figure 6.
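Listing 4 itself is not reproduced in this text; as a stand-in, the sketch below shows one way the described specular factor can be computed (our own helper names and vector conventions: `lightDir` travels from the light towards the surface, `viewDir` points from the surface towards the camera).

```
// Hedged sketch of the Phong specular term described above (not the paper's Listing 4).
final class SpecularSketch {
    static float dot(float[] a, float[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    // Reflect the incident light direction around the (unit) surface normal.
    static float[] reflect(float[] lightDir, float[] normal) {
        float d = 2f * dot(lightDir, normal);
        return new float[]{lightDir[0] - d * normal[0],
                           lightDir[1] - d * normal[1],
                           lightDir[2] - d * normal[2]};
    }

    // Angle between the reflection vector and the view direction, sharpened by the
    // object's reflectivity exponent, gives the strength of the highlight.
    static float specularStrength(float[] lightDir, float[] normal, float[] viewDir,
                                  float reflectivity) {
        float factor = Math.max(dot(reflect(lightDir, normal), viewDir), 0f);
        return (float) Math.pow(factor, reflectivity);
    }
}
```

With Blinn's revision (Section 3.5.2), the factor would instead be the dot product between the surface normal and the normalized halfway vector between the light and view directions.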
#### 3.5.2. Blinn's revision of specular highlights
Acquiring the strength of the specular highlight from the angle between the reflection vector and the view direction contains a small flaw: if a surface point is further away from the light source, then the reflection vector ends up pointing far away into the distance. This entails that a set points on object surfaces that are behind the light with respect to the camera, produces reflection vectors with angles larger than 90deg degrees to the view direction. In these instances the dot products evaluate to be negative, resulting in no specular highlights. This cutoff is illustrated in Figure 8.
To solve this issue, James Blinn designed a modification to the Phong specular highlights in 1977 (Blinn, 2017), proposing to instead take the dot product between the normal of the surface and the halfway vector between the light and the view directions to keep the angle under 90deg degrees. This allows the specular-highlights to spread correctly along surfaces. Thus, the final result of applying all three components of the combined Blinn-Phong illumination model to the objects in the scene, is shown in Figure 8(a).
### Cast shadows
While the Blinn-Phong illumination model has allowed for form shadows to be computed for every individual object in the scene, surfaces may also experience shading resulting from other objects blocking the light, producing _cast shadows_(Biannini et al., 2017). The accuracy of cast shadows is one of the main strengths of ray tracing: By generating a _shadow ray_ from a surface point to the light source, the ray can be checked for an intersection with the remaining objects to evaluate whether anything is occluding the surface from being lit.
When a surface point is evaluated to be under the effects of a cast shadow, the color of the surface is multiplied with the ambient strength of the scene, resulting in a darker area. Figure 8(b) displays the result of computing cast shadows. Listing 5 shows a code snippet to identify if an object is in a cast shadow.
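Listing 5 is likewise not included here; the sketch below illustrates the shadow-ray test in the spirit of the description above, with the scene intersection routine left as an assumed helper.

```
// Hedged sketch of the cast-shadow test (not the paper's Listing 5); the closestHit
// helper stands in for the intersection search over all other scene objects.
final class ShadowSketch {
    interface Scene {
        // Distance to the closest hit along the ray, or a negative value if none.
        float closestHit(float[] origin, float[] direction, int ignoreBodyIndex);
    }

    static boolean inCastShadow(Scene scene, float[] surfacePoint, float[] toLightDir,
                                float distanceToLight, int hitBodyIndex) {
        float t = scene.closestHit(surfacePoint, toLightDir, hitBodyIndex);
        // Shadowed if another object lies between the surface point and the light.
        return t > 0f && t < distanceToLight;
    }
}
```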
#### 3.6.1. Soft shadows
One noticeable issue that can be observed in Figure 8(b) is the crisp, hard edges of the cast shadows, which appear inconsistent with the fading effect that diffuse lighting has produced on the form shadows. This issue has led us to take a small step away from classical Whitted-style ray tracing to produce _soft shadows_ using a stochastic sampling method. In real life, instead of coming from a single point, light rays originate from an area encompassing the volume of an emitter. The inner region of a surface in shadow, where every light ray coming from the light source is blocked and
Figure 8. The cutoff from Phong’s specular highlights.
Figure 7. The different levels of reflectivity values.
is fully shaded as a result, is called the _umbra_, while the transitional region that gets lighter towards the edge due to less and less light rays being obstructed, is called the _penumbra_ (See Figure 10). The phenomenon produces a softer and more realistic look to the cast shadows [40].
In the case of a spherical light, the area where light rays originate from can be modelled as the great circle facing the surface point in question. The effects of soft shadows are simulated by generating an \(n\) number of sample points on this circle and tracing a shadow ray to each point. The number of rays that are blocked by another object divided by the total number of sampled rays defines the brightness of the given surface point. This results in a shadow coefficient in the range of [0, 1] to be multiplied with a given surface point to darken the respective area.
Uniform sampling of points on a circleTo avoid cluttering of points during sampling, obtaining an \(n\) number of samples that cover an area with the most equal scattering possible, requires a form of uniform distribution. The algorithm utilized in this paper is based on a method known as the _sunflower seed arrangement_[45]. The model generates each point \(i\) in a total of \(n\) in the following manner:
\[r(i)=2\times radius\times\sqrt{\frac{i}{n}}\theta(i)=i\times\phi\]
where _r(i)_ is the distance from the circle's origin and \(\theta(i)\) is the angle at which point \(i\) resides, with \(\phi=\pi\times(3-\sqrt{5})\ rad\approx 137.507\) being the golden angle. Figure 11 illustrates the effects of generating different numbers of sample points, notice the improvement in quality as the sampling size increases. The final result of computing soft cast shadows with a sample size of 300 is shown in Figure 12.
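The sampling formula above translates directly into code; the sketch below (our own illustration) returns the \(n\) offsets in the plane of the sampled disc.

```
// Illustrative sunflower-seed sampling of n points, following the r(i) and theta(i)
// formulas given above; the golden angle is pi * (3 - sqrt(5)).
final class SunflowerSketch {
    static float[][] samplePoints(int n, float radius) {
        double goldenAngle = Math.PI * (3.0 - Math.sqrt(5.0));
        float[][] points = new float[n][2];
        for (int i = 0; i < n; i++) {
            double r = 2.0 * radius * Math.sqrt((double) i / n);
            double theta = i * goldenAngle;
            points[i][0] = (float) (r * Math.cos(theta));
            points[i][1] = (float) (r * Math.sin(theta));
        }
        return points;
    }
}
```

A shadow ray is then traced towards each sampled point, and the fraction of unblocked rays gives the shadow coefficient described above.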
Figure 11. Light sample sizes from top to bottom: 1, 10, 100, 500.
Figure 10. Illustration of umbra and penumbra shadows.
Figure 9. The result of applying the Blinn-Phong shading model and adding cast shadows.
### Reflections
With shading and shadows in place, the final step of the rendering engine entails the addition of color transport between scene objects through reflections, encompassing the works of John Turner Whitted (Turner, 2017). To compute reflections, when a view ray finds its first intersected object, next to the computation of shading effects and casting shadows to determine a final color value, an additional _reflection ray_ is spawned with a direction symmetrical to the incoming ray's direction (Snell's law of reflection (Snell, 2017)). The ray is then traced throughout the scene in the same manner as the initial view ray to acquire a reflection color (Figure 13). To obtain a final color, the base color of the object is mixed with the reflection color using a ratio depending on how reflective the object is. This ratio, in our case, is derived from the reflectivity value introduced during the computation of specular highlights (Section 3.5.1), by dividing the reflectivity exponent with a predefined constant _MAX_REFLECTIVITY_ value (an object with a reflectivity equal to _MAX_REFLECTIVITY_ is a fully reflective mirror-like surface).
#### 3.7.1. Recursive reflections
The aforementioned process, however, does not have to halt once the reflection ray has gathered color information at its first hit. In this event, another subsequent reflection ray may be sent out, continuing the algorithm in a recursive manner to produce reflections within reflections (Figure 13). To avoid a potentially infinite number of reflection rays being generated, a maximum depth is defined to limit the recursion to a specific number of ray bounces. Listing 6 shows a pseudocode for computing the color of a trace with depth \(N\).
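As a complement to that pseudocode, the following self-contained scalar sketch illustrates the shape of such a depth-limited recursion; localColor and mixRatio are illustrative stand-ins for the engine's shading and reflectivity lookups, not the paper's actual routines.

```
public final class RecursiveTraceSketch {
    // Stand-ins for the engine's per-hit shading result and reflectivity ratio.
    static float localColor(int depth) { return depth == 0 ? 0.8f : 0.3f; }
    static float mixRatio(int depth)   { return 0.5f; }

    // Depth-limited recursive color computation: each level blends its own shaded
    // color with the color returned by the next reflection bounce.
    public static float trace(int depth, int maxDepth) {
        float base = localColor(depth);
        if (depth >= maxDepth) {
            return base; // recursion limit reached: stop spawning reflection rays
        }
        float reflected = trace(depth + 1, maxDepth);
        float ratio = mixRatio(depth);
        return (1f - ratio) * base + ratio * reflected;
    }

    public static void main(String[] args) {
        System.out.println(trace(0, 2)); // final color after at most two bounces
    }
}
```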
Figure 14 illustrates the different results obtained at different recursion limits. We can notice that with two reflection bounces, reflections can be seen within the reflection of the sphere. Figure 15 shows the results of calculating reflections in the basic scene within our ray tracing framework.
#### 3.7.2. Iterative reflections
While using recursion to compute reflections is intuitive, TornadoVM unfortunately does not support recursive calls, since recursion is forbidden in the backend languages such as OpenCL and CUDA. Thus, as a workaround, recursive reflections were emulated using a manual stack to record color as well as shading and cast shadow information at each reflection bounce, while looping through the actions of each of the reflection rays with an iterative loop. When the reflection bounce limit is reached or a reflection ray exits the scene without an intersection, the stack is read in reverse to mix the recorded colors.
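The reverse read of the stack can be sketched as follows, again with scalar stand-ins for the recorded colors; the array contents and names are illustrative, and the result matches the recursive sketch above.

```
public final class IterativeReflectionSketch {
    // Combines the colors and mix ratios recorded at each bounce by reading the
    // "stack" in reverse, exactly as the recursive version would unwind.
    public static float mixStack(float[] localColors, float[] mixRatios, int lastBounce) {
        float color = localColors[lastBounce]; // deepest recorded color
        for (int i = lastBounce - 1; i >= 0; i--) {
            color = (1f - mixRatios[i]) * localColors[i] + mixRatios[i] * color;
        }
        return color;
    }

    public static void main(String[] args) {
        float[] colors = {0.8f, 0.3f, 0.3f}; // shaded colors recorded at bounces 0..2
        float[] ratios = {0.5f, 0.5f, 0.0f}; // reflectivity / MAX_REFLECTIVITY per bounce
        System.out.println(mixStack(colors, ratios, 2)); // prints 0.55, as in the recursive sketch
    }
}
```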
### Additional components
A number of additional components have been added to the implemented Java ray tracer including:
* **Skybox:** Distant background in the form of a High Dynamic Range (HDR) image for improving colors and reflectivity using UV mapping (Zhou et al., 2017).
Figure 14. Limit of 0 (left), 1 (middle), 2 (right) reflection bounces.
Figure 12. The result of sampling the cast shadows.
Figure 13. Generating and tracing reflection rays.
* **Physics system:** A physics system that employs _Verlet integration_ (Krishnan, 2017) for adding motion to rendered physical objects within a scene.
### TornadoVM Integration
The entire rendering process described in this section is implemented within a method named _render_ (shown in Listing 7) with the following input arguments:
* **pixels**: an array of size width x height where the computed color values for every pixel are stored upon executing the method.
* **dimensions**: a two-element array that holds the width and height of the viewport.
* **camera**: a float array of six elements containing the parameters of the camera such as the \(x,y,z\) coordinates, yaw and pitch angle of rotation, and field of view.
* **rayTracingProperties**: an array of size 2, which includes the desired soft shadow sampling size and the desired limit of reflection bounces.
* **bodyPositions, bodySizes, bodyColors and bodyReflectivities**: These four arguments encode the information about all of the objects in the scene; each object is defined using its position, size, color, and reflectivity.
* **skybox**: an array of four elements that includes the color values of every pixel of the HDRI skybox image. This is used to display the panoramic background instead of a plain color.
These arguments provide the _render_ method with all the required information to produce a final color for every pixel. As the process of acquiring a final color is independent for each pixel, the main loop iterating over the pixels within the _render_ method can be marked with the _@Parallel_ annotation (Krishnan, 2017). This instructs the TornadoVM compiler to parallelize the for loop and optimize the code depending on the target architecture (in our case a GPU). Listing 8 shows a code snippet that represents the main render method expressed in the TornadoVM API using the _@Parallel_ annotation. Since there are two loops to be parallelized, the TornadoVM compiler will generate an optimized 2D kernel.
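As a complement to Listing 8, the sketch below shows the general shape of such a doubly-annotated pixel loop; the class name, parameter list, and the stubbed tracePixel helper are illustrative assumptions rather than the project's actual code, and the import assumes the TornadoVM API (as used in the version listed in Table 1) is on the classpath.

```
import uk.ac.manchester.tornado.api.annotations.Parallel;

public final class RendererSketch {
    // Both pixel loops carry the @Parallel annotation so that the TornadoVM JIT
    // compiler can map them onto a 2D kernel on the target device.
    public static void render(int[] pixels, int[] dimensions, float[] camera,
                              int[] rayTracingProperties, float[] bodyPositions) {
        int width = dimensions[0];
        int height = dimensions[1];
        for (@Parallel int x = 0; x < width; x++) {
            for (@Parallel int y = 0; y < height; y++) {
                pixels[x + y * width] = tracePixel(x, y, camera, rayTracingProperties, bodyPositions);
            }
        }
    }

    // Stub standing in for the full per-pixel pipeline (ray generation, shading,
    // soft shadows, and reflections described earlier in this section).
    private static int tracePixel(int x, int y, float[] camera, int[] props, float[] bodies) {
        return 0xFF000000; // opaque black placeholder
    }
}
```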
To execute the code, a _TaskSchedule_ is defined as shown in Listing 9. The arguments camera, rayTracingProperties and bodyPositions are passed to the streamIn() operator to facilitate the user changing viewpoints and rendering parameters, as well as updating the position of each object so that the physics system can move objects around in the scene. On the other hand, the pixels argument is passed to the streamOut() operator so that the results of the render method are copied back to the host device and its memory for display.
The ray tracing application has been entirely written in Java and the JavaFX framework (Krishnan, 2017) using Canvas and PixelWriter. The _JavaFX Canvas_ is an image area defined by _width_ and _height_ that can be used for drawing, while the _JavaFX PixelWriter_ is an interface that defines methods which allow for pixel data to be displayed on the canvas. The JavaFX GUI updates the UI elements at fixed intervals, usually at the refresh rate of the user's monitor
```
TaskSchedule ts = new TaskSchedule("$0");
ts.streamIn(camera, rayTracingProperties, bodyPositions);
ts.task("$0", Renderer::render, pixels, dimensions, camera, rayTracingProperties, bodyPositions, bodySizes, bodyColors, bodyReflectivities, skybox, skyboxDimensions);
ts.streamOut(pixels);
```
Listing 9: TaskSchedule composition for accelerating the Java render method with TornadoVM.
Figure 15. The result of calculating reflections.
screen, which allows the interface to respond to user input, producing an interactive experience. A feature called AnimationTimer is provided with the JavaFX suite, which allows the user to define a subroutine inside a handle method that gets called at every update; it is frequently used to create animations and game loops. We make use of this feature to invoke TornadoVM and render the canvas with the updated pixels.
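A condensed, self-contained sketch of this frame loop is given below; the wiring of the TaskSchedule, pixel buffer, and PixelWriter into a helper class is our own illustrative arrangement, not the application's actual code.

```
import javafx.animation.AnimationTimer;
import javafx.scene.image.PixelFormat;
import javafx.scene.image.PixelWriter;
import uk.ac.manchester.tornado.api.TaskSchedule;

public final class FrameLoop {
    // Re-executes the accelerated render task on every UI update and copies the
    // streamed-out pixel buffer onto the JavaFX canvas via its PixelWriter.
    public static void start(TaskSchedule ts, PixelWriter pixelWriter,
                             int[] pixels, int width, int height) {
        new AnimationTimer() {
            @Override
            public void handle(long now) {
                ts.execute(); // run the render TaskSchedule; pixels are copied back via streamOut
                pixelWriter.setPixels(0, 0, width, height,
                        PixelFormat.getIntArgbPreInstance(), pixels, 0, width);
            }
        }.start();
    }
}
```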
## 4. Evaluation
We performed the performance evaluation by utilizing the OpenCL backend of TornadoVM, which has coverage for NVIDIA GPUs, Intel integrated GPUs, and Intel CPUs, among other devices. The evaluation uses the following two baselines: a) a sequential execution of the Java code, and b) a parallel implementation using Java Parallel Streams. The accelerated versions of the ray tracer have been run using the OpenCL runtime on a multi-core CPU, an integrated GPU, and a dedicated GPU.
### Evaluation Setup
The specifications of the system used to run the tests are shown in Table 1. Regarding the evaluation methodology, we performed a warm-up phase of 100 frames when running in benchmark mode (terminal), and a warm-up of 2 minutes of execution when running the GUI that renders the scene in real time. After the warm-up phase, we report the average of ten consecutively executed frames.
_Setting up a benchmark scene._ To ensure consistency between setups when recording performance with different settings, a default benchmarking scene was hard-coded with one light source, one plane, and five spheres, as shown in Figure 16. Furthermore, we evaluated the ray tracer with three different canvas sizes: a) 720p (1280 x 720), b) 1080p (1920 x 1080), and c) 4K (3840 x 2160).
### Overall real-time performance
To assess performance we compute the average number of frames rendered each second (FPS) during runtime. This is achieved by recording timestamps using _System.nanoTime()_ at the start and end of each iteration of the render loop to calculate the elapsed time for each frame being synthesized.
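A compact version of this instrumentation is sketched below (illustrative code, not the project's actual measurement class).

```
public final class FpsMeter {
    private long lastFrame = System.nanoTime();

    // Call once per rendered frame; returns the instantaneous frames-per-second
    // derived from the elapsed time since the previous call.
    public double tick() {
        long now = System.nanoTime();
        double fps = 1.0e9 / (now - lastFrame);
        lastFrame = now;
        return fps;
    }

    public static void main(String[] args) throws InterruptedException {
        FpsMeter meter = new FpsMeter();
        Thread.sleep(16); // pretend a frame took ~16 ms
        System.out.printf("%.1f FPS%n", meter.tick());
    }
}
```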
Figure 17 depicts the average FPS of running the application with a shadow sample size of 1 (hard shadows) as well as a reflection bounce limit of 1. These are the parameters of a fully classical Whitted-style ray tracer.
It can immediately be observed that the fully sequential execution of the application produces frame rates lower than 2 FPS at 720p and below 1 FPS for higher resolutions, which are unsuitable for real-time use cases (typically over 30 FPS). The discrete NVIDIA RTX 2060 GPU, however, dominated this performance test with a 100-200x performance increase compared to the sequential runs, achieving frame rates 2-4x above the 60 FPS mark. Accelerating the application on the CPU via OpenCL outperforms the sequential runs by 10-20x, while being 50% slower than the integrated GPU.
### Obtaining ideal rendering parameters
To achieve eye-pleasing visuals of the ray tracer, without compromising performance, the following rendering parameters have been examined:
_Soft shadow sampling._ The most performance-intensive setting proved to be the shadow sample size, the only parameter where our implementation diverged from classical Whitted-style ray tracing and adopted a distributed method for rendering soft shadows. Unsurprisingly, as every sampled shadow ray has to be checked for an intersection with every object in the scene, the number of expensive computations rapidly grows as we increase the sample size (Figure 18(a); the numbers in this chart are obtained with a reflection bounce limit of one).
\begin{table}
\begin{tabular}{|l|c|} \hline
**Operating System** & Arch Linux x86\_64 \\
**Desktop Environment** & Gnome 42.0 \\
**Host Device** & XPS 13 9370 \\
**CPU** & Intel i7-8550U @ 4.0Ghz \\
**Integrated GPU** & Intel UHD Graphics 620 \\
**Dedicated GPU** & NVIDIA GeForce RTX 2060 \\
**JDK** & graalvm-ce-java11-21.3.0 \\
**TornadoVM Version** & 0.14-dev \\
**TornadoVM Backends** & OpenCL \& PTX \\ \hline \end{tabular}
\end{table}
Table 1. The experimental hardware and software specifications of the system.
Figure 16. Benchmark scene with a skybox (Becker et al., 2017).
Figure 17. FPS achieved on different hardware.
As a middle ground, however, a shadow sample size of 200 proved to be enough to produce shadows without noticeable jagged edges, while still achieving over 60 FPS.
#### 4.3.1. Reflection bounce limit
The reflection bounce limit was, as expected, found to be highly dependent on the shadow sample size, since the shadows within reflections are computed in the same manner as for any initial view ray. This means that increasing the reflection bounce limit spawns more and more shadow rays, leading to a more rapid performance decrease when the shadow sample size is high. Looking at Figure 18(b), we can observe that with a shadow sample size of 200, the performance steadily declines as the number of reflection bounces grows. Visually, however, more than three reflection bounces were found to contribute a negligible amount to the final image, while causing the average frame rate to dip under 60 FPS.
### Speedup of the rendering process
The full graphical application contains a number of overheads outside of the rendering pipeline that contribute to the final recorded performance, such as the efficiency of the PixelWriter and the overheads of JavaFX. Thus, as a final evaluation, a non-GUI benchmark is set up to isolate and record the execution time of the _Renderer::render_ method by calling the function in isolation. This allows a speedup analysis of the rendering pipeline by itself, showcasing the performance improvements achieved through the use of TornadoVM.
The non-GUI benchmark uses the same scene as the GUI evaluation; however, the parameters for the shadow sample size and reflection bounce limit are set to 200 and 3, respectively, as the ideal selected values. The benchmark simply places the render function inside a loop of 100 iterations to generate 100 frames. The execution time of the loop is then divided by 100 to obtain the time taken to render one frame. By doing this exercise, we observe speedups that range from 19.5x (720p - CPU/Integrated GPU) to 796x (4K - Dedicated GPU) compared to the Java Parallel Stream implementation.
## 5. Related Work
Since the first introduction of ray tracing (Beng et al., 2017), several implementations of different levels of complexity (Sutton et al., 2017; Wang et al., 2017) have been proposed throughout the years that utilize various data structures for improving the algorithmic performance (Sutton et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017). In addition, several studies that both assessed the performance of accelerated ray tracing algorithms(Sutton et al., 2017) and proposed optimizations have been done over the years (Beng et al., 2017; Wang et al., 2017; Wang et al., 2017). Regarding commercial implementations, both NVIDIA and AMD offer ray tracing capabilities with dedicated ray tracing (RT) cores for acceleration (Beng et al., 2017; Wang et al., 2017). Additionally, depending on the underlying algorithmic implementation of ray tracing, several hardware extensions have been proposed to accelerate various parts of the compute pipeline (Wang et al., 2017).
In programming terms, the majority of commercial ray tracing support is developed using heterogeneous frameworks such as CUDA (Wang et al., 2017), OpenGL (Liu et al., 2017), OpenCL (Liu et al., 2017), and Vulkan (Vulkan, 2017). Existing kernels or custom-built ones can be integrated with various programming languages via native bindings (e.g. the Java Native Interface (JNI)). In the context of Java, as in other managed programming languages, existing libraries can be employed by performing native calls and manually handling all memory allocation and copying. To the best of our knowledge, this is the first work that adds high-performance graphics and ray tracing capabilities to Java without having to utilize pre-built binaries or low-level external libraries.
## 6. Conclusions & Future Work
In this paper, we presented the first real-time ray tracing framework written entirely in Java. To achieve real-time performance, the ray tracer has been implemented with TornadoVM, which enables transparent hardware acceleration of Java programs on GPUs. After analyzing the design of the Java ray tracer, we presented a comprehensive performance evaluation across different hardware architectures and frame resolutions. Our results showcase that the Java-based ray tracer achieves real-time performance ranging between 45 and 234 FPS, depending on the hardware accelerator and frame size.
Having established a baseline implementation as a proof-of-concept, the algorithmic details and complexity of the ray tracer can be augmented by integrating advanced features such as reflection diffusion over different materials, refractions, anti-aliasing, polygonal meshes, and spatial partitioning. In addition, the TornadoVM compiler can be enhanced to support the dedicated ray tracing cores currently present in modern GPUs via intrinsics or specialized instructions.
###### Acknowledgements.
This work is partially funded by grants from Intel Corporation and the European Union Horizon 2020 ELEGANT 957286. Additionally, this work is supported by the Horizon Europe AERO, INCODE, ENCRYPT and TANGO projects which are funded by UKRI grant numbers 10048318, 10048316, 10039809 and 10039107.
Figure 18. FPS achieved by increasing shadow sample sizes or reflection bounce limits. |
2307.16324 | Mispronunciation detection using self-supervised speech representations | In recent years, self-supervised learning (SSL) models have produced
promising results in a variety of speech-processing tasks, especially in
contexts of data scarcity. In this paper, we study the use of SSL models for
the task of mispronunciation detection for second language learners. We compare
two downstream approaches: 1) training the model for phone recognition (PR)
using native English data, and 2) training a model directly for the target task
using non-native English data. We compare the performance of these two
approaches for various SSL representations as well as a representation
extracted from a traditional DNN-based speech recognition model. We evaluate
the models on L2Arctic and EpaDB, two datasets of non-native speech annotated
with pronunciation labels at the phone level. Overall, we find that using a
downstream model trained for the target task gives the best performance and
that most upstream models perform similarly for the task. | Jazmin Vidal, Pablo Riera, Luciana Ferrer | 2023-07-30T21:20:58Z | http://arxiv.org/abs/2307.16324v1 | # Mispronunciation detection using self-supervised speech representations
###### Abstract
In recent years, self-supervised learning (SSL) models have produced promising results in a variety of speech-processing tasks, especially in contexts of data scarcity. In this paper, we study the use of SSL models for the task of mispronunciation detection for second language learners. We compare two downstream approaches: 1) training the model for phone recognition (PR) using native English data, and 2) training a model directly for the target task using non-native English data. We compare the performance of these two approaches for various SSL representations as well as a representation extracted from a traditional DNN-based speech recognition model. We evaluate the models on L2Arctic and EpaDB, two datasets of non-native speech annotated with pronunciation labels at the phone level. Overall, we find that using a downstream model trained for the target task gives the best performance and that most upstream models perform similarly for the task.
Jazmin Vidal\({}^{1,2}\), Pablo Riera\({}^{1,2}\), Luciana Ferrer\({}^{2}\)\({}^{1}\)Departamento de Computacion, FCEyN, Universidad de Buenos Aires (UBA), Argentina
\({}^{2}\)Instituto de Investigacion en Ciencias de la Computacion (ICC), CONICET-UBA, Argentina
{jvidal,priera,lferrer}@dc.uba.ar
**Index Terms**: computer-assisted language learning, mispronunciation detection, self-supervised learning, low-resources
## 1 Introduction
Computer-aided pronunciation training (CAPT) systems provide feedback to second language learners on their pronunciation quality, with positive impacts on learning and motivation [1]. One family of CAPT systems frames the problem as a phone recognition task, using non-native data during training [2, 3, 4]. These systems identify pronunciation errors by comparing the phonetic transcription of a student's speech to a native target sequence using dynamic programming algorithms. Another family of CAPT systems frames the problem as detection of mispronunciations, generating scores that are then thresholded for the final decision. These systems can be classified into two groups. Those that do not use non-native data during training rely on automatic speech recognition (ASR) systems trained with native speakers, and generate pronunciation scores using the acoustic model's outputs [5, 6, 7, 8]. The most widely used approach in this family is called Goodness of Pronunciation (GOP) [6]. The second group uses non-native data to directly train the system to distinguish correctly- from incorrectly-pronounced segments using a variety of input features and classifiers [9, 10, 11, 12, 13]. Recently, transfer learning techniques have been used to mitigate the problem of data scarcity that is the norm in the task. In these approaches, deep neural networks (DNNs) models trained for ASR [7, 14, 15, 16] or on a self-supervised fashion [17] are fine-tuned to detect mispronunciations.
In this work, we target systems of the second family described above. These systems have the advantage of providing a measure of the confidence that they have in their detection, enabling adjustment of false correction rates to acceptable levels to avoid frustrating the student in real educational scenarios [18]. We explore two different approaches for using representations obtained from pre-trained self-supervised or supervised models. In both cases, the representations are fed to a downstream model which generates scores for each target phone. In the first approach, the downstream model is trained for the task of phone recognition (PR) using native data only. In the second approach, the system is trained for the mispronunciation detection (MD) task using non-native speech datasets annotated with pronunciation labels at the phone level. The latter approach is similar to the one in [17], where authors explore the use of Wav2vec 2.0 [19] for the L2-Arctic database, a database of non-native English speakers of many L1 backgrounds. Unfortunately, their code is not publicly available.
Our work aims to explore the approach in [17] in further detail across a variety of SSL models and to compare it with the PR approach, where downstream model training does not require non-native annotated data. To this end, we show results using various self-supervised models (WavLM+, WavLM Large, HuBERT and LightHuBERT Small) and one supervised model (TDNN-F) to extract representations. For evaluation, we use two publicly available databases: L2-Arctic [20] and EpaDB [21], a database of non-native English speakers from Argentina. For performance assessment, we use the Area Under the ROC curve (AUC) and a metric designed to encourage low false correction rates, proposed in our prior work [22, 15]. We release the code to reproduce the experiments as an s3prl [23] recipe (temporary URL [https://github.com/JazminVidal/ssl-mispron](https://github.com/JazminVidal/ssl-mispron)).
## 2 Upstream models
In this work, we study the use of pre-trained SSL models for the task of mispronunciation detection. Self-supervised learning has emerged as a promising approach for speech representation learning since it can leverage large amounts of unlabelled speech data to produce effective representations.
One of the first SSL speech models to have a large impact on various downstream tasks was Wav2vec 2.0 [19]. Following in its footsteps, many new models have appeared in the recent literature. Among these, we select HuBERT [24] and WavLM+ [25] because they have good performance on the phone recognition task in the Speech processing Universal PERformance Benchmark (SUPERB) [26]. We also try a larger and a smaller model, WavLM Large [25] and LightHuBERT Small [27], to explore the effect of model size.
Two essential components comprise all of these models: a CNN encoder and 12 to 24 transformer layers. The models are trained to predict masked targets using the transformer's output.
This encourages the model to leverage long-term dependencies in the speech signal, which are essential for the task of ASR and others. In HuBERT, during the first training stage, the targets are generated by k-means clustering of Mel Frequency Cepstral Coefficients (MFCCs). In the final stage, the MFCCs are replaced with latent variables from the same model, allowing it to perform iterative refinement of the target labels. The model uses a similarity loss function between the target and the predictions. WavLM uses a similar strategy to HuBERT, but the training data is generated as mixtures of speech and noise. The objective is to predict the target clean speech signal from a noisy masked one, which makes the system more robust. In this work, we also use the WavLM+ ("Plus") version that was trained with more than 90k hours of speech, compared with the 960 hours used for training the HuBERT and the WavLM models. LightHuBERT is a distilled version of HuBERT, which has similar performance but half the parameters. All of these models have 12 transformer layers, except for WavLM Large, which has 24 layers. The size of the activations is 384 for LightHuBERT, 768 for HuBERT and WavLM+, and 1024 for WavLM Large. All models have a frame rate of 50 Hz.
The traditional ASR system we use for comparison is the Kaldi Librispeech ASR model, a TDNN-F [28] acoustic model trained for the task of senone recognition. This model consists of 18 hidden layers (some factorized, some with time delay) with ReLU activations and skip connections. The last hidden layer is linear and has an output dimension of 256. The output layer of this model consists of 6024 nodes, one per senone. The model was trained with 960 hours of native English speech from the LibriSpeech [29] dataset.
## 3 Downstream models
In this section, we describe the two approaches that we explore for training the downstream models, the phone recognition (PR) approach and the mispronunciation detection (MD) approach. Figure 1 shows a schematic of the proposed model. In both cases, the downstream model is a linear layer with one node per English phone. The layer takes the representations from the upstream models as input. What differs between the approaches is the way the parameters of this linear layer are learned. Further, both approaches require phone-level time alignments as input. The alignments are obtained with an automatic forced-aligner (described in Section 4.2) and indicate the location of each phone in the transcription.
In the **PR approach**, for the SSL models, the downstream model is trained for phone classification using a dataset of native English speakers. The output layer of this downstream model has a softmax activation, generating the posterior probabilities of each English phone for each frame. For the TDNN-F model, no downstream model is trained. Instead, the output layer of the model is used directly to compute the per-phone scores by averaging the posteriors for all senones corresponding to each phone. This combination of TDNN-F upstream model followed by the PR approach coincides exactly with the standard GOP algorithm, as implemented in the Kaldi recipe and shared in [15], which we take as our baseline.
In the **MD approach**, we use annotated non-native English data to train the downstream model to directly detect correctly versus incorrectly pronounced phones. In this case, the output layer of the model has a sigmoid activation in each node rather than a softmax activation. Again, each node corresponds to an English phone, but its value is now meant to predict whether the corresponding target phone was correctly or incorrectly pronounced. The training loss is computed by selecting the output node corresponding to the target phone pronounced at each frame (according to externally-provided alignments) and then averaging the loss over all selected scores.
The input to the downstream models for the SSL upstream models is given by a weighted sum of the encoder activations plus the activations of the transformer layers (24 layers for WavLM Large, 12 for the rest of the models). The weights are learned along with the linear layer parameters of the downstream model. We train the downstream in the TDNN-F model for the MD approach following the methods and code proposed in [15] where the output layer is stripped from the TDNN-F model and the 256-dimensional layer is fed as input to the linear layer. Finally, once the downstreams are trained, scores are computed using their frame-level outputs before the activation function (softmax for PR, sigmoid for MD) is applied. For a target phone \(p\) that starts at frame \(T\) and has length of \(D\) frames the score is computed as \(\mathrm{Score}(p)=\frac{1}{D}\sum_{t=T}^{T+D-1}s_{t,p}\) where \(s_{t,p}\) is the pre-activation output at frame \(t\) for phone \(p\).
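Concretely, if \(h_{t}^{(l)}\) denotes the activation of layer \(l\) at frame \(t\) (with \(l=0\) corresponding to the CNN encoder output), the input to the downstream layer can be written as a learned weighted combination of the layers; constraining the weights to be positive and sum to one (e.g., via a softmax) is our description of the usual setup for this type of downstream training rather than a detail stated above:
\[x_{t}=\sum_{l=0}^{N_{\rm layers}}w_{l}\,h_{t}^{(l)},\qquad w_{l}\geq 0,\quad\sum_{l}w_{l}=1.\]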
## 4 Experimental setup
In this section, we describe the databases used in our experiments, the audio pre-processing, the model training procedure and the evaluation metrics.
### Databases
The TIMIT dataset was used for training the PR downstream models, while Epa-DB and L2Arctic were used for MD model training and final evaluation of the approaches.
**TIMIT**[30] is a database of read native English speech designed for the development and evaluation of ASR systems. It contains a total of 6300 sentences recorded at 16kHz by 630 speakers of 8 dialects of American English. Each speaker recorded 10 phonetically rich sentences. The database includes time-aligned orthographic, phonetic and word transcriptions.
**Epa-DB**[21] is a database of non-native English speech by Spanish speakers from Argentina. Recordings may contain noise and vary in sample rate between 16 kHz and 44 kHz. The database contains 3200 short English utterances produced by 50 speakers (25 male and 25 female). Manual annotations of pronunciation labels are included for each audio sample.
**L2-ARCTIC**[20] is a database of non-native English speech intended for research in accent conversion and mispronunciation detection. The speech was recorded in a controlled scenario, under quiet conditions, using quality microphones with a sample rate of 44.1 kHz. There are 3621 annotated recordings from 24 non-native speakers (12 males and 12 females) whose first languages (L1s) are Hindi, Korean, Mandarin, Spanish, Arabic and Vietnamese. A subset of 150 utterances per speaker is manually annotated with mispronunciation errors.
### Preprocessing and Model Training
For this work, the set of target phones is given by the 39 phones in the English language plus silence [31]. To annotate the mispronunciations, L2Arctic uses International Phonetic Alphabet (IPA) symbols, whereas EpaDB and TIMIT use ARPAbet coding. To normalize annotations, we mapped L2Arctic phones to ARPAbet. Further, TIMIT included two extra phones not considered in the other two databases, /AX/ and /DX/. We mapped them to /AH/ and /T/, their closest phones in our phone set.
As explained in Section 3, our models require alignments and labels at the frame level. We obtain time-alignments and
phonetic transcriptions using a Kaldi TDNN-F based forced aligner implemented in PyKaldi [32], the same model we use on the PR and MD supervised systems. We take the phonetic transcriptions returned by the forced aligner as the targets the student should have pronounced. These are our individual samples for prediction. The total count of target phone instances per database is 39775 positives and 10480 negatives for EpaDB and 99799 positives and 18539 negatives for L2Arctic. TIMIT contains 241225 native phone instances.
For training of MD models and evaluation, we align each sequence of target phones with its corresponding sequence of manual annotations and assign their labels. Positive labels are assigned to correctly pronounced phones and negative labels to incorrectly pronounced ones. For L2-Arctic we use a version of the ALINE algorithm [33] adapted to work on ARPAbet symbols. For EpaDB we follow the alignment scripts provided in [15]. Note that if a non-native sample contains an error where one or more phones are added between two target phones, one must decide which target to assign the negative label that corresponds to the added phone(s). For simplification, we follow the approach taken in [20] and ignore all addition errors marked in L2-Arctic. This is not necessary for EpaDB because additions in this database are annotated as a substitution of the target phone by two new phones.
Next, we downsample all the waveforms in EpaDB and L2Arctic to 16 kHz and partition the three databases into subsets. For TIMIT we use the training, development and testing splits specified in the s3prl phone recognition recipe. For EpaDB, we assign 30 speakers for development and 20 for testing. For L2Arctic, we use the 20 speakers with non-Spanish L1 backgrounds for development and leave the 4 Spanish-L1 background speakers for testing, to make it comparable to the EpaDB test set. We train the PR models on the training split of TIMIT and select the best model using the development split. Also, in the PR case, the development split of the non-native dataset being evaluated is used for selecting the decision threshold on the test data. For the MD models, we first obtain scores in the full development set by doing K-fold cross-validation. The pooled scores are used to determine the decision threshold. For EpaDB we use 6 folds, divided by speaker. For L2Arctic we use 5 folds, divided by L1. Finally, for evaluation of the test split, we train a model using the full development set. Splits and fold lists for each database and system are provided with the code in the repository listed in Section 1.
We implement the SSL models using _s3prl_. For the downstream models for the SSL case, we use AdamW optimization and mini-batches of 64 samples using a learning rate of \(1\times 10^{-4}\) (\(1\times 10^{-5}\) for Large model). The best number of training epochs was selected based on cross-validation results. For the TDNN-F model, we follow the implementation provided in [15] where we train the last layer using the Adam optimizer, with mini-batches of 32 samples over 300 epochs using a learning rate decaying every 10 epochs by a factor of 0.9, starting from 0.01. Experiments were done using an RTX 3090 GPU.
### Evaluation metrics
Our systems generate scores for each target phone which are expected to have higher values for correctly pronounced phones than for incorrectly pronounced ones. Hard decisions can then be made by comparing these scores with a threshold. Each possible threshold results in a false positive rate (FPR) and a false negative rate (FNR). In our results, we report the area under the false negative versus false positive rate curve (equivalent to 1 minus the traditional AUC metric). This metric integrates the performance over all possible operating points given by different thresholds and is a very standard metric used for this and many other tasks. In addition, following [15], we report another metric considered to be more appropriate for the task of mispronunciation detection, where the false negative rate should be minimized to avoid frustrating the student with unnecessary corrections. We define a cost given by FPR + 2 FNR, where FNR is penalized more than the FPR, prioritizing low FNR over low FPR. This type of cost function is widely used in speaker verification and language detection tasks [34], where the weights are determined depending on the application scenario. Note that a naive system that accepts every phone as correctly pronounced has FNR = 0 and FPR = 1, and hence a cost of 1, so this value serves as the reference that any useful system should beat. To compute the cost, a decision threshold is needed. One possible approach is to choose the threshold that minimizes the cost for each phone on the test data itself, resulting in the best possible cost on that data (MinCost). Selecting the optimal threshold on the test data, though, leads to optimistic estimates of the cost. Hence, for the test data, we also compute the cost obtained when the threshold is selected as the one that optimizes the cost of the development data for each phone. We call this the Actual Cost (ActCost).
Both 1-AUC and costs are computed as averages over phones. This is because if all samples are pooled together for computation of the metrics, the most frequent phones dominate the value of the metric. Yet, for this application, infrequent phones are equally important as frequent ones. Further, since some phones in EpaDB and L2-Arctic have very few instances of incorrect pronunciation and the performance on these phones cannot be robustly estimated, when computing the average met
Figure 1: _Schematic of our approach. Frame-level outputs of an upstream model are fed to a linear layer that produces one score per target phone in the English language and per frame in the phrase. Next, using the time alignments provided as input, we select the scores corresponding to the target phones detected at each frame. Finally, phone-level scores are computed by averaging over all the frames for each of these targets. For the final classification into correct or incorrect pronunciations, we compare the phone-level scores to a threshold tuned on the development data for each phone. For MD downstream models the linear layer is trained using the frame-level loss shown in this figure which uses the correct versus incorrect labels for each target phone in the alignment, replicating them over all frames in each phone. PR downstream models are instead trained for the task of phone recognition._
rics we discard all phones with less than 50 instances of the minority class. Finally, we use bootstrapping [35] with speaker-level sampling to compute confidence intervals and assess statistical significance of differences between systems.
## 5 Results
Figure 2 shows the average ActCost and MinCost for the test splits of EpaDB and L2-ARCTIC. Each group of bars compares different systems for the phone recognition (PR) and mispronunciation detection (MD) downstream approaches. Comparing the PR and the MD groups, we see that training with non-native data to directly detect mispronunciations results in better performance than using the outputs of a phone recognizer to compute mispronunciation scores. Yet, in scenarios where little labeled non-native data is available, the MD approach may not be feasible, while the PR approach can still be used.
Importantly, the figure shows that even for the PR systems the average ActCost is significantly better than 1, meaning the systems are all better than the best naive system. This indicates that such systems would still be useful in practice according to this application-motivated metric. Notably, all SSL and the TDNN-F upstream models lead to similar performance values for each of the two downstream approaches. The relatively small differences between models' performances show that the Large WavLM model is not the best for this task, despite being the best model for the native phone recognition task in the SUPERB benchmark. Also, the Small LightHuBERT model is not far behind the rest of the models, showing similar performance to HuBERT which has twice as many parameters. Again, while HuBERT is better than LightHuBERT for the task of phone recognition according to the SUPERB benchmark, this advantage does not carry over to the task of mispronunciation detection. Overall, the WavLM+ model gives the best or close to best results across both datasets.
The solid horizontal lines inside the bars indicate the MinCost. This is an optimistic estimate of the cost, since, in practice, one never has the full evaluation data on which to estimate the thresholds. The height of the bars on the other hand, is the ActCost, computed using the threshold estimated on the development data. We can see that the ActCost is within 10% of the MinCost for most PR systems, both for EpaDB and Arctic, indicating that the threshold can be robustly selected based on the development data. For the MD results, the discrepancy is a little larger, but it does not exceed 15%.
Table 1 shows 1-AUC and ActCost for the PR and the MD approaches for two models: TDNN-F which can be considered a baseline, and WavLM+. We can see a discrepancy between 1-AUC and ActCost both in terms of trends (a system may be better than another for one metric but worse for the other metric) and in terms of absolute values (1-AUC are relatively farther from their baseline, which is 0.5, than ActCost values from their baseline, which is 1.0). Two things explain these discrepancies. First, 1-AUC ignores the problem of threshold selection which is equivalent to assuming that the threshold will always be set optimally - an unreasonable assumption. On the other hand, the ActCost values are affected by the threshold selection process. Second, 1-AUC integrates the performance of the whole range of operating points, while the cost focuses on a single point, which is the point of interest for this application. Given this discrepancy, and the fact that 1-AUC is not directly reflecting the performance of interest for our task (both because it ignores the threshold selection problem and because it does not focus on a relevant operating point), we believe 1-AUC and other metrics that suffer from one or both these issues, like the F1-score, are not appropriate for evaluation of this task. For a related discussion on classification metrics, see [36].
Additionally, we ran a series of experiments to explore potential improvements in performance but, unfortunately, did not find any significant gains. We explored different training losses with phone-level summarization and class weighting in the loss. We tried more complex downstream architectures, such as CNNs, but found that they tended to overfit more than simpler linear models. We also tried fine-tuning some upstream layers, but the results were comparable to those obtained without fine-tuning, although further exploration of training hyperparameters should be performed.
## 6 Conclusions
In this study, we explored addressing the mispronunciation detection task by using pre-trained self-supervised (SS) and supervised models to generate speech representations which are then used as input to downstream models for score generation. We compared two approaches for training the downstream models, using only native data and using non-native data annotated for pronunciation scoring, finding that, as expected, the latter approach leads to improved performance over the first one. The first approach, though, can be used when not enough annotated non-native data is available for model training. Among the SS models tested, WavLM+ achieved the best performance, followed closely by all other models, indicating that the specific upstream model used has a much smaller effect than the downstream approach.
\begin{table}
\begin{tabular}{c c c c c} & \multicolumn{2}{c}{EpaDB} & \multicolumn{2}{c}{L2-Arctic} \\ & 1-AUC & ActCost & 1-AUC & ActCost \\ \hline PR TDNN-F & 0.29 & 0.85 & 0.29 & 0.83 \\ WavLM+ & 0.33 & 0.82 & 0.33 & 0.84 \\ \hline MD TDNN-F & 0.2 & 0.73 & 0.24 & 0.79 \\ WavLM+ & 0.17 & 0.67 & 0.17 & 0.74 \\ \end{tabular}
\end{table}
Table 1: Average ActCost and 1-AUC, for PR and MD approaches and both datasets for TDNN-F and the best SSL model WavLM+. The first line, PR-TDNN-F, corresponds to the GOP baseline.
Figure 2: Results for EpaDB and L2-Arctic comparing different upstream (legend) and downstream (x-label) combinations. Values correspond to the average ActCost over phones with more than 50 samples of each class for the test data. The solid lines in each bar show the average MinCost, the optimal cost for the test data. Vertical lines indicate bootstrap confidence intervals. |
2305.18570 | Deformation and breakup of bubbles and drops in turbulence | Fragmentation of bubbles and droplets in turbulence produces a dispersed
phase spanning a broad range of scales, encompassing everything from droplets
in nanoemulsions to centimeter-sized bubbles entrained in breaking waves. Along
with deformation, fragmentation plays a crucial role in enhancing interfacial
area, with far-reaching implications across various industries, including food,
pharmaceuticals, and ocean engineering. However, understanding and modeling
these processes is challenging due to the complexity of anisotropic and
inhomogeneous turbulence typically involved, the unknown residence time in
regions with different turbulence intensities, and difficulties arising from
the density and viscosity ratios. Despite these challenges, recent advances
have provided new insights into the underlying physics of deformation and
fragmentation in turbulence. This review summarizes existing works in various
fields, highlighting key results and uncertainties, and examining the impact on
turbulence modulation, drag reduction, and heat and mass transfer. | Rui Ni | 2023-05-29T19:17:29Z | http://arxiv.org/abs/2305.18570v1 | # Deformation and breakup of bubbles and drops in turbulence
###### Abstract
Fragmentation of bubbles and droplets in turbulence produces a dispersed phase spanning a broad range of scales, encompassing everything from droplets in nanoemulsions to centimeter-sized bubbles entrained in breaking waves. Along with deformation, fragmentation plays a crucial role in enhancing interfacial area, with far-reaching implications across various industries, including food, pharmaceuticals, and ocean engineering. However, understanding and modeling these processes is challenging due to the complexity of anisotropic and inhomogeneous turbulence typically involved, the unknown residence time in regions with different turbulence intensities, and difficulties arising from the density and viscosity ratios. Despite these challenges, recent advances have provided new insights into the underlying physics of deformation and fragmentation in turbulence. This review summarizes existing works in various fields, highlighting key results and uncertainties, and examining the impact on turbulence modulation, drag reduction, and heat and mass transfer.
turbulent multiphase flow, deformation and breakup/fragmentation, emulsion, polydispersed droplets and bubbles, lift and drag, heat and mass transfer
## 1 Introduction
Mixing two immiscible fluids (gas-liquid or liquid-liquid) in turbulence produces polydispersed droplets or bubbles that can freely deform, break, and coalesce while interacting with the surrounding turbulence. These processes are fundamentally important and practically relevant to multiple fields, including bubble-mediated air-sea mass exchange (Villermaux et al., 2022), chemical emulsions, food science, nuclear thermal hydraulics, and two-phase heat transfer. In contrast to the deformation and breakup of droplets in low-Reynolds-number viscous flows (Stone, 1994), in turbulence, these dynamics are intimately linked to multiple length and time scales associated with the background turbulent eddies.
A wide range of drop/bubble sizes can therefore be achieved via the adjustment of the turbulence characteristics. For example, turbulence generated by a simple batch stirrer system can break an oil-water mixture into macroemulsions with the size of the dispersed oil droplets at \(\mathcal{O}(1\)-\(100~{}\mu\)m). But if nanoemulsions with droplets of \(\mathcal{O}(10\)-\(100)\) nm are desired, the turbulent scales have to be much smaller, requiring an energy-intensive high-pressure homogenizer (HPH) method (Schultz et al., 2004, Hakansson, 2019).
Despite the wide range of scales involved, many key concepts crucial to understanding deformation and breakup in turbulence can be traced back to the seminal works by Kolmogorov (1949) and Hinze (1955), i.e. the Kolmogorov-Hinze (KH) framework. The KH framework has gained widespread acceptance in various fields; however, it is crucial to acknowledge that it contains a number of assumptions and hypotheses. The purpose of this review is to gather studies from various disciplines that investigate the deformation and breakup of both droplets and bubbles in turbulence, in order to determine the regimes in which the KH framework is applicable and, more importantly, where it may fall short and new challenges and opportunities await.
### Key hypotheses and assumptions in the Kolmogorov-Hinze framework
(a) Turbulence was assumed to be homogeneous and isotropic. (b) The drop size was assumed to be in the inertial range of turbulence. (c) Droplets were assumed to be neutrally buoyant, with buoyancy and density ratio disregarded. (d) It was hypothesized that the breakup is driven by the dynamic pressure caused by changes in velocity over distances at the most equal to the drop diameter. (e) The framework assumes that the interaction between drops and turbulence is one-way, with droplets having no effect on the turbulent dynamics. (f) While Kolmogorov took into account the kinematic viscosity ratio between the two phases to separate different regimes, Hinze proposed to use the Ohnesorge number (defined in Section 2) to measure the importance of the inner viscosity.
This review provides an overview of the dynamics of deformation and breakup and their impacts on momentum, mass, and heat transfer, with a particular focus on experimental methods and results and a limited survey of simulation findings. For in-depth coverage of numerical methods for resolving deformable interfaces in turbulence, readers are referred to the recent reviews by Tryggvason et al. (2013) and Elghobashi (2019). The subject is closely related to the broader realm of particle-laden turbulence, including spherical (Balachandar & Eaton, 2010, Brandt & Coletti, 2022), non-spherical (Voth & Soldati, 2017), and buoyant particles (Mathai et al., 2020), but with a particular emphasis on deformability. This review also complements other comprehensive reviews on fragmentation (Villermaux
2007), bubble dynamics (Magnaudet & Eames 2000, Risso 2018, Lohse 2018), and the complexity introduced by surfactant (Takagi & Matsumoto 2011), phase inversion (Bakhuis et al. 2021), and non-Newtonian liquids (Zenit & Feng 2018).
The problem being considered involves bubbles and droplets of a specific diameter, denoted as \(D\), being deformed and fragmented by surrounding turbulence characterized by parameters, such as energy dissipation rate (\(\epsilon\)), fluctuation velocity (\(u^{\prime}\)), integral scale (\(L\)), and the Kolmogorov scale (\(\eta\)). The density, dynamic and kinematic viscosities are denoted by \(\rho\), \(\mu\), and \(\nu\), respectively. The fluid properties of the carrier phase and dispersed phase can be differentiated using subscripts \(c\) and \(d\), respectively. The interfacial tension between the two phases is represented by \(\sigma\).
The problem at hand is characterized by a multitude of parameters, and as a result, the relevant dimensionless groups are also vast. However, by making some key assumptions and hypotheses, as outlined in **Textbox 1**, Kolmogorov (1949) was able to simplify the problem. He proposed that, for the deformation and breakup of large bubbles/droplets (\(\eta\ll D\ll L\)), the most important dimensionless number is the Weber number, which is a measure of the ratio between the inertial forces to surface tension forces.
\[We_{t}=\frac{\rho_{c}u_{D}^{2}D}{\sigma} \tag{1.1}\]
where \(u_{D}\) is the eddy velocity of size \(D\) and \(u_{D}^{2}=C_{2}(\epsilon D)^{2/3}\) is estimated using the second-order structure function in the inertial range in homogeneous and isotropic turbulence (HIT), where \(C_{2}\approx 2.3\) is the Kolmogorov constant. Furthermore, it was postulated that, if the Weber number is the only dimensionless number that affects the breakup problem, there must exist a critical Weber number (\(We_{t}^{c}\)) that corresponds to the critical diameter (\(D^{c}\)), below which the droplets remain stable for a prolonged period in turbulence.
\[D^{c}=\left(\frac{We_{t}^{c}\sigma}{\rho_{c}C_{2}\epsilon^{2/3}}\right)^{3/5} \tag{1.2}\]
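As an illustrative back-of-the-envelope estimate (our own numbers, not values taken from the studies cited below): for air bubbles in water, with \(\sigma\approx 0.072\) N/m, \(\rho_{c}\approx 1000\) kg/m\({}^{3}\), a critical Weber number of about 3 (of the order reported in the studies discussed below), and \(\epsilon\approx 1\) m\({}^{2}\)/s\({}^{3}\),
\[D^{c}\approx\left(\frac{3\times 0.072}{1000\times 2.3\times 1^{2/3}}\right)^{3/5}\ \mathrm{m}\approx 3.8\ \mathrm{mm},\]
which is consistent with the millimetric bubbles that are observed to break in facilities operating at \(\epsilon=\mathcal{O}(1)\) m\({}^{2}\)/s\({}^{3}\).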
The idea of a critical Weber number implies an abrupt shift from a finite breakup probability to zero at \(We_{t}^{c}\), a simplistic view which does not account for turbulent fluctuations. Although the likelihood of eddies with local energy dissipation rates significantly higher than the mean is low, it is not zero. Therefore, even if the mean Weber number is below \(We_{t}^{c}\), the occasional high-energy eddies can still break bubbles or droplets. Additionally, while \(We_{t}\) captures the contribution of turbulence, persistent large-scale forcing, such as shear or buoyancy, can aid and even dominate deformation and breakup. To understand the fundamental breakup mechanisms, their contributions must be distinguished from those of turbulence. Lastly, incorporating the effects of viscosity poses significant challenges, requiring a systematic review of existing experimental data. To this end, this review is structured as follows. In Section 2, different regimes of deformation and breakup driven by turbulence, including the effects of large-scale forcing and viscous damping, are reviewed. Section 3 provides an overview of the key results and models of breakup frequency. In Section 4, the findings on how deformation and breakup influence the momentum, heat, and mass transfer between phases are summarized.
## 2 Various breakup Regimes
**Figure 1** illustrates the relevant regimes that have been studied and the typical deformation and breakup morphology that has been observed. **Figure 1a** emphasizes the problems
dominated by inertia but separately considers the effects of small-scale turbulence (\(We_{t}\)) and large-scale forcing. The large-scale forcing can arise in various forms, including a persistent mean shear with the shear rate denoted as \(\mathcal{S}\) and a pressure gradient induced by buoyancy-driven migration, with their roles in deformation measured by the shear Weber number \(We_{\mathcal{S}}=\rho_{c}\mathcal{S}^{2}D^{3}/\sigma\) and the Eotvos or Bond number \(Eo=\Delta\rho gD^{2}/\sigma\), respectively.
**Figure 1b** emphasizes the transition from an inertia-dominated to a viscous-dominated regime when \(D\) crosses the Kolmogorov length scale (\(\eta=(\nu_{c}^{3}/\epsilon)^{1/4}\)) and the viscous effect becomes more pronounced. In the viscous regime, the crucial dimensionless number is the Capillary number, i.e. \(Ca_{t}=\sqrt{\mu_{c}\rho_{c}}eD/\sigma\). As the viscous effect of the outer fluid becomes relevant, it is also necessary to consider the regimes when the inner viscosity matters as well. As a result, another key dimensionless number, i.e. the Ohnesorge number (\(Oh=\mu_{d}/\sqrt{\rho_{d}\sigma D}\)), is considered to measure the relative significance of \(\mu_{d}\) in resisting and damping deformation.
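As a rough illustration using textbook property values (our own numbers, not taken from the cited works): for a 1 mm air bubble in water (\(\mu_{d}\approx 1.8\times 10^{-5}\) Pa s, \(\rho_{d}\approx 1.2\) kg/m\({}^{3}\), \(\sigma\approx 0.072\) N/m) compared with a 1 mm drop of a moderately viscous oil in water (\(\mu_{d}\approx 0.1\) Pa s, \(\rho_{d}\approx 900\) kg/m\({}^{3}\), \(\sigma\approx 0.02\) N/m),
\[Oh_{\rm bubble}=\frac{1.8\times 10^{-5}}{\sqrt{1.2\times 0.072\times 10^{-3}}}\approx 2\times 10^{-3},\qquad Oh_{\rm oil\ drop}=\frac{0.1}{\sqrt{900\times 0.02\times 10^{-3}}}\approx 0.7,\]
so the inner viscosity is negligible for the air bubble but clearly relevant for the viscous drop.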
### Inertia-dominated regime (\(D>\eta\))
#### 2.1.1 Intense homogeneous and isotropic turbulence (\(We_{t}>Eo\) and \(We_{t}>We_{\mathcal{S}}\)).
In this regime, the classical KH framework is most applicable. However, it can be difficult to achieve these ideal conditions in experiments. In closed systems, HIT can be generated by forcing flows from multiple symmetrical locations. As these flows merge, HIT can be produced near the center, where it is farthest from the momentum sources and therefore has
Figure 1: A parameter space of deformation and breakup of bubbles/droplets in turbulence characterized by (a) the Weber number defined based on the small-scale turbulence (\(We_{t}\)) versus large-scale persistent forcing measured by either shear (\(We_{\mathcal{S}}\)) or buoyancy (\(Eo\)), and (b) the Ohnesorge number (\(Oh\)) and the size of the bubbles/drops (\(D\)) relative to the Kolmogorov scale (\(\eta\)). The inset panels are adapted with permission from RF98 (Risso and Fabre, 1998), VVSL16 (Verschoof et al., 2016), RCR2011 (Ravelet et al., 2011), MWMCG06 (Mason et al., 2006), EAL04 (Eastwood et al., 2004), and QMN20 (Qi et al., 2020). The placement of these insets in the parameter space only indicates the general regimes they correspond to, not their exact parameters.
the lowest energy dissipation rate. This location is also where measurements were typically taken. As a result, the probability of breakup is much higher outside the measurement volume than inside. Bubbles and drops that are likely to break would have already been broken before entering the measurement volume, making it challenging to study their behavior in classical HIT systems.
One solution is to use HIT that decays along one direction and guide bubbles or droplets through turbulence along the opposite direction. In this way, the energy dissipation rate that bubbles/drops encounter continues to increase and the measurement volume can be set at a location where the energy dissipation rate is the highest but the flow is still HIT. Masuk et al. (2019) designed a vertical water tunnel with a jet array located at the top of the test section and firing jets co-axially with the mean flow downward into the test section. The facility and its key dimensions are shown in **Figure 2a**. Tan et al. (2023) showed that, in this facility, the flow becomes HIT at around six nozzle spacings below the jet array, and such HIT continues to decay. The decay was found to scale with the nozzle diameter (\(d_{n}\)) and the jet velocity at the nozzle exit (\(v_{j}\)). In particular, the fluctuation velocity follows \(u^{\prime}/v_{j}=(x/d_{n})^{-1}\), and the energy dissipation rate decays as \(\epsilon/(v_{j}^{3}/d)=0.76(x/d)^{-7/2}\). In this setup, the bubbles were injected at the bottom of the test section where the energy dissipation rate is the weakest. As they rise, the turbulence intensity grows, and eventually reaches a point where it is sufficient to cause bubbles to deform and break. Turbulence at this location, where the measurement volume is also placed, features large energy dissipation rates of \(\mathcal{O}(1)\) m\({}^{2}\)/s\({}^{3}\), which is sufficient for bubbles of size \(\mathcal{O}(1)\) mm to reach the condition of \(We_{t}>Eo\).
Apart from increasing turbulence intensity, another approach to reach \(We_{t}>Eo\) is to
Figure 2: (a) Schematic of the side view of a vertical water tunnel that uses a jet array to produce intense HIT to study deformation and breakup (Masuk et al. 2019, Qi et al. 2020). (b) The top view of the octagonal test section along with six cameras that were used to measure the shape of deformed bubbles simultaneously with the nearby 3D turbulence (Masuk et al. 2019, Qi et al. 2020). (c) Examples of one strongly-deformed bubble captured by three different cameras. (d) The distribution of the Weber number defined based on different velocities. The red line indicates the log-normal distribution predicted based on the distribution of the instantaneous energy dissipation rate (Masuk et al. 2021).
weaken the buoyancy effect. Risso and Fabre (1998) conducted experiments in a parabolic flight, reducing the gravitational acceleration to \(g=0.4\) m/s\({}^{2}\) and effectively reducing \(Eo\) by a factor of about 25. In this experiment, turbulence was generated via an axisymmetrical momentum jet close to the bottom of the device (Risso and Fabre, 1997). The average size of bubbles investigated is about 18 mm, which would have broken due to buoyancy alone if the experiments were conducted under the Earth's gravity (Tripathi et al., 2015), but under microgravity, the buoyancy effect was much weaker, and breakup was dominated by turbulence. A sequence of images of a strongly-deformed bubble in this environment is shown in an inset of **figure 1a**. The critical Weber number averaged over all the cases was reported to be around 4.5. A numerical version of the same experiment was conducted by Qian et al. (2006) using the lattice Boltzmann method with bubbles being fragmented in homogeneous turbulence in a three-dimensional periodic box. The reported critical Weber number was around 3.
Although microgravity helps reduce the impact of the relative motion between the two phases (slip velocity) driven by buoyancy, this rising motion is not the sole source of the additional pressure gradient that could drive deformation. Even with a neutrally-buoyant dispersed phase, as assumed in the KH framework, slip velocity (\(u_{\rm slip}\)) cannot be fully eliminated due to finite size effects (Homann and Bec, 2010, Bellani and Variano, 2012). To determine the extent to which deformation is driven by slip velocity, Masuk et al. (2021c) conducted an experiment to measure the shape of deforming bubbles simultaneously with their surrounding turbulence in 3D. This challenging experiment was accomplished using a diagnostic setup, which included six cameras positioned around the test section (**figure 2b**). Typical images of a deformed bubble, along with the nearby tracers, are shown in **figure 2c**. The shadows of high concentration of tracers were tracked using the openLPT method (Tan et al., 2020), while the bubble geometry was reconstructed using a technique that employs surface tension as an additional physical constraint, resulting in improved reconstruction quality (Masuk et al., 2019a).
These simultaneous measurements provide insight into the Lagrangian evolution of the bubble Weber number and the shape of individual bubbles (Masuk et al., 2021c). The distributions of the Weber number based on different velocity scales are shown in **figure 2d**. In this case, \(We_{t}=\rho_{c}(\widetilde{\lambda_{3}}D)^{2}D/\sigma\) was determined using \(\widetilde{\lambda_{3}}\), the eigenvalue of the strain rate tensor coarse-grained at the bubble scale that corresponds to the most compressive direction (\(\hat{\bf e}_{3}\)). The ensemble average of this definition of \(We_{t}\) should be equivalent to the one proposed in the KH framework, but it provides a more accurate representation of the relevant instantaneous Weber number. This interpretation was supported by the observation that the semi-minor axis of the bubble preferentially aligns with \(\hat{\bf e}_{3}\). This alignment suggests that it is the converging flow near the bubble and the resulting pressure rise on the interface that leads to compression and deformation. The PDF of \(We_{t}\) can be captured through a log-normal distribution (red line in **figure 2d**), calculated by following the definition of \(We_{t}\) and the distribution of local energy dissipation rate that is described by the refined Kolmogorov theory (Kolmogorov, 1962) and the multi-fractal spectrum (Meneveau and Sreenivasan, 1991).
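As a rough illustration of this construction, the sketch below draws the local dissipation rate from a log-normal distribution and maps it to an instantaneous \(We_{t}\) using the eddy velocity at the bubble scale. The bubble size, mean dissipation rate, log-variance, and the prefactor of 2 in the Weber number are assumptions chosen for illustration, not the values fitted in the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed fluid/bubble parameters (illustrative air bubble in water)
rho_c, sigma = 1000.0, 0.072   # carrier density [kg/m^3], surface tension [N/m]
D = 3e-3                       # bubble diameter [m]
eps_mean = 1.0                 # mean dissipation rate [m^2/s^3]

# Intermittency in the spirit of the refined Kolmogorov theory: log-normal local dissipation.
# The log-variance below is an assumed value, not one fitted to data.
sigma_ln = 1.0
eps_local = rng.lognormal(mean=np.log(eps_mean) - 0.5 * sigma_ln**2,
                          sigma=sigma_ln, size=100_000)

# Instantaneous Weber number based on the eddy velocity at the bubble scale, (eps*D)^(1/3)
We_t = 2 * rho_c * (eps_local * D) ** (2 / 3) * D / sigma

print(f"mean We_t = {We_t.mean():.2f}, median = {np.median(We_t):.2f}, "
      f"P(We_t > 3) = {(We_t > 3).mean():.2%}")
```

Because \(We_{t}\) is a power of a log-normally distributed \(\epsilon\), its PDF is itself log-normal, which is the shape drawn as the red line in **figure 2d**.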
In addition to the strain rate, the instantaneous slip velocity was also calculated and divided into the horizontal (\(x\)) and vertical components (\(z\)). Their respective Weber numbers, i.e. \(We_{\rm slip,x}\) and \(We_{\rm slip,z}\), can be calculated by using the slip velocity as the velocity scale. The PDF of \(We_{\rm slip,z}\) contains the contribution by the buoyancy-driven deformation, but the overall shapes of \(We_{\rm slip,x}\) and \(We_{\rm slip,z}\) remain similar to each other. The difference between \(We_{\rm slip,x}\) and \(We_{t}\) (**figure 2d**), on the other hand, is significant, underscoring the difference
between the two deformation mechanisms: one driven by turbulent straining and the other by the slip velocity of finite-sized bubbles.
Through simultaneous measurements, a simple relationship between \(We_{t}\) and the aspect ratio of the bubble, \(\alpha\), following \(\alpha=2We_{t}^{2/3}/5+1.2\), was determined by minimizing the difference between the PDF of \(\alpha\) obtained from direct shape measurement and that calculated from \(We_{t}\). This relationship provides a way to describe bubble deformation in turbulence, which complements studies that investigated the deformation of gas bubbles rising in quiescent liquids (Legendre et al. 2012). In turbulence, while it was noted that the fit against \(We_{t}\) is slightly better than against \(We_{\rm slip}\), the difference was not substantial, suggesting that both are important in bubble deformation. However, the orientation analysis by Masuk et al. (2021b) indicates that the bubble semi-minor axis aligns significantly better with the slip velocity than with \(\hat{\bf e}_{3}\), emphasizing the crucial role played by the slip velocity in bubble deformation in turbulence.
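The reported fit can be evaluated directly; the short sketch below simply tabulates the implied aspect ratio for a few arbitrary values of \(We_{t}\).

```python
def aspect_ratio(We_t: float) -> float:
    """Aspect ratio from the reported fit: alpha = 2*We_t^(2/3)/5 + 1.2."""
    return 2.0 * We_t ** (2.0 / 3.0) / 5.0 + 1.2

for We in (0.5, 1.0, 3.0, 10.0):
    print(f"We_t = {We:4.1f}  ->  alpha = {aspect_ratio(We):.2f}")
```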
#### 2.1.2 Shear or buoyancy dominated deformation (\(We_{t}<Eo\) or \(We_{t}<We_{\cal S}\)).
##### 2.1.2.1 Buoyancy dominated regime (\(We_{t}<Eo\)).
In this regime, deformation is predominantly driven by buoyancy and the turbulence effect is minimal. Sevik & Park (1973) conducted an experiment on the breakup of bubbles in a turbulent jet. Bubbles with diameter varying from 4.0 mm to 5.8 mm were injected along the centerline of the jet. \(Eo\) ranges from 2.1 to 4.5, and the critical Weber number determined was about 1.3. This critical \(We_{t}\) is much smaller than 4.5 reported by Risso & Fabre (1998) based on the experiments conducted in microgravity, indicating less stress from turbulence was needed to break bubbles thanks to the extra help from buoyancy.
Ravelet et al. (2011) conducted an experiment on large bubbles rising in weak turbulence and reported two different Weber numbers, one based on the bubble's typical rise velocity and the other on the velocity gradient across the bubble. The fact that the former Weber number was close to 11.6 indicates that bubbles were strongly deformed due to buoyancy. The latter one, \(We_{t}\) as defined in equation 1, was around 1.8, which was about an order of magnitude smaller. **Figure 1a** displays snapshots of a deforming bubble, which show resemblance to bubbles rising in a still medium (Mougin & Magnaudet 2001), with the short axis of the bubble preferentially tilted towards the vertical direction and exhibiting periodic motions. These similarities were expected since bubbles were still primarily compressed in the vertical direction and the same wake instability occurred (Zenit & Magnaudet 2008). Despite these similarities, the time series of bubble deformation in turbulence were more chaotic and the decorrelation timescale was associated with the mode-2 natural frequency of the small-amplitude bubble oscillation, i.e. \(f_{2}=\sqrt{96\sigma/(\rho_{c}D^{3})}\). The natural oscillation period was proposed as an important timescale by Sevik & Park (1973) and Risso & Fabre (1998) based on the physical picture of a bubble resonating with turbulent perturbations at its natural frequency. However, with strong buoyancy, Ravelet et al. (2011) suggested that the preferential sliding motion between the two phases significantly changes the deformation dynamics, leading to breakup driven by single intense eddies rather than the stochastic resonance observed under microgravity (\(We_{t}>Eo\)). This work implied that persistent deformation in one direction could alter the deformation dynamics driven by turbulence more than just adding to the stress.
##### 2.1.2.2 Shear dominated regime (\(We_{t}<We_{\cal S}\)).
Levich (1962) considered the breakup of small drops immersed in the logarithmic sub-layer of a turbulent boundary layer (TBL).
The mean velocity parallel to the wall, \(\langle U\rangle\), varies with the wall-normal distance (\(y\)) as \(\langle U\rangle=U_{\tau}\ln(y/y_{0})/\kappa\), where \(U_{\tau}=\sqrt{\tau_{w}/\rho_{c}}\) is the friction velocity and is determined by the wall shear stress (\(\tau_{w}\)). The characteristic length scale is expressed as \(\delta_{0}=\nu_{c}/U_{\tau}\) or \(y_{0}=\delta_{0}/9\). \(\kappa\approx 0.4\) is the von-Karman constant. Levich (1962) argued that the pressure gradient that drives the drop deformation is dominated by the persistent large-scale shear across the drop size \(D\) from \(y\) to \(y+D\). Assuming \(D\ll y\), the Weber number (\(We_{\mathcal{S}}\)) can be expressed as a function of \(y\)
\[We_{\mathcal{S}}=\frac{2\rho_{c}U_{\tau}^{2}D^{2}}{\kappa^{2}y\sigma}\ln\frac {y}{y_{0}}\quad\text{and}\quad We_{\mathcal{S},\text{max}}=\frac{\ln(180)\rho_ {c}U_{\tau}^{3}D^{2}}{10\kappa^{2}\nu_{c}\sigma}\approx\frac{3\rho_{c}U_{\tau} ^{3}D^{2}}{\kappa^{2}\nu_{c}\sigma} \tag{3.1}\]
where \(We_{\mathcal{S},\text{max}}\) is the largest value of \(We_{\mathcal{S}}\) that can be reached near the bottom end of the log layer (\(y\approx 20\delta_{0}\)). Equation 3.1 is slightly different from the original work by Levich (1962) after I corrected some issues, e.g. the assumption of the bottom of the log layer at \(y\approx e\delta_{0}\). Although it was not explicitly mentioned in the work by Levich (1962), \(We_{\mathcal{S},\text{max}}\) can be re-written as \(We_{\mathcal{S},\text{max}}=3Re_{c}\tau_{w}D/(\kappa^{2}\sigma)\), which is essentially the Weber number based on the wall shear stress (\(We_{\tau}=\tau_{w}D/\sigma\)) multiplied by a droplet Reynolds number \(Re_{c}=U_{\tau}D/\nu_{c}\) based on \(U_{\tau}\) and the carrier-phase fluid properties.
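As a quick numerical illustration of equation 3.1, the sketch below evaluates \(We_{\mathcal{S}}\) at several wall distances within the log layer. The boundary-layer and droplet parameters are assumed values, not those of any experiment discussed here.

```python
import numpy as np

# Assumed flow and droplet parameters (illustrative water TBL carrying a small oil drop)
rho_c, nu_c, sigma = 1000.0, 1e-6, 0.02   # density [kg/m^3], kinematic viscosity [m^2/s], interfacial tension [N/m]
U_tau = 0.1                                # friction velocity [m/s], assumed
D = 1e-4                                   # drop diameter [m], assumed
kappa = 0.4

delta_0 = nu_c / U_tau      # viscous length scale
y_0 = delta_0 / 9.0

def We_S(y):
    """Shear Weber number across the drop at wall distance y (equation 3.1)."""
    return 2.0 * rho_c * U_tau**2 * D**2 * np.log(y / y_0) / (kappa**2 * y * sigma)

# Evaluate from the lower end of the log layer outward; We_S decays with y.
for y in delta_0 * np.array([20, 50, 100, 300]):
    print(f"y/delta_0 = {y/delta_0:5.0f}   We_S = {We_S(y):.3f}")
```

For these assumed numbers, \(We_{\mathcal{S}}\) is of order one near the bottom of the log layer and decays farther from the wall, consistent with the idea that shear-driven breakup is concentrated in the near-wall region.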
Yi et al. (2021) studied the behavior of an oil-water emulsion in a Taylor-Couette (TC) system, which consists of a fluid layer between two counter-rotating cylinders. The resulting flows featured two thin turbulent boundary layers (TBLs) near the surfaces of the inner and outer cylinders, leaving a larger bulk region with more homogeneous and isotropic turbulence. Yi et al. (2021) first employed the KH framework, assuming that most of the droplet breakup occurred within the bulk region. Although the scaling law was verified, as the results were reanalyzed (Yi et al., 2022, 2023), it became evident that the value of the critical Weber number is much smaller than 1, from 0.013 to 0.018 using the bulk turbulence, implying that the turbulent stresses were not sufficient to overcome the interfacial tension in the bulk. This led them to conclude that the majority of droplets stayed in the bulk but most breakup occurred in the TBLs. When using Levich's definition of Weber number, Yi et al. (2022) found that the critical Weber number is close to 5, which is order unity and more reasonable. This finding suggests that the breakup was indeed driven primarily by the mean shear in the TBL, whose thickness was around 5 times the average droplet diameter.
Bubble breakup also occurs when bubbles are directly injected into the near-wall region of a TBL. Madavan et al. (1985) found that the bubble size in the TBL is determined by the free-stream velocity and gas flow rate, and is not affected by the method of gas injection. This finding implies that the size distribution of bubbles is primarily controlled by breakup and coalescence. Rather than following equation 3.1, Pal et al. (1988) proposed a new way to calculate the Weber number by estimating the local energy dissipation rate \(\epsilon\sim U_{\tau}^{3}/\theta\) (\(\theta\) is the momentum thickness) experienced by bubbles based on the turbulence within the boundary layer. Sanders et al. (2006) revised this definition, replacing the momentum thickness \(\theta\) with \(\kappa y\), resulting in a new definition of the turbulent Weber number.
\[We_{t}=\frac{2\rho_{c}U_{\tau}^{2}D^{5/3}}{(\kappa y)^{2/3}\sigma} \tag{3.2}\]
Bubbles located approximately \(y=\)1 mm (\(y\approx 370\delta_{0}\)) away from the wall in flows with a friction velocity of \(U_{\tau}=0.37\) m/s have been observed to have a size distribution of 320\(\pm\)130 \(\mu\)m (Sanders et al., 2006). This size distribution can be explained by assuming \(We_{t}\) in equation 3.2 has a critical value. It is important to note, however, that for a critical \(We_{t}\) near unity, the corresponding \(We_{\mathcal{S}}\) from equation 3.1 is roughly 19.
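As a consistency check, inverting equation 3.2 for an assumed critical Weber number of order unity and the flow parameters quoted above recovers a bubble size close to the reported distribution. The surface tension and the exact critical value used below are assumptions.

```python
# Rough estimate of the bubble size implied by a critical turbulent Weber number of order one
# in equation 3.2, using the flow parameters quoted for Sanders et al. (2006).
rho_c, sigma = 1000.0, 0.072   # water and a clean air-water surface tension [N/m] (assumed)
kappa = 0.4
U_tau, y = 0.37, 1e-3          # friction velocity [m/s] and wall distance [m] from the text
We_c = 1.0                     # assumed critical value of order unity

# Invert We_t = 2*rho_c*U_tau^2*D^(5/3) / ((kappa*y)^(2/3)*sigma) for D at We_t = We_c
D = (We_c * (kappa * y) ** (2 / 3) * sigma / (2 * rho_c * U_tau**2)) ** (3 / 5)
print(f"Estimated bubble diameter: {D*1e6:.0f} micron")   # compares with the reported 320 +/- 130 micron
```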
#### 2.1.3 Mixed regime (\(We_{t}\approx We_{\mathcal{S}}\))
Most turbulent flows, e.g. homogeneous shear flow (Rosti et al. 2019, Trefftz-Posada et al. 2023), turbulent pipe or channel flows (Angeli & Hewitt 2000, Scarbolo et al. 2015, Mangani et al. 2022), breaking waves (Garrett et al. 2000, Deane & Stokes 2002), turbulent jets (Martinez-Bazan et al. 1999), von Karman swirling flow (Ravichandar et al. 2022), or stirred tanks (Shinnar 1961), typically involve both large-scale flows and turbulence.
Efforts have been made to minimize the impact of large-scale flows by injecting bubbles or droplets in regions that are closer to HIT, such as the centerline of jets (Martinez-Bazan et al. 1999) or the center of the von Karman swirling flow (Ravichandar et al. 2022). However, as the injected bubbles or droplets are carried away from the injection point, the influence of large-scale flows may still be present.
In chemical or petroleum engineering, the size of oil droplets broken by turbulence in pipes or stirred tanks is often studied using the critical Weber number defined based on the global energy dissipation rate \(\langle\epsilon\rangle\), where \(\langle...\rangle\) represents the ensemble average over the entire device. For batch rotor-stator systems and conventional stirred tanks, \(\langle\epsilon\rangle\) scales with the rotor speed \(N\) and the rotor diameter \(L\), as expressed by \(\langle\epsilon\rangle\sim N^{3}L^{2}\) (Rushton 1950, Chen & Middleman 1967). This \(\langle\epsilon\rangle\) leads to the critical Weber number being defined as \(We_{t}^{c}=\rho_{c}N^{2}L^{4/3}D^{5/3}/\sigma\), from which the critical drop size can then be determined.
In pipe flows, the global energy dissipation rate is expressed as \(\langle\epsilon\rangle=fU_{c}^{3}/2D_{p}\), where \(f\) is the friction factor, \(U_{c}\) is the mean axial velocity of the continuous phase, and \(D_{p}\) is the pipe diameter. This leads to a critical Weber number that scales with \(Df^{2/3}\). However, Kubie & Gardner (1977) argued that the velocity scale should be the fluctuation velocity, not the centerline velocity. The fluctuation velocity is approximately equal to \(1.3U_{\tau}\), where \(U_{\tau}=(f/8)^{1/2}U_{c}\). Based on this, the critical Weber number can be rewritten as \(We_{t}^{c}=f\rho_{c}DU_{c}^{2}/\sigma\), which scales with \(f\) instead of \(f^{2/3}\). However, experiments conducted by Angeli & Hewitt (2000) found that the critical drop size scales with \(f^{-3}\), albeit from a very narrow range of \(f\). Thus, more experiments are needed to fully resolve the debate and determine the appropriate relationship between the critical drop size and friction factor in pipe flows.
#### 2.1.4 Viscous effects in the inertia-dominated regime
For droplets with size \(D\gg\eta\), the deformation is dominated by flow inertia and the viscosity of the carrier phase can be neglected. However, the inner viscous damping may still play a significant role in the deformation dynamics, comparable to or even exceeding the impact of surface tension, an effect quantified by the Ohnesorge number, \(Oh\).
Davies (1985) and Calabrese et al. (1986) considered this problem and assumed a total balance of stresses between the external forcing by turbulence and internal damping: \(\rho_{c}\left(\varepsilon D\right)^{2/3}\sim\sigma/D+\mu_{d}\left(\varepsilon D \right)^{1/3}\sqrt{\rho_{c}/\rho_{d}}/D\). It can also be expressed in the dimensionless form, \(We_{t}\sim c_{1}+c_{2}Oh^{2}Re_{d}\sqrt{\rho_{c}/\rho_{d}}\), where \(Re_{d}=\rho_{d}(\epsilon D)^{1/3}D/\mu_{d}\) is the droplet Reynolds number based on the eddy velocity and inner fluid properties, and \(c_{1}\) and \(c_{2}\) are two fitting constants. Droplets will deform if the left side is larger than the right side, which implies that quantities of interest \(\mathcal{Q}\) (e.g. aspect ratio or breakup frequency) should be a function of the new dimensionless number following
\[\mathcal{Q}=f\left(\frac{We_{t}}{c_{1}+c_{2}Oh^{2}Re_{d}\sqrt{\rho_{c}/\rho_{ d}}}\right) \tag{5}\]
Equation 5 indicates that, for deformation and breakup, the primary dimensionless number is \(We_{t}\) when the inner viscous damping is negligible, and \(We_{t}/Oh^{2}Re_{d}\sqrt{\rho_{c}/\rho_{d}}\) when it is
important to consider.
To investigate the viscous effect, Eastwood et al. (2004) injected oil droplets in a turbulent jet along the centerline, using the same setup as Martinez-Bazan et al. (1999) in their investigation of bubble breakup. The values of \(Oh\) in the experiment ranged from \(\mathcal{O}(10^{-2})\) to \(\mathcal{O}(10^{-1})\), and \(We_{t}\) was roughly \(\mathcal{O}(10)\). As illustrated in **figure 1b**, the experiment revealed a clear long filament, indicating a significant deformation preceding breakup. The extent of stretching increased with an increase in droplet viscosity, and droplets within the inertial sub-range stretched to lengths comparable to the local integral scale before fragmentation. This long filament was confirmed by other experiments (Andersson and Andersson, 2006; Solsvik and Jakobsen, 2015) and simulations (Hakansson et al., 2022), and it was found to be connected to the large number of daughter droplets generated from droplet breakup.
Recognizing the importance of this process, Maass and Kraume (2012) adopted the idea originally proposed by Janssen and Meijer (1993) to describe a drop elongating in one dimension and thinning in the other two exponentially over time, driven by a constant straining flow. Given a stretching rate, the breakup time could be estimated once a critical diameter of the neck was determined, which was assumed to be related to the critical Capillary number based on \(\mu_{d}\) and the stretching rate. However, this model did not account for the instability of the filament itself (Ruth et al., 2022) or the possible interruption by small-scale eddies, as it assumed a persistent elongation at the scale of the filament.
Vankova et al. (2007) investigated the size of emulsion droplets produced using a high-pressure homogenizer (HPH) with various oils, resulting in a range of \(Oh\) from \(\mathcal{O}(10^{-1})\) to \(\mathcal{O}(10)\). The authors adopted equation 5 and adjusted two constants, \(c_{1}\) and \(c_{2}\), to fit their experimental results. The obtained values were 0.78 and 0.37 for \(c_{1}\) and \(c_{2}\), respectively. However, subsequent analysis by Zhong and Ni (2023) questioned the validity of the linear combination of the restoring and dissipative terms in equation 5 and proposed a new equation to better collapse all the data.
\[\mathcal{Q}=f\left(\frac{We_{t}}{1+Oh}\right) \tag{6.1}\]
\(\mathcal{Q}\), in this case, is the non-dimensionalized breakup frequency. This quantity will be further discussed in detail in Section 3.2. Note that this relationship was established based on limited experimental data. In order to further examine the validity of this relationship, it is possible to use simulation databases, such as the one by Mangani et al. (2022), with well-controlled characteristics that cover a broad range of density and viscosity ratios in turbulence.
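To see how the two groupings differ in practice, the sketch below evaluates both the equation 5 group (with the constants fitted by Vankova et al. 2007) and the equation 6.1 group for a single assumed oil drop; the fluid properties, drop size, dissipation rate, and the factor of 2 in the \(We_{t}\) definition are illustrative assumptions.

```python
import numpy as np

# Compare the two proposed dimensionless groups controlling viscous droplet breakup:
# equation 5 with the constants fitted by Vankova et al. (2007), and equation 6.1.
c1, c2 = 0.78, 0.37
rho_c, rho_d = 1000.0, 900.0      # assumed water/oil densities [kg/m^3]
mu_d, sigma = 0.05, 0.02          # assumed oil viscosity [Pa s] and interfacial tension [N/m]
D, eps = 1e-4, 1e3                # assumed drop size [m] and dissipation rate [m^2/s^3]

u_D = (eps * D) ** (1 / 3)                     # eddy velocity at the drop scale
We_t = 2 * rho_c * u_D**2 * D / sigma          # turbulent Weber number (factor of 2 assumed)
Oh = mu_d / np.sqrt(rho_d * sigma * D)         # Ohnesorge number
Re_d = rho_d * u_D * D / mu_d                  # droplet Reynolds number (inner fluid properties)

group_eq5 = We_t / (c1 + c2 * Oh**2 * Re_d * np.sqrt(rho_c / rho_d))
group_eq6 = We_t / (1 + Oh)
print(f"We_t = {We_t:.2f}, Oh = {Oh:.2f}, eq-5 group = {group_eq5:.2f}, eq-6.1 group = {group_eq6:.2f}")
```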
### 2.2 Viscous-dominated regime (\(D<\eta\))
#### 2.2.1 Experimental Methods
There are three main experimental methods for producing droplets with \(D<\eta\): (a) generating turbulence with a high \(\epsilon\), whose required value can be estimated by simultaneously satisfying the two criteria \(Ca_{t}=\sqrt{\rho_{c}\mu_{c}\epsilon}D/\sigma>Ca_{t}^{c}\) and \(D<\left(\nu_{c}^{3}/\epsilon\right)^{1/4}\); (b) reducing surface tension by adding surfactants; and (c) increasing the viscosity of the carrier phase. In food processing industries, including dairy, breaking droplets into nanometer sizes is important for the desired texture, color, and stability for storage. Options (b) and (c) are less ideal due to the required food-grade chemical additives, leaving option (a) as the primary method. The HPH is the key technique for this purpose (Hakansson, 2019). It uses a high-pressure piston pump (50-200 MPa) and a narrow gap (\(\mathcal{O}(100)\)\(\mu\)m) to accelerate emulsions to velocities of up to \(\mathcal{O}(100)\) m/s, creating a localized turbulent jet (Bisten and Schuchmann, 2016) with \(\epsilon\) in the range of \(\mathcal{O}(10^{8})\) to \(\mathcal{O}(10^{9})\) m\({}^{2}\)/s\({}^{3}\) and fragmenting droplets to sizes of
\(\mathcal{O}(10-100)\) nm.
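The two criteria in option (a) can be checked numerically. In the sketch below, the droplet size, interfacial tension, and critical capillary number are assumed values; for these numbers, only the higher dissipation rate raises \(Ca_{t}\) above the assumed critical value while keeping \(D\) below \(\eta\).

```python
import numpy as np

# Assumed, illustrative parameters for an oil-in-water nanoemulsion (not from a specific study)
rho_c, mu_c = 1000.0, 1e-3          # water density [kg/m^3] and viscosity [Pa s]
nu_c = mu_c / rho_c                  # kinematic viscosity [m^2/s]
sigma = 0.005                        # interfacial tension with surfactant [N/m], assumed
D = 100e-9                           # candidate droplet diameter [m], assumed
Ca_c = 0.5                           # assumed critical capillary number of order unity

for eps in (1e8, 1e9):               # dissipation rates quoted for high-pressure homogenizers
    eta = (nu_c**3 / eps) ** 0.25                      # Kolmogorov length scale
    Ca_t = np.sqrt(rho_c * mu_c * eps) * D / sigma     # turbulent capillary number
    print(f"eps = {eps:.0e} m^2/s^3: eta = {eta*1e9:4.0f} nm, D/eta = {D/eta:.2f}, Ca_t = {Ca_t:.2f}")
```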
The high-speed colloid mill (rotor-stator) system is another commonly used method for emulsion preparation. This type of system is similar to the high-Reynolds-number TC system (van Gils et al., 2011, Grossmann et al., 2016), but with a small gap size (\(h=\mathcal{O}(100)\)\(\mu\)m) and a high rotor spin rate of \(\omega_{i}=\mathcal{O}(10^{4})\) revolutions per minute (RPM) for an inner cylinder with a radius of \(r_{i}=\mathcal{O}(10)\) cm, which results in a moderate Reynolds number (\(Re=\omega_{i}r_{i}h/\nu_{c}\)) at around \(\mathcal{O}(10^{4})\) but a significant mean shear and turbulent energy dissipation rate (Schuster et al., 2012).
#### 2.2.2 Negligible inner viscosity \(Oh\ll 1\)
By systematically increasing the viscosity of the continuous oil phase (\(\mu_{c}\)) by two orders of magnitude while keeping the dispersed aqueous phase constant (option c in the previous section), Boxall et al. (2012) studied the transition of the dynamics of droplet breakup from the inertia-dominated to the viscous-dominated regimes. The droplets were fragmented by turbulence in a customized mixing cell driven by a six-blade impeller. The droplet size was determined using the focused beam reflectance method, and the average droplet size was calculated only after the steady state was reached, which took approximately three hours.
If \(Oh\) is negligibly small, the only dimensionless number that matters to the problem is the capillary number (\(Ca_{t}\)). Assuming that a critical capillary number (\(Ca_{t}^{c}\)) exists, Shinnar (1961) suggested that the critical droplet size (\(D^{c}\)) can be determined as follows
\[D^{c}=\frac{Ca_{t}^{c}\sigma}{\sqrt{\mu_{c}\rho_{c}\epsilon}} \tag{7}\]
In the experiments conducted by Boxall et al. (2012), the impeller speed (\(N\)) and diameter (\(L\)) were kept almost constant, so the energy dissipation rate (\(\epsilon\sim N^{3}L^{2}\)) did not vary significantly. As \(\mu_{c}\) increased, it was shown that the droplet size remained unchanged for low values of \(\mu_{c}\), and scaled with \(\mu_{c}^{-1/2}\) in the viscous-dominated regime at high \(\mu_{c}\). This finding provides direct support for Equation 7.
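The \(\mu_{c}^{-1/2}\) trend follows directly from equation 7 at fixed \(\epsilon\); the sketch below evaluates it over a two-decade change in carrier viscosity. The interfacial tension, density, dissipation rate, and critical capillary number are assumed values, not those of the Boxall et al. (2012) experiments.

```python
# Illustrate the mu_c^(-1/2) scaling of the critical drop size in equation 7 at fixed dissipation rate.
rho_c = 1000.0        # assumed carrier density [kg/m^3]
sigma = 0.03          # assumed interfacial tension [N/m]
eps = 1e4             # assumed dissipation rate [m^2/s^3]
Ca_c = 0.5            # assumed critical capillary number

for mu_c in (1e-3, 1e-2, 1e-1):   # carrier viscosity increased by two orders of magnitude
    D_c = Ca_c * sigma / (mu_c * rho_c * eps) ** 0.5
    print(f"mu_c = {mu_c:.0e} Pa s  ->  D_c = {D_c*1e6:.1f} micron")
```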
#### 2.2.3 Large inner viscosity \(Oh\gtrsim 1\)
In the previous section, a water-in-oil emulsion was considered and the viscosity of the dispersed phase was negligible in comparison to the continuous phase. For other types of emulsions, such as oil-in-water, \(\mu_{d}\) is large and the viscous damping by the inner fluid cannot be neglected, and it is likely that \(Ca_{t}^{c}\), if it exists, is dependent on the value of \(Oh\).
To model this dependence, Gupta et al. (2016b) proposed a model based on the physical picture of a portion of the droplet, of the size of the instability length scale, being extruded from the parent droplet by the surrounding turbulence. By assuming that the propagation timescale of the instability is dominated by the viscous diffusion of eddy momentum into the droplet, a new formulation was derived, \(Ca_{t}^{c}\sim Oh^{2/5}\), indicating that the critical Capillary number is not a constant, but a function of \(Oh^{2/5}\). **Figure 3a** shows the measured \(Ca_{t}^{c}\) over a range of \(Oh\) by systematically varying the types of oils used in experiments. The oil droplet size obtained from the HPH was substituted into the definition of \(Ca_{t}\) to obtain its value. The proposed relation seems to capture the dependence of \(Ca_{t}^{c}\) on \(Oh\) well.
The model proposed by Gupta et al. (2016b) further predicts the critical droplet size in terms of all other fluid properties as follows: \(D^{c}=C_{1}(\sigma^{5/6}\mu_{d}^{1/3})/[(\rho_{d}\sigma)^{1/6}(\mu_{c}\rho_{c}\epsilon)^{5/12}]\). Specifically, it implies that \(D\) scales with \(\mu_{d}^{1/3}\), \(\mu_{c}^{-5/12}\), and \(\epsilon^{-5/12}\). The scaling was compared to experimental data obtained by Wooster et al. (2008), who created an oil-in-water emulsion with varying \(\mu_{c}\) by adding various concentrations of polyethylene glycol into water. The comparison is shown in **figure 3b**. Although the proposed scaling of \(D\sim\mu_{c}^{-5/12}\) by Gupta et al. (2016a) agrees well with the data, it is difficult to distinguish it from the one proposed by Shinnar (1961) (\(D\sim\mu_{c}^{-1/2}\)), which did not account for \(\mu_{d}\) and \(Oh\).
In addition, Wooster et al. (2008) actually reported two datasets using the same emulsions with the only difference being the types of surfactant added. The data adopted by Gupta et al. (2016a) is the one with 5.6 wt % sodium dodecylsulphate (SDS, 98.5%). The data using 5.6 wt % polysorbate 80 (Tween 80, 98%) is also shown in **figure 3b** (triangles), which deviates noticeably from the proposed -5/12 scaling, agreeing better with \(D\sim\mu_{c}^{-1/6}\). This difference implies the possible complexity introduced by surfactant and coalescence.
Nevertheless, assuming the scaling proposed by Gupta et al. (2016a) is correct and combining the two regimes, i.e. with or without significant viscous effects, one can express quantities of interest in the problem of viscous deformation and breakup using an equation similar to equation 5
\[\mathcal{Q}=f\left(\frac{Ca_{t}}{c_{3}+c_{4}Oh^{2/5}}\right) \tag{8.1}\]
where \(c_{3}\) and \(c_{4}\) are constants that remain to be determined; they would set the critical Capillary number and the transitional \(Oh\) separating the regime where the inner viscous damping is important from the one where it is not.
Figure 3: (a) The critical Capillary number measured from nanoemulsions made with a homogenizer as a function of \(Oh\). The black solid line shows the 2/5 scaling, with the blue shaded area representing the \(\pm 0.1\) uncertainty in the scaling exponent. (b) The critical drop size \(D^{c}\) versus the viscosity of the carrier phase for nanoemulsions made with two different types of surfactants. Figures are adapted with permission from Wooster et al. (2008) and Gupta et al. (2016a)
## 3 Deformation and breakup: timescales and dynamics
### 3.1 Characteristic timescales
The evolution of the drop and bubble size distribution over space and time can be predicted by solving the population balance equation, which is a Boltzmann-type equation. This approach has been widely implemented in many simulation methods to predict the dynamics of polydispersed particles, bubbles, and drops that constantly coalesce or break (Marchisio & Fox 2013, Shiea et al. 2020). In the population balance equation, there are three key quantities that describe breakup: the breakup frequency, the daughter bubble/droplet size distribution, and the number of daughters.
For breakup frequency, selecting the right timescale to non-dimensionalize it is the first challenge. The discussions of the characteristic timescale of deformation and breakup can be traced back to Section 127 of the book by Levich (1962), who considered four different breakup timescales based on the inner viscosity and interface velocity. These regimes can be determined by estimating the magnitude of the three terms, the pressure gradient \(\nabla p/\rho_{d}\), unsteady term \(\partial u_{d}/\partial t\), and viscous term \(\nu_{d}\nabla^{2}u_{d}\), in the wave equation that describes the inner fluid motion during breakup. Four timescales have been proposed, as expressed in the following equations.
\[\frac{\partial u_{d}}{\partial t}\sim\frac{\nabla p}{\rho_{d}}\sim\frac{p}{D \rho_{d}};\frac{\partial u_{d}}{\partial t}\sim\frac{D}{\tau^{2}};p\sim\frac{ \sigma}{D}\Rightarrow\tau\sim\sqrt{\frac{\rho_{d}D^{3}}{\sigma}}\text{ (low viscosity, low speed)} \tag{10}\]
\[\frac{\nabla p}{\rho_{d}}\sim\nu_{d}\nabla^{2}u_{d}\sim\frac{\nu_{d}}{D\tau}; \frac{\nabla p}{\rho_{d}}\sim\frac{p}{D\rho_{d}};p\sim\frac{\sigma}{D}\Rightarrow \tau\sim\frac{\mu_{d}D}{\sigma}\text{ (high viscosity, low speed)} \tag{11}\]
\[\frac{\partial u_{d}}{\partial t}\sim\frac{\nabla p}{\rho_{d}}\sim\frac{p}{D \rho_{d}};\frac{\partial u_{d}}{\partial t}\sim\frac{D}{\tau^{2}};p\sim\rho_ {c}u_{c}^{2}\Rightarrow\tau\sim\frac{D}{u_{c}}\sqrt{\frac{\rho_{d}}{\rho_{c}}} \text{ (low viscosity, high speed)} \tag{12}\]
\[\frac{\nabla p}{\rho_{d}}\sim\nu_{d}\nabla^{2}u_{d}\sim\frac{\nu_{d}}{D\tau}; \frac{\nabla p}{\rho_{d}}\sim\frac{p}{D\rho_{d}};p\sim\rho_{c}u_{c}^{2} \Rightarrow\tau\sim\frac{\mu_{d}}{\rho_{c}u_{c}^{2}}\text{ (high viscosity, high speed)} \tag{13}\]
where \(\tau\) is the characteristic breakup timescale. \(u_{d}\) is the characteristic inner fluid velocity, which does not show up in the final estimation of \(\tau\) because \(u_{d}\) scales roughly with \(D/\tau\).
The eddy turnover time has been proposed as another characteristic timescale, \(t_{D}=\epsilon^{-1/3}D^{2/3}\), for describing bubble fragmentation in breaking waves (Garrett et al. 2000, Deane & Stokes 2002, Chan et al. 2021, Gao et al. 2021). In particular, the scaling between \(t_{D}\) and \(D\) directly results in the steady-state bubble size distribution scaling with \(D^{-10/3}\), which was also observed in droplet breakup in turbulence (Soligo et al. 2019, Crialesi-Esposito et al. 2023a). Note that the eddy turnover time is, in fact, in line with equation 12 given by Levich (1962), if the characteristic velocity scale (\(u_{c}\)) of the outer flow is set as the eddy velocity at the bubble size \(u_{D}=\left(\epsilon D\right)^{1/3}\), as proposed in the KH framework. The only difference left is that \(t_{D}\) does not account for the density ratio between the two phases.
Another proposed timescale is the inverse of the natural oscillation frequency, \(f_{2}\), which is associated with the second eigenmode of weak-amplitude oscillations (Lamb 1879). Assuming inviscid fluids, \(1/f_{2}=\sqrt{(3\rho_{d}+2\rho_{c})D^{3}/(30\sigma)}\). Although similar to the timescale listed in equation 10, there is an important distinction to note: Levich's model only accounted for the density of the inner fluid, whereas a more complicated relationship with both \(\rho_{d}\) and \(\rho_{c}\) is required for \(1/f_{2}\).
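For orientation, the sketch below evaluates the four Levich timescales (equations 10-13), the eddy turnover time, and the inverse natural frequency for one assumed case: a 1 mm air bubble in water at \(\epsilon=1\) m\(^{2}\)/s\(^{3}\), with the outer velocity scale taken as the eddy velocity at the bubble size. The natural frequency uses the bubble-limit expression quoted earlier, \(f_{2}=\sqrt{96\sigma/(\rho_{c}D^{3})}\); all parameter values are illustrative.

```python
import numpy as np

# Assumed case: a 1 mm air bubble in water at a dissipation rate of 1 m^2/s^3.
rho_c, rho_d = 1000.0, 1.2        # water / air densities [kg/m^3]
mu_d = 1.8e-5                     # air viscosity [Pa s]
sigma = 0.072                     # surface tension [N/m]
D = 1e-3                          # bubble diameter [m]
eps = 1.0                         # assumed dissipation rate [m^2/s^3]
u_c = (eps * D) ** (1 / 3)        # eddy velocity at the bubble scale [m/s]

tau_eq10 = np.sqrt(rho_d * D**3 / sigma)          # low viscosity, low speed
tau_eq11 = mu_d * D / sigma                       # high viscosity, low speed
tau_eq12 = (D / u_c) * np.sqrt(rho_d / rho_c)     # low viscosity, high speed
tau_eq13 = mu_d / (rho_c * u_c**2)                # high viscosity, high speed
tau_eddy = eps ** (-1 / 3) * D ** (2 / 3)         # eddy turnover time at scale D
f2 = (96 * sigma / (rho_c * D**3)) ** 0.5         # natural oscillation frequency (bubble limit)

for name, t in [("eq 10", tau_eq10), ("eq 11", tau_eq11), ("eq 12", tau_eq12),
                ("eq 13", tau_eq13), ("eddy turnover", tau_eddy), ("1/f2", 1 / f2)]:
    print(f"{name:14s}: {t:.2e} s")
```

The wide spread of the resulting values illustrates why the choice of timescale used to normalize the breakup frequency matters so much when comparing datasets.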
### 3.2 Experimental results
Zhong & Ni (2023) compiled experimental results on the breakup frequency of bubbles and oil droplets with sizes \(D>\eta\) based on the recommendation made by Hakansson (2020). The eddy turnover time of size \(D\) was used as the characteristic timescale, chosen from a list of options mentioned in the previous section. **Figure 4** clearly shows that the breakup frequency drops sharply as \(We_{t}\) decreases, indicating droplets or bubbles with smaller \(We_{t}\) take longer to break. Thus, it is evident that the definition of a critical Weber number depends on the observation time (Vela-Martin & Avila, 2022). If one waits longer during an experiment, smaller droplets or bubbles can be obtained for a given level of turbulence.
Although the data for bubbles (**figure 4a**) showed better agreement, discrepancies were still noticeable at high \(We_{t}\). Specifically, Martinez-Bazan et al. (1999) reported a plateau in breakup frequency (\(g\)) close to 1, while Vejrazka et al. (2018) claimed that \(g\) increased towards 10 without reaching a plateau. This disparity could be due to the different experimental conditions employed: the former was conducted in turbulence closer to HIT, while the latter involved injecting bubbles along the centerline of a turbulent jet. As the bubbles migrate downstream and spread away from the centerline, they may experience mean shear.
The inverse of the mean shear rate (\(\mathcal{S}\)) was suggested by Zhong & Ni (2023) as another potential timescale, estimated by dividing the centerline velocity of the turbulent jet by its width, following the canonical turbulent jet. The estimated shear rate is plotted as a black solid line in **figure 4a**, matching the measured breakup frequency quite well and providing an alternative timescale to consider for breakup frequency in inhomogeneous and anisotropic turbulence.
Figure 4: Breakup frequency of (a) bubbles and (b) droplets normalized by the eddy turnover frequency, \(f_{e}=\epsilon^{1/3}D^{-2/3}\), as a function of the key dimensionless number. The datasets that were compiled include VZS18 (Vejrazka et al., 2018), MML99 (Martinez-Bazán et al., 1999), SJ15 (Solsvik & Jakobsen, 2015), VT07 (Vankova et al., 2007), EAL04 (Eastwood et al., 2004), VMA22 (Vela-Martin & Avila, 2022), HFSJ20 (Hers et al., 2020). Models include the ones by CT77 (Coulaloglou & Tavlarides, 1977), QTN22 (Qi et al., 2022), QMN20 (Qi et al., 2020), and WWJ03 (Wang et al., 2003).
For droplet data in **figure 4b**, equation 6.1 mentioned in Section 2.1.4, proposed by Zhong & Ni (2023), was used to compile different datasets with different \(Oh\). The datasets provided by Vankova et al. (2007), with \(Oh\) ranging from \(\mathcal{O}(10^{-2})\) to \(\mathcal{O}(10)\), collapsed with one another well using equation 6.1. However, when including all the datasets, a nearly two-orders-of-magnitude variation in the breakup frequency was observed. This difference primarily arises from the experiments conducted by Vankova et al. (2007) in a homogenizer, where the drop size was on the order of \(\mathcal{O}(10^{-5})\) m, as compared to other experiments that involved much larger drops of \(\mathcal{O}(10^{-3})\) m, implying either large systematic uncertainties between large- and small-scale experiments or a potential hidden size dependence that was not accounted for in the current selection of dimensionless groups.
The deformation and breakup of bubbles and droplets could potentially be understood under a unified framework if a suitable set of dimensionless numbers is chosen to collapse all available data. In an attempt to achieve this, Zhong & Ni (2023) selected two models that represent the upper (Wang et al., 2003) and lower (Qi et al., 2020) bounds of bubble experiments, as illustrated by the shaded area in **figure 4a**. The same shaded area was overlaid twice on top of the droplet datasets in **figure 4b**, once with (lower shaded area with dashed lines as bounds) and once without (upper shaded area with solid lines as bounds) the density ratio, \(\sqrt{\rho_{d}/\rho_{c}}\), as suggested by Levich's timescales (equation 12), to show how well the bubble and droplet data collapse. The inclusion of \(\sqrt{\rho_{d}/\rho_{c}}\) resulted in the collapse of most of the available bubble and droplet data, except for the dataset by Vankova et al. (2007). In contrast, when the density ratio was not considered, the bubble data showed better agreement with the results by Vankova et al. (2007). This finding suggests that the existing droplet data exhibits too much disparity to draw a definitive conclusion regarding the effectiveness of including the density ratio term for characterizing the breakup timescale.
### 3.3 Numerical simulations
In addition to experiments, with the development of more advanced direct numerical simulation (DNS) algorithms for two-phase flows (Elghobashi, 2019) and Graphics Processing Unit (GPU) based codes (Crialesi-Esposito et al., 2023), it is possible to conduct a large number of simulations of breakup events to collect statistics. For example, Liu et al. (2021) implemented an efficient simulation scheme for the phase-field method to simulate the breakup of a large drop and the coalescence of \(\mathcal{O}(10^{3})\) drops.
In addition to the simulation schemes, in general, two strategies have been adopted so far. The first one involves a larger simulation domain with many drops and a limited number of selected dimensionless numbers, and drops are allowed to break and coalesce (Dodd & Ferrante, 2016; Roccon et al., 2017; Scarbolo et al., 2015; Mangani et al., 2022; Crialesi-Esposito et al., 2023). This strategy is particularly suitable for investigating breakup at high concentrations in complex environments that are relevant to many applications, such as emulsions and breaking waves. Since it simulates both breakup and coalescence in turbulence, it also helps illustrate the energy transferred between the two phases (Dodd & Ferrante, 2016; Crialesi-Esposito et al., 2023).
The second method relies on a smaller domain with only one drop but many more runs, ranging from hundreds (Riviere et al., 2021) to over 30,000 (Vela-Martin & Avila, 2022), to cover a wider parameter space. The advantage of this method is the isolation of
the breakup events without the complication of coalescence. This approach is particularly useful for investigating parameters under which breakup takes a long time, a regime where experiments suffer from large uncertainty and finite residence time.
### 3.4 Deformation and breakup models
If reliable models for describing deformation and breakup can be developed, it is much more computationally efficient to integrate these models along the Lagrangian trajectories of point bubbles/droplets in turbulence to study their breakup frequency and probability. In the following, we review some of the deformation and breakup models.
For viscous fluids, Maffettone & Minale (1998) developed a model (M&M) to describe the evolution of both shape and orientation of neutrally-buoyant spheroidal droplets in a linear velocity gradient. This model was validated against several experimental studies in low Reynolds number. Recently, the model has been applied to simulating the deformation of many sub-Kolmogorov-scale neutrally-buoyant droplets (\(D\ll\eta\)) in turbulence by integrating the M&M equation numerically along their Lagrangian trajectories (Biferale et al. 2014, Spandan et al. 2016).
For inertia-dominated deformation and breakup, it is much more challenging to develop a deformation model. One model simplified the problem by ignoring the orientation and proposed to describe a droplet as a linear damped oscillator that is forced by the instantaneous turbulent fluctuations at the drop scale (Risso & Fabre 1998, Lalanne et al. 2019). The equation can be written in a dimensionless form as follows,
\[\frac{d^{2}\hat{a}}{d\hat{t}^{2}}+2\xi\frac{d\hat{a}}{d\hat{t}}+\hat{a}=K^{ \prime}We_{t}(t) \tag{14}\]
where \(\hat{a}\) represents the difference between the semi-major axis of the deformed geometry and the spherical-equivalent radius, normalized by \(D/2\). The damping coefficient is given by \(\xi=1/(2\pi\tau_{d}f_{2})\), where \(\tau_{d}\) is the damping time scale defined as \(\tau_{d}=D^{2}/(80\nu_{c})\) for bubbles (Risso & Fabre 1998) but takes a much more complicated implicit form for droplets (Lalanne et al. 2019). In contrast to previous models that assumed an additive relationship between viscous stress and surface tension (Davies 1985, Calabrese et al. 1986), this model correctly incorporated the dissipative nature of viscous damping, and it has been successfully compared to experimental data on breakup statistics, even in inhomogeneous turbulence (Galinat et al. 2007, Maniero et al. 2012).
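A minimal sketch of how such a model can be integrated along a trajectory is given below. The damping coefficient, forcing constant, and the synthetic \(We_{t}(t)\) signal are all assumptions made for illustration; they are not the calibrated values or the measured forcing of the cited studies.

```python
import numpy as np

rng = np.random.default_rng(1)

# Model constants: xi and K are assumed illustrative values, not calibrated ones.
xi, K = 0.1, 0.2
dt, n_steps = 1e-2, 20000   # dimensionless time step and number of steps

# Synthetic forcing: a positive, intermittent We_t(t) built from a random walk in log space
# (a stand-in for the turbulence seen along a bubble trajectory, not measured data).
log_We = np.zeros(n_steps)
for i in range(1, n_steps):
    log_We[i] = 0.999 * log_We[i - 1] + 0.05 * rng.standard_normal()
We_t = np.exp(log_We)

# Integrate the linear damped oscillator with a semi-implicit Euler scheme.
a, v = 0.0, 0.0
a_hist = np.empty(n_steps)
for i in range(n_steps):
    acc = K * We_t[i] - 2 * xi * v - a
    v += acc * dt
    a += v * dt
    a_hist[i] = a

print(f"mean deformation a_hat = {a_hist.mean():.3f}, max = {a_hist.max():.3f}")
```

In practice, the forcing would come from velocity statistics sampled along measured or simulated Lagrangian trajectories, and a breakup criterion would be imposed by asking whether \(\hat{a}\) exceeds a chosen critical value.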
To account for the orientation and add multiple deformation mechanisms, Masuk et al. (2021) adapted the M&M model to the inertia-dominated regime by making three important modifications: (a) Velocity gradients were coarse-grained at the size of the bubble; (b) Deformation due to slip velocity was accounted for by using a pseudo-strain-rate tensor; and (c) A pseudo-rotation tensor was added to model the wake-induced bubble rotation. The modified model has been successfully used to predict deformation and orientation for bubbles in both turbulence and quiescent media, with the predicted statistics agreeing well with experimental data. These findings suggest that the modified model effectively captures the key mechanisms responsible for inertial deformation and breakup.
### 3.5 Recent models for breakup frequency
Recent advances in modeling breakup mechanisms have highlighted the importance of several previously neglected factors, including gas density, eddies of different sizes, and turbulence intermittency, which we will summarize here. In the past, air bubbles were often modeled as having negligible density and viscosity. However, it has been shown that bubbles made of heavier gases can break more frequently (Wilkinson et al., 1993). This phenomenon was explained by Andersson and Andersson (2006), who pointed out that deformation typically results in a dumbbell shape with two uneven ends. As the smaller end retracts due to surface tension, air flow accelerates through the neck, which reduces the local pressure and speeds up the breakup process. Larger gas density tends to lower the local pressure and shorten the breakup time even further. These observations and proposed mechanisms have inspired new models developed by Xing et al. (2015) and Zhang et al. (2020), which incorporate the effect of backflow and gas density.
In addition to the density effect, to accurately model bubble-eddy collision, it is crucial to determine which eddy scales should be considered. The KH framework assumes that only the drop-scale eddy is significant, with both larger and smaller scales being negligible. Conversely, some models consider eddies of all length scales from the turbulent spectrum (Karimi and Andersson, 2018; Castellano et al., 2019). However, recent research by Vela-Martin and Avila (2021), which employed direct numerical simulation of a single drop being deformed in turbulence, found that the impact of eddies with different length scales on the variation of surface free energy is not equal. Turbulent fluctuations at scales smaller than the drop diameter cause the majority of surface deformation, while the contribution of scales close to or larger than \(D\) is relatively minor.
Qi et al. (2022) designed an experiment using the head-on collision between two vortex rings to isolate the turbulent scales. During the early stage before the collision, only intact large-scale vortices were accessible, while the post-collision late stage was filled with many small eddies. Despite a lower overall \(We_{t}\) in the late stage, bubbles were found to break up in a more violent and faster manner due to the presence of small eddies. Building on this finding, the authors developed a new model that considers not only the stress criterion, which requires the incoming eddy to exert sufficient stress to overcome the restoring surface tension, but also the time scale. The breakup must occur within the time before the bubble relaxes. This key idea emphasizes that, instead of being gradually and consistently stretched by flows at their own length scales, bubbles are fragmented by small eddies, resulting in a sudden and intense local deformation over a short period of time. The predicted breakup frequency as a function of \(We_{t}\) is shown as the purple solid line in **figure 4a**, which agrees with the experimental data by Vejrazka et al. (2018).
Numerous studies have investigated the effect of turbulence intermittency on bubble breakup. Recent models have examined the impact of intermittency on the turbulent energy spectrum, as noted by Bagkeris et al. (2021) and Solsvik and Jakobsen (2016). However, the effect of intermittency on the distribution of \(\epsilon\), which can be derived from the multi-fractal model and described by a log-normal distribution (Meneveau and Sreenivasan, 1991), is more pronounced than that on the energy spectrum. This distribution can be incorporated into modeling quantities such as the breakup probability (Masuk et al., 2021), eddy velocity (Qi et al., 2022), and breakup frequency (Qi et al., 2020).
In particular, Qi et al. (2020) modified the model originally proposed by Martinez-Bazan et al. (1999) to account for the non-negligible breakup frequencies for small bubbles when exposed to intermittent turbulent eddies. The model prediction is shown as the blue solid line in **figure 4a** and the lower bounds for the two shaded areas in **figure 4b**. The classical model by Coulaloglou and Tavlarides (1977) (black dashed line) fits the data by Vankova et al. (2007) well. However, for most other datasets, a slower decay of the breakup frequency
as \(We_{t}\) decreases is observed, which is better predicted by Qi et al. (2020).
## 4 Modulation of mass, momentum, and heat transfer by deformation
### 4.1 Deformation affecting effective bubble forces
The motion of large bubbles and droplets in turbulence can be characterized by the combined effect of multiple hydrodynamic forces, such as buoyancy, drag, lift, added mass, Basset history, and pressure forces (Magnaudet & Eames 2000, Sridhar & Katz 1995). As the majority of these forces are shape-dependent, it is not surprising that bubble and droplet deformability can significantly impact their translational motion and local concentration in turbulence.
Most research on forces experienced by bubbles has focused on their behavior in laminar shear (Legendre & Magnaudet 1998, Tomiyama et al. 2002, Lu & Tryggvason 2008, Dijkhuizen et al. 2010, Hessenkemper et al. 2020). Bubble deformation, driven primarily by buoyancy, is measured by \(Eo\). As \(Eo\) increases, the lift force undergoes a transition from positive to negative values. This shift in direction is attributed to the stretching and tilting of vorticity generated at the bubble surface, which transforms into a pair of counter-rotating streamwise vortices in the bubble wake. These vortices have the opposite sign compared to those produced around a spherical bubble, resulting in a negative lift force. In addition to vorticity production, direct asymmetric deformation caused by external shear can also lead to negative lift (Zhang et al. 2021, Hidman et al. 2022).
In turbulence, Sugrue (2017) proposed a new dimensionless number taking the product of \(Eo\) and the ratio between the local turbulent kinetic energy and the squared relative velocity between the two phases. This new number is linked to the Weber number based on the fluctuation velocity. The authors carried out extensive simulations, varying lift coefficients, and minimizing the differences between experimental and simulated results. This allowed them to extract lift coefficients for different flow conditions. The results showed that the lift coefficients exhibited a similar inversion to those observed in laminar shear flow. However, two key differences were noted. Firstly, the magnitude of the coefficients was much smaller, and secondly, the inversion diameter was smaller for turbulence-driven cases.
To measure the lift coefficient in turbulence, Salibindla et al. (2020) conducted an experiment in nearly HIT. Although the flow does not have a mean shear, the bubbles were constantly subjected to local shear and vorticity. The transition of bubble rising velocity in turbulence from lower to faster than its counterpart in an otherwise-quiescent medium was found as the bubble size increased. Based on this finding and the access to the statistics of both phases, the lift and drag forces experienced by bubbles with different sizes were determined, and the lift inversion at smaller bubble size was observed experimentally. The lift inversion was correlated to the turbulence-induced deformation measured by \(We_{t}\), which is close to 1 as the inversion occurs, suggesting that turbulence-induced bubble deformation becomes essential. The transition of the bubble's rising velocity was linked to the preferential sampling of different regions (upward or downward) in turbulence. This work also supports the mechanism proposed by Spelt & Biesheuvel (1997) that small spherical bubbles tend to preferentially sample the downward flows in turbulence instead of being trapped in the vortex cores (Wang & Maxey 1993) and also quantitatively explains other previous experiments (Poorte & Biesheuvel 2002, Aliseda & Lasheras 2011, Prakash et al. 2012).
It is worth noting that the transition of bubble rising velocity was not observed in another work conducted by Ruth et al. (2021). In this study, the change in rise velocity was attributed mainly to drag rather than lift, highlighting the need for further investigation into how deformable bubbles modify lift and drag forces in intense turbulence where deformation is driven by local turbulence instead of buoyancy. Nevertheless, once lift and drag forces are determined, the added mass force can also be evaluated experimentally. Recent work by Salibindla et al. (2021) showed that the added mass force experienced by bubbles in turbulence can be accurately modeled using the solid spheroid approximations (Lamb 1879). These findings suggest that the instantaneous added-mass force experienced by deformable bubbles can be approximated by appropriately oriented spheroids with the correct instantaneous aspect ratios.
### 4.2 Turbulent drag reduction
The deformation and breakup of bubbles and drops in turbulent boundary layers have been extensively studied in the context of drag reduction (Ceccio 2010, Murai 2014) in various configurations, including turbulent TC (van Gils et al. 2013), flat plates (Sanders et al. 2006), channel flows (Murai et al. 2007, Tanaka et al. 2021), and even under the model ship hull (Tanaka et al. 2022). Several mechanisms have been proposed to explain the origin of the bubble-induced drag reduction effect (Ferrante and Elghobashi 2004, Lu et al. 2005, Lu and Tryggvason 2008, van den Berg et al. 2007). The successful drag reduction experiments have been summarized by Murai (2014) in two regimes: relatively small bubbles in high-speed flows or large bubbles in low-speed flows. A recent overview of this topic was presented by Lohse (2018), who highlights the difference between bubble-induced drag reduction at small Reynolds numbers and large Reynolds numbers, attributed to the effects of Froude number and Weber number, respectively.
While most experiments on bubble-mediated drag reduction focused on large-scale averaged skin friction, a few studies measured the couplings between the two phases.
Figure 5: The drag of turbulent Taylor–Couette flow during its transition from non-boiling (grey shaded areas) to boiling at \(t=t_{boil}\) and the key quantities, including (a) Liquid temperature \(T_{TC}\), (b) volume fraction \(\alpha\), (c) drag reduction (DR) as a function of time; Two time steps correspond to the photographs shown in (d) and (e). Figures are adapted with permission from Ezeta et al. (2019).
Kitagawa et al. (2005) conducted 2D simultaneous measurements of both phases in a horizontal turbulent channel to determine the mechanism of drag reduction caused by bubbles. The bubbles, which were roughly 530 \(\mu\)m and deformable close to the wall, were shown to cause a drop in the Reynolds stress of the carrier phase. The reduction ratio was almost the same as that of the skin friction coefficient. Murai et al. (2007) used a shear transducer to measure the fluctuation of the wall shear stress due to the passage of individual deformable bubbles with sizes comparable to the boundary layer thickness. They found that the bubbles considerably reduced the local wall shear stress; this reduction was induced by the two roll vortices upstream and downstream of the bubble that modified the local turbulent shear stress (Oishi & Murai, 2014).
Drag reduction has been a topic of extensive study in the context of high-Reynolds-number TC systems because it is a closed system where frictional drag can be measured as the global torque. van Gils et al. (2013) showed that the system transitions from moderate drag reduction of 7% to a more significant one with nearly 40% drag reduction at Reynolds number (\(Re\)) above \(10^{6}\) and gas void fraction of 4%. This transition was observed as the Weber number crosses one and the bubble becomes more deformable, even as the size becomes smaller as \(Re\) increases. As \(Re\) increases and drag decreases, a larger bubble aspect ratio was observed (as shown in **figure 1a**), signaling the connection between deformation and drag reduction. This point was further supported by Verschoof et al. (2016), who showed that the large drag reduction (40%) could be 'turned off' by adding some surfactant. The surfactant reduces surface tension and hinders coalescence, which leads to much smaller bubbles with smaller \(We_{t}\). Similar levels of drag reduction were also observed in boiling TC driven by vapor bubbles, again due to their deformation in turbulence, as shown in **figure 5**. As the vapor bubble volume fraction \(\alpha\) increases, the probability of finding a larger value of \(We_{t}\) increases, probably due to the presence of more-deformable larger bubbles formed by coalescence. Finally, a recent study by Wang et al. (2022) investigated how viscosity ratios between the two phases affect drag in TC and found that the drag coefficient increases as the inner viscosity increases and drop deformability weakens, further reaffirming the importance of deformation in turbulent drag reduction.
Extensive simulations have also been conducted to explore the potential mechanism of drag reduction driven by deformable bubbles/droplets. Iwasaki et al. (2001) demonstrated that droplets can attenuate near-wall streamwise vortices via deformation. Lu et al. (2005) found that large deformable bubbles can lead to significant drag reduction by suppressing streamwise vorticity near the wall, whereas less-deformed bubbles tend to introduce additional shear near the viscous sublayer and increase drag. Spandan et al. (2018) reported that deformable bubbles can reduce drag in TC flows by modulating dissipation in their wakes, regardless of whether the carrier fluid is weakly or highly turbulent. Overall, these studies underscore different mechanisms at play in bubble/droplet-mediated turbulent drag reduction through a deformable interface.
### 4.3 Turbulence modulation
Dodd & Ferrante (2016) performed direct numerical simulations to investigate the behavior of finite-sized drops in decaying isotropic turbulence, exploring a range of Weber numbers, density ratios, and viscosity ratios between the two phases. In this work, the turbulence kinetic energy (TKE) equations were derived to capture the energy transfer between two
phases, with a particular focus on the role of interfacial energy. It was shown that, while the presence of droplets always enhances the dissipation rate near the droplet interface, the initial turbulence decay rate is faster in the presence of more deformable drops (i.e., larger Weber numbers). However, the decay rate becomes independent of the Weber number later on, likely because the turbulence has decayed to a point where it is no longer strong enough to deform or break any drops. The study also demonstrated that droplet coalescence acts as a source of TKE through the power of surface tension, while breakup serves as a sink of TKE.
In their investigation of turbulence modulation at different scales, Freund & Ferrante (2019) analyzed the same data generated by Dodd & Ferrante (2016) using wavelet transforms instead of Fourier transforms, because wavelets confine the effects of non-smoothness locally while preserving spatial information. At distances larger than \(5\eta\) or \(D/4\), the carrier-phase spectra remained nearly unaffected, but the energy at high wavenumbers increased close to the drop interface due to enhanced local velocity gradients. They also observed that drops with larger density ratios reduced the energy at low wavenumbers compared to neutrally-buoyant drops. In a separate study, Scarbolo et al. (2013) examined the interaction between turbulence and one large deformable droplet and showed that the presence of the interface results in vorticity generation and turbulence damping near the interface, and that the distance from the interface over which these effects are present depends on the surface tension.
The spectral signature of turbulence modulation by drops in HIT was also studied by Mukherjee et al. (2019). They showed that the presence of dispersed drops leads to a transfer of energy from large scales to small scales, as the drops subtract energy from the former and inject it into the latter. This transfer of energy is reflected in the energy spectra, which cross the spectra of the single-phase turbulence at a length scale close to the Kolmogorov-Hinze scale, as initially proposed by Perlekar et al. (2014) for a different system. Crialesi-Esposito et al. (2022) provided further insights into the mechanisms behind this phenomenon, showing that surface tension forces play a key role in absorbing energy from large scales and reducing its transfer through advection terms. Eventually, this energy is transferred to small scales by surface tension. They also noted that the modulation of turbulence spectra is more sensitive to the viscosity ratio, while the scale-by-scale energy budget depends more on the volume fractions.
Bubble-induced turbulence in a swarm of rising bubbles in an otherwise quiescent fluid has been extensively studied in recent years. The phenomenon has been investigated (Riboux et al. 2010, Innocenti et al. 2021, Pandey et al. 2022), modelled (Ma et al. 2017, Du Cluzeau et al. 2019), and also recently reviewed by Risso (2018) and Mathai et al. (2020). Mercado et al. (2010) measured the energy spectrum of the carrier phase using phase-sensitive anemometry and found that the energy decays with wavenumber following a power law with exponent \(-3.2\), which is close to the \(-3\) scaling proposed by Lance & Bataille (1991). They also observed that even at a small gas volume fraction, typically from \(0.28\%\) to \(0.74\%\), deformable bubbles tend to cluster along the vertical direction at both small and large scales, which was attributed to deformation (Bunner & Tryggvason 2003).
### Heat and mass transfer
The study of heat and mass transfer in turbulent multiphase flows is a complex and multifaceted topic that deserves a dedicated review, because a wide range of relevant interfacial transfer phenomena, including boiling/condensation (Russo et al., 2014), dissolution (Mac Huang et al., 2015, Farsoiya et al., 2023), melting (Machicoane et al., 2013), evaporation (Birouk and Gokalp, 2006, Duret et al., 2012, Marie et al., 2014, Mees et al., 2020), and the induced Stefan flow, can be potentially modulated by turbulence and a deformable interface.
Deformation and breakup have been shown to affect heat and mass transfer. However, the extent of their influence on these processes is not yet fully understood beyond their effect on the size distribution and interfacial area. Recently, Albernaz et al. (2017) investigated the deformation and heat transfer of a single drop in HIT and found a negative correlation between local curvature and temperature on the droplet surface. Wang et al. (2019) found that the kinematics of deformable bubbles and droplets could significantly enhance the heat transfer in turbulent convection, and revealed that the emergent size distribution of the bubbles and droplets in the system governed the degree of heat transfer enhancement achievable. Dodd et al. (2021) used direct numerical simulation (DNS) to study finite-size, deformable, and evaporating droplets in HIT, and they showed that higher surface curvature induced by deformation and breakup leads to higher evaporation rates, especially for cases with large Weber numbers. Shao et al. (2022) demonstrated that the Stefan flow induced by evaporation reduces the coalescence rate and attenuates the turbulence kinetic energy. Scapin et al. (2022) extended the problem to homogeneous shear flow and found that the larger surface area due to deformation leads to an overall larger mass transfer rate for drops with higher Weber numbers in persistent mean shear. They also observed a weak correlation between the interfacial mass flux and curvature at high temperature and a positive correlation at large Weber number, low ambient temperature, and slower evaporation. Boyd and Ling (2023) simulated the aerodynamic breakup of an acetone drop in a high-speed and high-temperature vapor stream and showed that as the drop deforms, the increase of frontal surface area results in a significantly increased rate of evaporation and a nonlinear decrease in drop volume over time.
## Summary points
1. New experiments capable of measuring the shape of deforming bubbles and drops simultaneously with the detailed surrounding turbulence in 3D have been made possible with the advancement of diagnostic methods and new facilities that can generate controlled turbulence.
2. Turbulence in many applications is usually inhomogeneous and anisotropic. Deformation and breakup in these systems are often subjected to both a non-uniform distribution of turbulence intensity and a persistent large-scale shear.
3. The primary dimensionless parameter for breakup driven by the flow inertia of the carrier phase (\(D\gg\eta\)) is \(We/(1+Oh)\), while for breakup driven by the viscous stress of the carrier phase (\(D\ll\eta\)), it is \(Ca/(c_{3}+c_{4}Oh^{2/5})\). These relationships were established based on limited data, and further studies are required to validate them.
4. Deformation is driven by large-scale eddies and breakup is accelerated by small-scale eddies. The multiscale nature of breakup is a key component in understanding the inertia-dominated breakup.
5. Existing works showed the intricacies of the interplay between the local interface curvature, local interfacial mass flux, and the induced Stefan flow for two-phase heat and mass transfer with a deformable interface.
## Disclosure Statement
The author is not aware of any affiliations, memberships, funding, or financial holdings that might be perceived as affecting the objectivity of this review.
## Acknowledgments
The author thanks colleagues who provided advice and suggestions on the manuscript. The author also acknowledges support from NSF grant CBET 1854475 and CAREER-1905103 and Office of Naval Research grant N00014-21-1-2083 and N00014-21-1-2123.
|
2302.13482 | PyReason: Software for Open World Temporal Logic | The growing popularity of neuro symbolic reasoning has led to the adoption of
various forms of differentiable (i.e., fuzzy) first order logic. We introduce
PyReason, a software framework based on generalized annotated logic that both
captures the current cohort of differentiable logics and temporal extensions to
support inference over finite periods of time with capabilities for open world
reasoning. Further, PyReason is implemented to directly support reasoning over
graphical structures (e.g., knowledge graphs, social networks, biological
networks, etc.), produces fully explainable traces of inference, and includes
various practical features such as type checking and a memory-efficient
implementation. This paper reviews various extensions of generalized annotated
logic integrated into our implementation, our modern, efficient Python-based
implementation that conducts exact yet scalable deductive inference, and a
suite of experiments. PyReason is available at: github.com/lab-v2/pyreason. | Dyuman Aditya, Kaustuv Mukherji, Srikar Balasubramanian, Abhiraj Chaudhary, Paulo Shakarian | 2023-02-27T02:40:05Z | http://arxiv.org/abs/2302.13482v3 | # PyReason: Software for Open World Temporal Logic
###### Abstract
The growing popularity of neuro symbolic reasoning has led to the adoption of various forms of differentiable (i.e., fuzzy) first order logic. We introduce PyReason, a software framework based on generalized annotated logic that both captures the current cohort of differentiable logics and temporal extensions to support inference over finite periods of time with capabilities for open world reasoning. Further, PyReason is implemented to directly support reasoning over graphical structures (e.g., knowledge graphs, social networks, biological networks, etc.), produces fully explainable traces of inference, and includes various practical features such as type checking and a memory-efficient implementation. This paper reviews various extensions of generalized annotated logic integrated into our implementation, our modern, efficient Python-based implementation that conducts exact yet scalable deductive inference, and a suite of experiments. PyReason is available at: github.com/lab-v2/pyreason.
Logic programming, Neuro Symbolic Reasoning, Generalized annotated logic, Temporal logic, First order logic, Open world reasoning, Graphical reasoning, AI Tools
Footnote †: 2023 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
## 1 Introduction
Various neuro symbolic frameworks utilize an underlying logic to support capabilities such as fuzzy logic [1], parameterization [2], and differentiable structures [3]. Typically, implementations of such frameworks create custom software for deduction for the particular logic used, which limits modularity and extensibility. Further, emerging neuro symbolic use cases including temporal logic over finite time periods [4] and knowledge graph reasoning [5] call for a logical framework that encompasses a broad set of capabilities. Fortunately, generalized annotated logic [6] with various extensions [7, 8, 9] captures many of these capabilities. **In this paper we present a new software package called PyReason for performing deduction using generalized annotated logic that captures many of the desired capabilities seen in various neuro symbolic frameworks including fuzzy, open world, temporal, and graph-based reasoning.** Specifically, PyReason includes a core capability to reason about
first order (FOL) and propositional logic statements that can be annotated with either elements of a lattice structure or functions over that lattice. Further, we have provided for additional practical syntactic and semantic extensions that allow for reasoning over knowledge graphs, temporal logic, reasoning about various network diffusion models, and predicate-constant type checking constraints. This implementation provides a fast, memory-optimized realization of the fixpoint operator used in the deductive process. By implementing the fixpoint operator directly (as opposed to a black box heuristic) the software enables full explainability of the result. Consequently, this framework captures not only classical logic, but a wide variety of other logic frameworks including fuzzy logic [10, 11, 12], weighted real valued logic used in logical neural networks [2], van Emden's logic [13], Fitting's bilattice logic [14], various logic frameworks for reasoning over graphs or social networks [9, 8, 15] (as well as the various network diffusion models captured by those frameworks), and perhaps most importantly, logic frameworks where syntactic structure can be learned using differentiable inductive logic programming [3, 16] as well as other neuro symbolic frameworks [17, 7]. The key advantages of our approach include the following:
1. **Direct support for reasoning over knowledge graphs.** Knowledge graph structures are one of the most commonly-used representations of symbolic data. While black box frameworks such as [18] also permit reasoning over graphical structures, they do not afford the explainability of our approach.
2. **Support for annotations.** Classical logic implementations such as Prolog [19] and Epilog [20] inherently do not support annotations or annotation functions, and hence lack direct support for capabilities such as fuzzy operators. Further, our framework goes beyond support for fuzzy operators by enabling arbitrary functions that can be used over real values or intervals of reals. This is a key advantage when reasoning about constructs learned with neuro symbolic approaches such as [2, 3, 16, 17, 7].
3. **Temporal Extensions.** While the framework of [6] was shown to capture various temporal logics, extensions such as [9] have provided syntactic and semantic add-ons that explicitly represent time and allow for temporal reasoning over finite temporal sequences. Following [9], we use a semantic structure that represents multiple time points, but we have implemented this in a compact manner to preserve memory. Our solution allows for fuzzy versions of rules such as "if \(q(A)\) then \(r(A)\) in \(t\) time steps." Note that these capabilities are absent from nearly every current implementation of fuzzy logic.
4. **Use of interpretations.** We define interpretations as annotated functions over predicates and time together. This allows us to capture facts which are true before \(t=0\). While annotated logic [6] can subsume various temporal logics without additional constructs, we have enabled temporal reasoning through incorporating a temporal component in interpretations. By combining annotated predicates and the time variable, we believe our framework is more flexible and suitable for emerging neuro symbolic applications involving time - as such applications will inherently require both time and real-valued annotations. Additionally, note that we do not make a closed world assumption, i.e., we do not assume that anything not mentioned in the initial set of interpretations is \(false\). Instead, we consider all other interpretations to be unknown at the beginning of time.
5. **Graphical Knowledge Structures.** We also implement [8] which provides graphical
syntactic extensions to [6]. This is included in our implementation, notably adding extended syntactic operators for reasoning in such structures (e.g., an existential operator requiring the existence of \(k\) items). An example of such a rule would be a fuzzy version of "if \(q(A)\) and there exist \(k\) number of \(B\)'s such that \(b(A,B)\) then \(r(A)\)".1 Footnote 1: Note that while this example is classical, PyReason supports fully annotated logic, allowing for arbitrarily defined fuzzy operators (e.g., t-norms); see Section 2 and the online supplement for technical details.
6. **Reduction of computational complexity due to grounding.** Our software leverages both the inherent sparsity of the graphical structure and a novel implementation of predicate-constant type checking constraints, which not only significantly improves utility in a variety of application domains but also provides a drastic reduction in the complexity induced by the grounding problem. We are not aware of any other framework for first-order logic that provides both of these capabilities.
7. **Ability to detect and resolve inconsistencies in reasoning.** As logical inferences are deduced through applications of the fixpoint operator over predefined logical rules, logical inconsistencies can not only be detected but also located exactly where they occur in the inference process. We resolve any such inconsistencies by leveraging uncertainty: in the software implementation, as soon as an inconsistency is detected we relax and fix the bounds to complete uncertainty. The ability to check and locate inconsistencies enhances explainability. Neuro symbolic approaches like [2, 7] may also look to leverage inconsistency as part of the loss during the training phase.
In section 2, we outline the syntax and semantics of [6] as well as our extensions. Our software implementation is described in section 3 and is expanded upon in the online only supplement. In section 4, we provide experimental results of our framework to demonstrate reasoning capabilities in two different real-world domains. We have conducted experiments on a supply-chain [21] (\(10K\) constants), and a social media [22] (\(1.6M\) constants) dataset. For evaluation, we used various manually-curated logic programs specifying rules for the temporal evolution of the graph, completion of the graph, and other such practical use-cases (e.g., identifying potential supply chain disruptions) and examined how various aspects affect runtime and memory usage (e.g., number of constants, predicates, timesteps, inference steps, etc.). The results show that both runtime and memory remain almost constant over large ranges, and then scale sub-linearly with increase in network size.
#### Online Resources
Open source python library is available at: pypi.org/project/pyreason.
PyReason codebase can be found at: github.com/lab-v2/pyreason.
Online only supplement is available at: github.com/lab-v2/pyreason/tree/main/lit
## 2 Logical Framework
In this section, we provide an overview of the annotated logic framework with a high-level description of the logical constructs, knowledge graph structure, key optimizations, and
operation of the fixpoint algorithm.
**Knowledge graph.** We assume the existence of a graphical structure \(G=(\mathcal{C},E)\) where the nodes are also constants (denoted set \(\mathcal{C}\)) in a first-order logic framework. The edges, denoted \(E\subseteq\mathcal{C}\times\mathcal{C}\), specify whether any type of relationship can exist between two constants. Similar to recent frameworks combining knowledge graphs and logic [3, 18], we shall assume that all predicates in the language are either unary (which can be thought of as labeling nodes) or binary (which can be thought of as labeling edges). We note that we assume the existence of a special binary predicate \(\mathit{rel}\), which we shall treat as a reserved word. For \((a,b)\in E\) we shall treat \(\mathit{rel}(a,b)\) as a tautology and for \((a,b)\notin E\) we shall treat \(\mathit{rel}(a,b)\) as uncertain. Note that we can support the case of no restrictions on the pairing of constants by creating \(G\) as a fully connected graph. Likewise, we easily support the propositional case by using a graph of a single node (essentially treating unary predicates as ground atoms). We provide a running example in this section. In Figure 1, we illustrate how a knowledge graph is specified in our framework.
**Example 2.1** (Knowledge Graph).: _Consider the following nodes: three students- Phil, John, Mary and two classes- English and Math. Nodes and edges have unary and binary predicates as shown in Fig. 1. Hence we get the following non-ground atoms:_
_student(S), gpa(S), promoted(S)_
_class(C), difficulty(C)_
_friend(S,S')_
_takes(S,C), grade(S,C), expertise(S,C)_
_Here, S, S', and C are variables which when grounded with constants from the graph, produce ground atoms such as:_
_student(john), student(phil), student(mary)_
_class(math), class(english)_
_takes(john,math), takes(mary,english)_
Figure 1: Example of a knowledge graph
_In the propositional case, a non-ground atom reduces to a propositional statement. For example, the predicate "takes(john,math)" can be represented as a propositional statement: "John takes Math class" and can be either True or False. It is true in this example, as shown in Fig. 1._
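To make the running example concrete, the sketch below builds the Figure 1 graph with the Python library NetworkX (the library Section 3 says is used to load and interact with graph data). The attribute names used here ("kind", "predicate") and the in-memory layout are illustrative choices, not PyReason's actual data model.

```python
# Illustrative sketch of the Example 2.1 knowledge graph using NetworkX.
# Unary predicates become node attributes, binary predicates become labelled edges.
import networkx as nx

G = nx.DiGraph()

# Nodes (constants) with their types
for s in ["john", "mary", "phil"]:
    G.add_node(s, kind="student")
for c in ["english", "math"]:
    G.add_node(c, kind="class")

# Binary predicates as labelled edges
G.add_edge("john", "math", predicate="takes")
G.add_edge("mary", "english", predicate="takes")
G.add_edge("mary", "phil", predicate="friend")

# Ground atoms such as takes(john, math) correspond to labelled edges:
print([(u, v, d["predicate"]) for u, v, d in G.edges(data=True)])
```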
**Real-valued Interval Annotations.** A key advantage of annotated logic [6] is the ability to annotate the atoms in the framework with elements of a lattice structure as well as functions over that lattice. In our software, we use a lower lattice structure consisting of intervals that are a subset of \([0,1]\). This directly aligns with the truth interval for fuzzy operators [12], as well as paradigms in neuro symbolic reasoning [2, 7], and social network analysis [8, 9]. We can fully support scalar-valued annotations by simply limiting manipulations to the lower bound of the interval and keeping the upper bound set at \(1\). These annotations can support classical logic by limiting annotations to be \([0,0]\) (false) and \([1,1]\) (true). They can also support tri-valued logic by permitting \([0,1]\), which represents no knowledge. Of course, there is no need to impose such restrictions, especially if it is desirable to support logics that make full use of the interval [2, 8, 9]. Additionally, we support literals as detailed in [7]. We treat negations the same way as in [1] - for an atom annotated with \([\ell,u]\), we annotate its strong negation (\(\neg\)) with \([1-u,1-\ell]\).
**Example 2.2** (Real-valued Interval Annotations).: _Continuing with the previous example, we can support a variety of annotations as described above._
_Propositional logic:_
_student(john): [1,1] (example of a True statement)_
_takes(mary,math): [0,0] (example of a False statement)_
_Fuzzy logic (using scalar values):_
_gpa(john): [X,1], X \(\in\) [0,1]_
_Full interval usage:_
_difficulty(english): [0.3,0.7] (both bounds are used here to capture the variation among students regarding the perceived difficulty of the subject "english")._
_Modeling uncertainty and/or tri-valued logic:_
_Let's assume that we do not have complete knowledge of this network - specifically, we do not have any information about the friendship between John and Phil. So, they might be friends (annotated [1,1]) or not friends (annotated [0,0]). Our framework can model such a case as:_
_friend(john,phil): [0,1]_
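The interval annotations above can be modelled with ordinary Python pairs. The following is a minimal sketch of the lattice of subintervals of [0,1] and of the strong-negation rule quoted in the text; the helper names are ours and this is not PyReason's internal representation.

```python
# Minimal sketch: interval annotations over [0, 1] and strong negation.
TRUE, FALSE, UNKNOWN = (1.0, 1.0), (0.0, 0.0), (0.0, 1.0)

def negate(bound):
    """Strong negation: an atom annotated [l, u] has negation annotated [1-u, 1-l]."""
    l, u = bound
    return (1.0 - u, 1.0 - l)

def tighter(new, old):
    """True if `new` is a subinterval of `old`, i.e. a legal refinement."""
    return old[0] <= new[0] and new[1] <= old[1]

print(negate((0.6, 1.0)))            # -> (0.0, 0.4)
print(negate(UNKNOWN) == UNKNOWN)    # negating "unknown" leaves it unknown: True
print(tighter((0.6, 1.0), UNKNOWN))  # refining [0,1] to [0.6,1] is consistent: True
```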
**Interpretations.** Commonly in logic frameworks, an initial set of facts is used. We use the term "initial interpretations" to capture annotations correct at the beginning of a program. In the envisioned domains - including the ones in which we perform experiments - these initial interpretations shall be represented as a knowledge graph that not only includes graph \(G\) but also attributes on the nodes and edges (resembling predicates) and real-valued interval annotations (specifying the initial annotations for each element). Additionally, following intuitions from various temporal logic frameworks that incorporate both temporal and other real-valued annotations [9, 8, 23, 24, 25], we extend our syntax to provide for temporal annotations as part of the interpretations. Following the related work, time is represented as finite discrete time-points. The initial interpretations comprise what is to be treated as true before time \(0\). Further, with the initial interpretations we can specify predicates as being either static (in other words,
ground atoms formed with those predicates retain the same annotation across all time periods) or non-static (which are permitted to change). The ability to add this restriction has clear benefits in certain domains, and also allows for key implementation efficiencies for reasoning across time periods. Further, it is noted that various inductive logic programming paradigms [3, 26] utilize "extensional" predicates that are also unchanging - which could be treated as "static" in PyReason.
**Syntax**:
\(I(A,\hat{t}):[L,U]\)
where \(A\) can be an atom (propositional case) or a predicate (first order logic), and \(\hat{t}\) is either the time point \(T=t\) for which the interpretation \(I\) is valid or, if the interpretation is static (i.e., it remains unchanged for all time points), \(\hat{t}=s\). So,
\[\hat{t}=\begin{cases}s,&\text{if }I(A,\hat{t})\text{ is static}\\ t,&\text{$t\in T$ if }I(A,\hat{t})\text{ is time-variant}\end{cases} \tag{1}\]
The annotation satisfies \([L,U]\subseteq[0,1]\) (or, in the propositional case, \([L,U]\in\{[0,0],[1,1]\}\)). We incorporate literals in our system by having separate interpretations for an atom and its negation. We note that, except in the case of static atoms, ground atoms at different time points need not be dependent upon each other. For example, atom "a" at time \(1\) can be annotated with \([0.5,0.7]\) and annotated with \([0.1,0.2]\) at time \(2\). There is no monotonicity requirement between time points.
**Example 2.3** (Interpretations).: _Continuing the previous example, the initial set of facts regarding student enrollment is:_
_I(student(john),0) = [1,1] (John is enrolled as a student)_
_I(student(mary),0) = [1,1] (Mary is enrolled as a student)_
_I(student(phil),0) = [0,0] (Phil is not enrolled as a student)_
_Static interpretations can be used for always true facts like:_
_I(class(english), s) = [1,1] (English is a class offered at all time-points)_
_Using temporal annotation to capture variation over time:_
_I(takes(john,math),1) = [1,1] (John takes Math class at time \(t=1\))_
_I(takes(john,math),5) = [0,0] (But is no longer taking Math at \(t=5\))_
_All other interpretations, if unspecified at \(t=0\), are initialized with \([0,1]\)._
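One way to picture the interpretations of Example 2.3 in code is a dictionary keyed by ground atom, holding either a single static bound or per-time-point bounds, mirroring the \(I(A,\hat{t}):[L,U]\) notation. The layout below is purely illustrative and differs from the nested-dictionary implementation described in Section 3.

```python
# Illustrative encoding of Example 2.3: atom -> (time or "s") -> bound.
STATIC = "s"
I = {
    "student(john)":    {0: (1.0, 1.0)},
    "student(mary)":    {0: (1.0, 1.0)},
    "student(phil)":    {0: (0.0, 0.0)},
    "class(english)":   {STATIC: (1.0, 1.0)},   # static: holds at every time point
    "takes(john,math)": {1: (1.0, 1.0), 5: (0.0, 0.0)},
}

def lookup(atom, t):
    """Return the bound of `atom` at time `t`; unspecified atoms default to [0, 1]."""
    entry = I.get(atom, {})
    if STATIC in entry:
        return entry[STATIC]
    return entry.get(t, (0.0, 1.0))

print(lookup("class(english)", 3))     # -> (1.0, 1.0), static fact
print(lookup("takes(john,math)", 5))   # -> (0.0, 0.0)
print(lookup("friend(john,phil)", 0))  # -> (0.0, 1.0), unknown by default
```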
**Logical Rules.** Rules are the key syntactic construct that enables changes to atoms formed with non-static predicates. Historically, logical rules were mostly written by domain experts, until early work such as Apriori [27] and FOIL [28] on learning association rules from data, followed by the emergence of rule mining techniques such as causal rule mining [29] and annotated probabilistic temporal logic [24, 30, 31]. More recently, there has been research on Differentiable Inductive Logic Programming (\(\partial\)ILP) - an inductive rule learning method that learns logical rules from examples [3, 16, 32]. In the list below, \(UnaSet\) and \(BinSet\) are arbitrary sets of unary and binary predicates relevant to the rules, while \(pred\) is always a non-static predicate. Note that the total number of atoms in the body is assumed to be \(n\) (across all different conjunctions). The symbol \(\exists_{k}\) means that there exist at least \(k\) constants such that the ensuing logical sentence is satisfied.
1. Ground rule for reasoning within a single constant or edge: \(pred(c):f(x_{1},\ldots,x_{n})\leftarrow_{\Delta t}\bigwedge_{pred_{i}\in UnaSet} pred_{i}(c):x_{i}\) \[pred(c,c^{\prime}):f(x_{1},\ldots,x_{n})\leftarrow_{\Delta t}\bigwedge_{pred_{i} \in BinSet}pred_{i}(c,c^{\prime}):x_{i}\]
2. Universally quantified non-ground rule for reasoning within a single constant or edge: \(\forall X:pred(X):f(x_{1},\ldots,x_{n})\leftarrow_{\Delta t}\bigwedge_{pred_{i} \in UnaSet}pred_{i}(X):x_{i}\) \[\forall X,X^{\prime}\ s.t.\ (X,X^{\prime})\in E:pred(X,X^{\prime}):f(x_{1}, \ldots,x_{n})\leftarrow_{\Delta t}\bigwedge_{pred_{q}\in BinSet}pred_{q}(X,X^{ \prime}):x_{q}\land\bigwedge_{pred_{r}\in UnaSet}pred_{r}(X):x_{r}\land \bigwedge_{pred_{s}\in UnaSet^{\prime}}pred_{s}(X^{\prime}):x_{s}\]
3. Universally quantified non-ground rule for reasoning across an edge: \(\forall X:pred(X):f(x_{1},\ldots,x_{n})\leftarrow_{\Delta t}\exists_{k}X^{ \prime}:rel(X,X^{\prime}):[1,1]\land\bigwedge_{pred_{q}\in BinSet}pred_{q}(X,X^ {\prime}):x_{q}\land\bigwedge_{pred_{r}\in UnaSet}pred_{r}(X):x_{r}\land \bigwedge_{pred_{s}\in UnaSet^{\prime}}pred_{s}(X^{\prime}):x_{s}\)
4. Non-ground rule with rule based quantifier in the head: \(pred(X):[A_{s}(l_{1},l_{2},\ldots,l_{n}),A_{s}(u_{1},u_{2},\ldots,u_{n})] \leftarrow\bigwedge_{X_{i}\text{ s.t. }(X,X_{i})\in E}pred^{\prime}(X,X_{i}):[l_{i},u_{i}]\) Here, \(A_{s,m}^{k}(S)\) could be the \(m^{th}\) rule based quantifier defined over set \(S\) such that, \(A_{s,m}^{k}(S)=k^{th}\) highest value in set \(S\).
**Example 2.4** (Logical Rules): _For the continuing example we can formulate some interesting rules based on the formats given above as:_
1. \(promoted(X):[T(l_{1},l_{2}),U(u_{1},u_{2})]\leftarrow_{\Delta t=1}student(X):[l _{1},u_{1}]\wedge gpa(X):[l_{2},u_{2}]\) _which says, "If_ \(X\) _is a student with bounds_ \([l_{1},u_{1}]\) _and has a gpa with bounds_ \([l_{2},u_{2}]\)_, then_ \(X\) _is likely to be promoted, at the next timestep, with bounds given by a function of_ \([l_{1},u_{1}]\) _and_ \([l_{2},u_{2}]\)_."_ _Here,_ \(T\) _could be a T-norm. Some well known examples of T-norms are:_ 1. _Minimum:_ \(T(a,b)=T_{min}(a,b)=min(a,b)\)__ 2. _Product:_ \(T(a,b)=T_{prod}(a,b)=a\cdot b\)__ 3. _Lukasiewicz:_ \(T(a,b)=T_{luk}(a,b)=max(0,a+b-1)\)__ _PyReason also supports other well known logical functions like_ \(T-conorm\)_, algebraic functions like_ \(max\)_,_ \(min\)_,_ \(average\)_, among others._
2. \(\forall X,Y\ expertise(X,Y):[0.6*L,1]\leftarrow_{\Delta t=0}grade[X,Y]:[L,1]\wedge student(X):[1,1]\wedge class(Y):[1,1]\) _which says, "If_ \(X\) _is a student who obtains a grade_ \([L,1]\) _in class_ \(Y\)_, then we can estimate_ \(X\)_'s expertise of subject_ \(Y\) _by defining an annotation function_ \([0.6*L,1]\) _over a single annotation_ \([L,1]\)_."_
3. \(gpa(john):[\frac{x_{1}+x_{2}}{2},1]\leftarrow_{\Delta t=0}\exists_{i=2}C_{i}\in \mathcal{C}:class(C_{i}):[1,1]\wedge takes(john,C_{i}):[1,1]\wedge grade(john,C_{i}):[ x_{i},1]\) _which says, "If_ \(john\) _takes and earns grades for two classes, then his_ \(gpa\) _can be calculated using the algebraic function_ \(avg\) _in the head of the given existentially quantified ground rule."_
4. \(friend(S,S^{\prime}):[1,1]\leftarrow_{\Delta t=2}takes(S,C):[1,1]\wedge takes (S^{\prime},C):[1,1]\wedge class(C):[1,1]\) _a propositional rule with temporal extension which states, "If two students_ \(S\) _and_ \(S^{\prime}\) _take the same class_ \(C\)_, they develop a friendship after two timesteps."_
5. \(\forall S,S^{\prime},S^{\prime\prime}\ friend(S,S^{\prime\prime}):[1,1] \leftarrow_{\Delta t=1}friend(S,S^{\prime}):[1,1]\wedge friend(S^{\prime},S^{ \prime\prime}):[1,1]\) _an universally quantified non-ground rule analogous to the associative rule in mathematics which encapsulates, "Having a common friend_ \(S^{\prime}\) _leads to friendship between two people_ \(S\) _and_ \(S^{\prime\prime}\)_."_
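The T-norms named in rule 1 of Example 2.4 are ordinary functions on [0,1] and can be written down directly. The sketch below implements the three T-norms and combines two interval annotations bound-wise, in the spirit of the head \([T(l_{1},l_{2}),U(u_{1},u_{2})]\) of the promoted(X) rule; the helper names are ours and this is only an illustration, not PyReason's annotation-function machinery.

```python
# The three T-norms listed in Example 2.4.
def t_min(a, b):          # minimum T-norm
    return min(a, b)

def t_prod(a, b):         # product T-norm
    return a * b

def t_luk(a, b):          # Lukasiewicz T-norm
    return max(0.0, a + b - 1.0)

def combine(bound1, bound2, lower_fn=t_prod, upper_fn=t_min):
    """Combine two interval annotations, applying one function to the lower
    bounds and (possibly another) to the upper bounds, as in [T(l1,l2), U(u1,u2)]."""
    (l1, u1), (l2, u2) = bound1, bound2
    return (lower_fn(l1, l2), upper_fn(u1, u2))

student_john, gpa_john = (1.0, 1.0), (0.8, 1.0)
print(combine(student_john, gpa_john))                     # -> (0.8, 1.0)
print(t_min(0.9, 0.8), t_prod(0.9, 0.8), t_luk(0.9, 0.8))  # -> 0.8, ~0.72, ~0.7
```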
**Fixpoint Operator for Deduction.** Central to the deductive process is a fixpoint operator (denoted by \(\Gamma\)) which has previously been proven to produce all atoms entailed by a logic program (rules and facts) in [6, 7], and these results were extended to the temporal semantics in [9, 8]. It is noteworthy that this is an exact computation of the fixpoint, which hence provides the minimal model associated with the logic program and allows one to easily check for entailment of arbitrary formulae. Further, the result is fully explainable as well: for any entailment query we would have the series of inference steps that lead to the result. This differs significantly from other frameworks that do not provide an explanation for deductive results [18]; a further key difference is that the reasoning framework implemented in PyReason allows for exact and efficient polynomial time inference, while others have an intractable inference process.
**Example 2.5** (Fixpoint Operator(\(\Gamma\))): _Consider we have the following set of initial interpretations in addition to the ones specified before:_
I(takes(john,english),1) = I(takes(john,english),2) = [1,1] I(takes(mary,english),2) = I(takes(mary,english),3) = [1,1] (John takes English at t=1,2 and Mary takes English at t=2,3) I(friend(mary,phil),s) = [1,1] (Mary and Phil are friends for the entire time considered)__
_And we consider the rule set_ \(\boldsymbol{R}\) _to be made of rule 4 and 5 from above. We initialize:_
\(\forall\)_S,S' I(friend(S,S'),0) = [0,1] (all_ \(friend\) _relationships initialized as unknown) and then update:_
I(friend(mary,phil),s) = [1,1] (from initial interpretations)__
_Application of_ \(\Gamma\) _at T=0 and 1 yields no change in_ \(\boldsymbol{I}\) _as none of the rules are fired._
_At T=2, rule 4 fires with the following groundings:_
\(friend(john,mary):[1,1]\leftarrow_{\Delta t=2}takes(john,english):[1,1]\wedge takes (mary,english):[1,1]\wedge class(english):[1,1]\)
\(friend(mary,john):[1,1]\leftarrow_{\Delta t=2}takes(mary,english):[1,1]\wedge takes (john,english):[1,1]\wedge class(english):[1,1]\)
_This would result in a change in \(\boldsymbol{I}\) at T = 4, as \(\Delta t=2\) for the rule above and it is fired at T=2._
I(friend(john,mary),4) _= [1,1]_
_At T=3, as \(\boldsymbol{I}\) is still unchanged, application of \(\Gamma\) does not lead to any of the rules firing._
_At T=4, application of \(\Gamma\) with the updated interpretation leads to firing of grounded rule 5 as:_
\(friend(john,phil):[1,1]\leftarrow_{\Delta t=1}friend(john,mary):[1,1]\wedge friend (mary,phil):[1,1]\)
_And results in:_
I(friend(john, phil),5) _= [1,1]_
_The above illustrates how PyReason makes logical inferences by exact application of the fixpoint operator(\(\Gamma\)). In this example, we are able to trace how the interpretation I(friend(john, phil),t) changed over time, and which rules caused these changes. This shows that this process is completely explainable, and can be leveraged in emerging neuro symbolic applications._
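The trace of Example 2.5 can be re-enacted with a few lines of plain Python. The sketch below deliberately simplifies annotations to crisp truth (every derived fact is [1,1]) and hard-codes the two rules with their time delays; it only makes the time-shifted firing visible and is not PyReason's fixpoint operator.

```python
# Crisp re-enactment of Example 2.5: rule 4 (shared class -> friendship after 2
# steps) and rule 5 (friend-of-friend -> friendship after 1 step).
from itertools import permutations

T_MAX = 6
students = ["john", "mary", "phil"]
classes = ["english", "math"]

takes = {("john", "english"): {1, 2}, ("mary", "english"): {2, 3}}
friend = {("mary", "phil"): set(range(T_MAX + 1))}   # static initial fact

def is_friend(a, b, t):
    return t in friend.get((a, b), set()) or t in friend.get((b, a), set())

for t in range(T_MAX + 1):
    # Rule 4: takes(S,C) and takes(S',C) at t  ==>  friend(S,S') at t+2
    for s1, s2 in permutations(students, 2):
        for c in classes:
            if t in takes.get((s1, c), set()) and t in takes.get((s2, c), set()):
                friend.setdefault((s1, s2), set()).add(t + 2)
    # Rule 5: friend(S,S') and friend(S',S'') at t  ==>  friend(S,S'') at t+1
    for s1, s2, s3 in permutations(students, 3):
        if is_friend(s1, s2, t) and is_friend(s2, s3, t):
            friend.setdefault((s1, s3), set()).add(t + 1)

print(sorted(t for t in range(T_MAX + 1) if is_friend("john", "mary", t)))
# -> [4, 6]: rule 4 fires at t=2, rule 5 re-derives the fact later
print(sorted(t for t in range(T_MAX + 1) if is_friend("john", "phil", t)))
# -> [5]: rule 5 fires at t=4, matching the trace above
```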
**Constant-Predicate Type Checking Constraints.** Key to reducing the complexity and speeding up the inference process is type checking. We leverage the sparsity commonly prevalent in knowledge graphs to significantly cut down on the search space during the grounding process. We noticed that a graph will typically have nodes of different types, and that predicates are typically defined only over constants of a specific type. While initializing the interpretations, type checking takes this into account and only creates ground atoms for the subset of predicate-constant pairs which are compatible with each other. However, we note that this is an option, as in some applications such information may not be available.
**Example 2.6** (Constant-Predicate Type Checking).: _In the continuing example we see that the predicates student, gpa, promoted are only limited to constants of type student. Similarly, predicates class, difficulty are exclusive to the constants english and math. Type checking ensures that we do not consider ground atoms like student(english) or class(phil). Likewise for binary predicate takes(S,C), the first variable is always grounded with a student type constant, and the second with a class type constant. Even in this miniature example, type checking reduces the number of ground atoms under consideration from 25 to only 6 - a 76% reduction. Such gains significantly reduce complexity as size and sparsity of the graph increases._
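The counts in Example 2.6 match the groundings of the binary predicate takes(S, C): without type information every ordered pair of the five constants is a candidate (5 x 5 = 25), while with it only student-class pairs remain (3 x 2 = 6). The short sketch below simply recomputes those counts; it illustrates the idea and is not PyReason's type-checking code.

```python
# Recomputing the grounding counts from Example 2.6 for takes(S, C).
from itertools import product

constants = {"john": "student", "mary": "student", "phil": "student",
             "english": "class", "math": "class"}

untyped = [(a, b) for a, b in product(constants, repeat=2)]
typed   = [(a, b) for a, b in product(constants, repeat=2)
           if constants[a] == "student" and constants[b] == "class"]

print(len(untyped), len(typed))                                  # -> 25 6
print(f"{1 - len(typed) / len(untyped):.0%} fewer groundings")   # -> 76% fewer
```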
**Detecting and Resolving Inconsistencies.** Inconsistency can occur in the following cases:
1. For some ground atom, a new interpretation is assigned an annotation \([L^{\prime},U^{\prime}]\) that is not a subset of the current interpretation \([L,U]\) (we assume \(L\leq U\)). i.e. if either \(U<L^{\prime}\) or \(U^{\prime}<L\).
2. When an inconsistency occurs between an atom and its negation like "a" and "not a". Or between complementary predicates like "\(bachelor(X)\)" and "\(married(X)\)" which cannot hold simultaneously. e.g. Literal A has annotation \([L_{1},U_{1}]\) and Literal B is the negation of literal A with
annotation \([L_{2},U_{2}]\). The fixpoint operator attempts to assign \([L^{\prime}_{1},U^{\prime}_{1}]\) to Literal A, and \([L^{\prime}_{2},U^{\prime}_{2}]\) to Literal B. But new bounds are inconsistent, i.e. either \(L^{\prime}_{1}>1-L^{\prime}_{2}\) or \(U^{\prime}_{1}<1-U^{\prime}_{2}\).
PyReason flags all such inconsistencies arising during the execution of the fixpoint operator and reports them. Further, as the fixpoint operator provides an explainable trace, the user can see the precise cause of the inconsistency. As an additional, practical feature, PyReason includes an option to reset the annotation to \([0,1]\) for any identified inconsistency and set the atom to static for the remainder of the inference process. In this way, such inconsistencies cannot propagate further. These initial capabilities provide a solid foundation for more sophisticated consistency management techniques such as providing for local consistency or iterative relaxation of the initial logic program.
**Example 2.7** (Detecting and Resolving Inconsistencies.): _Consider we have the following prior knowledge:_
_I(takes(phil,math), 4) = [1,1]_
_I(takes(mary,math), 4) = [1,1]_
_I(friend(phil,mary), 5) = [0,0]_
_However, the following logical rule with grounding \(S\gets phil\), \(S^{\prime}\gets mary\), \(C\gets math\):_
\(friend(S,S^{\prime}):[1,1]\leftarrow_{1}takes(S,C):[1,1]\wedge takes(S^{ \prime},C):[1,1]\) _gets fired at \(t=4\)._
_resulting in:_
_I(friend(phil,mary), 5) = [1,1] But clearly this is an inconsistency as I(friend(phil,mary), 5) cannot be both \([0,0]\) and \([1,1]\) simultaneously. So, we conclude that at least one of those two interpretations must be incorrect. If there is no way to ascertain which is correct, we may resolve this logical inconsistency by setting:_
_I(friend(phil,mary), s) = [0,1] at \(t=5\)._
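Both inconsistency conditions above reduce to simple interval checks, and the resolution used in Example 2.7 is a reset to [0,1]. The sketch below encodes them; the function names and the small driver are ours, and only the logic follows the text.

```python
# Interval-based inconsistency checks and the resolution used in Example 2.7.
UNKNOWN = (0.0, 1.0)

def inconsistent_update(old, new):
    """Case 1: the new bound [L', U'] is disjoint from the current bound [L, U]."""
    (l, u), (l2, u2) = old, new
    return u < l2 or u2 < l

def inconsistent_negation(bound_a, bound_not_a):
    """Case 2: an atom and its negation receive incompatible bounds."""
    (l1, u1), (l2, u2) = bound_a, bound_not_a
    return l1 > 1.0 - l2 or u1 < 1.0 - u2

def resolve(atom, interpretations):
    """Reset the atom to complete uncertainty and keep it static afterwards."""
    interpretations[atom] = {"bound": UNKNOWN, "static": True}

interpretations = {"friend(phil,mary)@5": {"bound": (0.0, 0.0), "static": False}}
proposed = (1.0, 1.0)   # produced by the rule fired at t=4 in Example 2.7
if inconsistent_update(interpretations["friend(phil,mary)@5"]["bound"], proposed):
    resolve("friend(phil,mary)@5", interpretations)

print(interpretations["friend(phil,mary)@5"])  # -> {'bound': (0.0, 1.0), 'static': True}
```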
## 3 Implementation
We have endeavored to create a modern Python-based framework to support scalable yet correct reasoning. We allow graphical input via the convenient _GraphML_ format, which is commonly used in knowledge graph architectures. The Python library NetworkX is used to load and interact with the graph data. We are currently in the process of directly supporting Neo4j. The initial conditions and rules are entered in YAML format and we use memory-efficient implementation techniques to correctly capture semantic structures. We use the Numba open-source JIT compiler to translate many key operations into fast, optimized machine code while allowing the user to interact with Python and the aforementioned front-ends. Our implementation can support CPU parallelism, as evidenced by our experiments run on multi-CPU machines.
Our software stores interpretations in a nested dictionary. For computational efficiency and ease of use, our software allows specification of a range of time-points \(T=t_{1},t_{2},\ldots\) instead of a single time-point \(t\), for which an interpretation \(I\) remains valid. To reduce memory requirements, only one set of interpretations (the current one) is stored at any point in time. However, past interpretations can be obtained using _rule traces_, which retain the change history for each
interpretation and the corresponding grounded logical rules that caused each change. _Rule traces_ make our software completely explainable, as every inference can be traced back to the cascade of rules that led to it.
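As a rough picture of the input path described above, the following sketch round-trips a tiny graph through GraphML with NetworkX and seeds a nested dictionary of interpretations from the node and edge attributes. The file name and attribute values are made up, and treating every attribute as a fully true fact is a simplification of PyReason's richer initial-interpretation format.

```python
# Illustrative input pipeline: GraphML -> initial interpretations (nested dict).
import networkx as nx

# Build a tiny stand-in for a real input file (e.g., a buyer-supplier network).
g = nx.DiGraph()
g.add_node("acme", country="USA")
g.add_node("bolt_co", country="Taiwan")
g.add_edge("bolt_co", "acme", supplies="1")
nx.write_graphml(g, "toy_supply_chain.graphml")   # hypothetical file name

G = nx.read_graphml("toy_supply_chain.graphml")

# interpretations[t][component][attribute] = (lower, upper); every attribute
# present in the file is treated here as a fully true fact.
interpretations = {0: {}}
for node, attrs in G.nodes(data=True):
    interpretations[0][node] = {attr: (1.0, 1.0) for attr in attrs}
for u, v, attrs in G.edges(data=True):
    interpretations[0][(u, v)] = {attr: (1.0, 1.0) for attr in attrs}

print(interpretations[0])
```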
MANCaLog [9] showed the use of the fixpoint operator for both canonical and non-canonical models. By recomputing interpretations at every time step, we not only require significantly less memory but also support both the canonical and the non-canonical cases. Due to this design, the increase in computation time is observed to be minimal.
Furthermore, we make significant advances on [33] by supporting static predicates and by adding in-built capabilities for non-graph reasoning and type checking, as detailed in Section 2.
Our implementation can be found online as specified in section 1 and detailed pseudo-code can be found in the supplemental information.
## 4 Experiments
### Honda Buyer-Supplier Dataset
We conduct our experiment on a Honda Buyer-Supplier network [21]. The dataset (network) contains 10,893 companies (nodes) and 47,247 buyer-supplier relationships between them (edges).
We design a use case where we assume that the operations of all companies from a particular country are disrupted, and observe the effects that this may have on companies across the world. We feel this is akin to the supply chain issues faced worldwide during the COVID-19 pandemic. For our tests, we use the following logical rule, which in practice would either be learned or come from an expert.
\(disrupted(Buyer):[1,1]\leftarrow_{\Delta t=1}\forall_{k}supplies(Sup_{k},Buyer): [1,1],\exists_{k/2}disrupted(Sup_{k}):[1,1]\)
It states that a company is disrupted at a particular timestep if at least 50% of its suppliers are totally disrupted in the previous timestep. We conduct this experiment for three different countries (USA, Taiwan, and Australia), which account for widely different proportions of the companies in the dataset. We do not fix the number of inference steps; instead, we let the diffusion process run until it converges (convergence is marked in bold in the table). The results are shown in Table 1.
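As a flavour of this diffusion, the sketch below runs the "at least 50% of suppliers disrupted" rule on a small hypothetical buyer-supplier graph with NetworkX. The company names are made up and the loop is a plain-Python stand-in for PyReason's rule application, not the code used for the experiment.

```python
# Illustrative sketch: the ">= 50% of suppliers disrupted" rule on a toy graph.
import networkx as nx

G = nx.DiGraph()  # edge (s, b) means: company s supplies company b
G.add_edges_from([("s1", "b1"), ("s2", "b1"), ("s3", "b2"), ("b1", "b2")])

disrupted = {"s1", "s2"}          # initially disrupted companies (e.g., one country)
for t in range(1, 40):            # iterate until convergence
    new = set(disrupted)
    for company in G.nodes:
        suppliers = list(G.predecessors(company))
        if suppliers and sum(s in disrupted for s in suppliers) >= 0.5 * len(suppliers):
            new.add(company)
    if new == disrupted:
        break
    disrupted = new

print(sorted(disrupted))  # s1 and s2 knock out b1 at t=1, which then disrupts b2 at t=2
```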
To test whether our approach scales, we use two inference rules which jointly state that a company is disrupted at a particular timestep if any of its suppliers are completely disrupted in the previous timestep, or if at least 50% of its suppliers are disrupted to at least 50% of their capacity. We conduct this experiment for different graph sizes and for different numbers of timesteps to
| Based | Count | t=0 | t=1 | t=2 | t=3 | t=4 | ... | t=38 | Initial (%) | Final (%) | **Change (%)** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| USA | 1599 | 1599 | 1965 | 2057 | 2203 | 2313 | ... | **3336** | 14.68 | 30.75 | **16.07** |
| Taiwan | 603 | 603 | 644 | **647** | 647 | 647 | ... | 647 | 5.54 | 5.94 | **0.40** |
| Australia | 128 | 128 | **131** | 131 | 131 | 131 | ... | 131 | 1.18 | 1.21 | **0.03** |

Table 1: Honda network: how disruption of a country's industry, caused by a pandemic, may spread worldwide. The first two columns give the country of the initially disrupted companies and their count, the t columns give the number of companies disrupted across the world at time t, and the last three columns give the percentage of companies disrupted (converged values in bold).
show the scaling capability of our software in Table 2.
The results show that both runtime and memory remain almost constant over large ranges, and then scale sub-linearly with increase in network size.
### Pokec Social Media dataset
Pokec is a popular Slovakian social network, and this dataset [22] contains personal information such as gender, age, and pets (attributes) of 1.6 million people (nodes), and 30.6 million connections between them (edges).
We take inspiration from the advertising community to design our use case. We consider a small proportion of the population, who have pet(s), to be customers of a pet food company. The company, using Pokec data, must identify relevant advertising targets among the population. A realistic strategy can be captured by two logical rules:
1. \(\forall X,Y\ relevance(X):[0.6,1]\leftarrow_{\Delta t=1}relevance(Y):[1,1]\wedge friend(X,Y):[1,1]\) A friend of a relevant target or of an existing customer (who is always relevant) is at least 60% relevant.
2. \(\forall X,Y\ relevance(X):[1,1]\leftarrow_{\Delta t=1}relevance(Y):[1,1]\wedge friend(X,Y):[1,1]\wedge hasPet(X,P):[1,1]\wedge hasPet(Y,P):[1,1]\) A friend of a relevant target is totally relevant if they have pet(s) of the same kind (dog, cat, ...).
The diffusion process converged after 8 timesteps, took 42 minutes to complete and used 58.36 GB of memory - which further showcases the scalability of our framework. The results are shown in Table 3.
The process of inference is completely explainable, and a user may use _rule traces_, an optional output of PyReason, to identify the logical rules that led to the change in each interpretation. An example of a rule trace from the previous experiment is presented in Table 4.
All experiments were performed on an AWS EC2 container with 96 vCPUs (48 cores) and 384GB memory.
| Nodes (N) | Edges (E) | Total attributes | Density | Timesteps | Runtime (in s) | Memory (in MB) |
| --- | --- | --- | --- | --- | --- | --- |
| 1000 | 410 | 5012 | \(4.10\times 10^{-4}\) | 2 | 0.36 | 4.9 |
| | | | | 5 | 0.42 | 1.8 |
| | | | | 15 | 0.34 | 0.1 |
| 2000 | 1640 | 13269 | \(4.10\times 10^{-4}\) | 2 | 0.43 | 1.2 |
| | | | | 5 | 0.55 | 2.1 |
| | | | | 15 | 0.81 | 8.2 |
| 5000 | 10244 | 57852 | \(4.10\times 10^{-4}\) | 2 | 1.54 | 17.2 |
| | | | | 5 | 1.84 | 16.0 |
| | | | | 15 | 3.38 | 54.6 |
| 10000 | 41034 | 197752 | \(4.10\times 10^{-4}\) | 2 | 4.83 | 80.3 |
| | | | | 5 | 6.29 | 60.3 |
| | | | | 15 | 12.34 | 210.8 |

Table 2: Scalability of our framework
## 5 Related work
In Section 1, we discussed how PyReason extends early modern logic programming languages such as Prolog [19], Epilog [20] and Datalog [34] by supporting annotations. Recent neuro symbolic frameworks show great promise in the ability to learn or modify logic programs to align with historical data and improve robustness to noise. Many such frameworks rely on an underlying differentiable, fuzzy, first order logic. For example, logical tensor networks [1] use differentiable versions of fuzzy operators to combine ground and non-ground atomic propositions, while logical neural networks [2] associate intervals of reals with atomic propositions and use special parameterized operators. Meanwhile, in induction approaches such as differentiable ILP [3], fuzzy logic programs (using the product t-norm) are learned from data based on template rule structures in a manner that supports recursion and multi-step inference. In [17], Logical Neural Networks were used to interpret learned rules in a precise manner. Here also, gradient descent was used to train the parameters of the network. In the last two years, two paradigms have emerged with much popularity in the neuro symbolic literature. Logical Tensor Networks (LTN) [1] extend neural architectures through fuzzy, real-valued logic. Logical Neural Networks (LNN) [2] provide a neuro symbolic framework with parameterized operators that supports open world reasoning in the logic. As stated earlier, both can be viewed as a subset of annotated logic. Hence, PyReason can be used to conduct inference on the logic for both frameworks,
| t | Old Bound | New Bound | Rule fired | Clause-1 | Clause-2 | Clause-3 | Clause-4 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | [0.0, 1.0] | [0.6, 1.0] | rule_1 | ['354455'] | [['354365', '354455']] | | |
| 2 | [0.6, 1.0] | [1.0, 1.0] | rule_2 | ['354455', '354455'] | [['718503', '544']] | [['718503', '54365', '718503']] | 'cat')] 'cat')] |

Table 4: Rule trace for a single node for the label relevance. Application of rule 1 above caused the first change from \([0,1]\) to \([0.6,1]\), followed by an update to \([1,1]\) due to the firing of rule 2. The lists of node and edge IDs that were used to ground the rule clauses are also provided.
| Population size | Current Customers | Timesteps | Fully relevant | Partially relevant |
| --- | --- | --- | --- | --- |
| 1,632,803 | 2,308 | 0 | 2,308 | 0 |
| | | 1 | 2,596 | 39,836 |
| | | 2 | 2,657 | 47,405 |
| | | 3 | 2,679 | 49,174 |
| | | 4 | 2,690 | 50,046 |
| | | 5 | 2,692 | 50,412 |
| | | 6 | 2,693 | 50,455 |
| | | 7, 8, ... | 2,693 | 50,608 |

Table 3: Pokec social media: how brands may use consumer data to identify prospective customers. The last two columns count the identified advertising targets (fully and partially relevant).
in addition to providing key capabilities such as graph-based and temporal reasoning, which currently are not present in the logics of those frameworks.
Both in the forward pass of various neuro symbolic frameworks [35, 2, 1] and for subsequent problems (e.g., entailment, abductive inference, planning, etc.), a deduction process is required. PyReason is designed to provide precisely this capability. Generalized annotated programs [6] have been shown to capture a wide variety of real-valued, temporal, and fuzzy logics, as they associate logical atoms with elements of a lattice structure as opposed to scalar values. As a result they can capture all the aforementioned logics, while retaining polynomial-time deduction due to the monotonicity of the lattice. The use of a lattice structure allows us to associate logical constructs with intervals, thus enabling open world reasoning. In our recent work [7], we provided extensions to [6] that allow for a lower lattice structure for annotations. This enables the framework to capture paradigms such as LNN [2] and MANCaLog [9] for graph-based reasoning. However, that work only established analogs of the earlier theorems for the lower lattice case and did not provide an implementation or experimental results.
By supporting generalized annotated logic and its various extensions, PyReason enables system design that is independent of the learning process. As a result, once a neuro symbolic learning process creates or modifies a logic program based on data, PyReason can be used to efficiently answer deductive queries (including entailment and consistency queries) as well as support more sophisticated inference such as abductive inference or planning.
Today knowledge graphs are crucial in representing data for reasoning and analysis. Recent research on creation of knowledge graphs [36, 37] proposes methods to automatically convert conceptual models into knowledge graphs in GraphML format for enterprise architecture and a wide range of applications. PyReason, which supports the graphml format, could be an effective tool to reason about knowledge graphs obtained from one of these platforms.
## 6 Conclusion and Future Work
In this paper, we presented PyReason: an explainable inference software package supporting annotated, open world, real-valued, graph-based, and temporal logics. Our modern implementation extends the established generalized annotated logic framework to support scalable and efficient reasoning over large knowledge graphs and diffusion models. We are currently working on a range of extensions to this work. This includes adding more temporal logic operators for specification checking, learning rules from data through induction, and using the inference process to create new knowledge in non-static graphs (e.g., adding nodes and edges). We will also look to explore how PyReason can be used in conjunction with LTN [1] and LNN [2]. In supporting frameworks such as these, we will look to add capabilities for symbol grounding [38], leveraging the results of the training process from frameworks such as LTN. Finally, we also plan on extending PyReason to act as a simulator for reinforcement learning based agents.
## Acknowledgments
The authors are supported by internal funding from the Fulton Schools of Engineering and portions of this work is supported by U.S. Army Small Business Technology Transfer Program
Office or the Army Research Office under Contract No.W911NF-22-P-0066.
|
2308.09481 | Types, equations, dimensions and the Pi theorem | The languages of mathematical physics and modelling are endowed with a rich
"grammar of dimensions" that common abstractions of programming languages fail
to represent. We propose a dependently typed domain-specific language (embedded
in Idris) that captures this grammar. We apply it to explain basic notions of
dimensional analysis and Buckingham's Pi theorem. We hope that the language
makes mathematical physics more accessible to computer scientists and
functional programming more palatable to modelers and physicists. | Nicola Botta, Patrik Jansson, Guilherme Horta Alvares Da Silva | 2023-08-16T14:33:18Z | http://arxiv.org/abs/2308.09481v2 | # Types, equations, dimensions and the Pi theorem
###### Abstract
The languages of mathematical physics and modelling are endowed with a rich "grammar of dimensions" that common abstractions of programming languages fail to represent. We propose a dependently typed domain-specific language (embedded in Idris) that captures this grammar. We apply it to explain basic notions of dimensional analysis and Buckingham's Pi theorem. We hope that the language makes mathematical physics more accessible to computer scientists and functional programming more palatable to modelers and physicists.
NICOLA BOTTA
Potsdam Institute for Climate Impact Research, Potsdam, Germany,
Chalmers University of Technology, Goteborg, Sweden. (e-mail: [email protected])
PATRIK JANSSON
Chalmers University of Technology and University of Gothenburg, Goteborg, Sweden. (e-mail: [email protected])
GUILHERME HORTA ALVARES DA SILVA
Chalmers University of Technology, Goteborg, Sweden. (e-mail: [email protected])
## 1 Introduction
**Motivation.** The main motivation for this work comes from a failure. For more than a decade, two of the authors have been advocating mathematical specifications, type-driven analysis and functional programming (FP) as methodologies to better understand, specify and solve problems in climate impact research, climate policy advice and, by and large, global systems science (Botta et al., 2011; Ionescu and Jansson, 2013; Botta et al., 2017, 2018; Ionescu et al., 2018; Botta et al., 2023).
Alas, after ten years of proselytism and intense collaborations, we have hardly been able to convince any climate modeler of the usefulness of FP, let alone convert them to "thinking functionally" with Haskell (Bird, 2014), Agda (Norell, 2007), Idris (Brady, 2017), or Coq (The Coq Development Team, 2021). Have we just done a bad job or is this failure a symptom of a deeper problem?
**There is no need for FP in mathematical physics, is there?** Physicists and modelers are well trained in exploiting established numerical libraries (The Numerical Algorithms Group (NAG), 1970 - 2023; Galassi et al., 1996 - 2023; The Python community, 2005 - 2023) and frameworks (OpenCFD, 2004 - 2023; Zhang et al., 2019) for approximating solutions of (ordinary, partial, stochastic) differential equations efficiently, implementing
large computer-based models in FORTRAN, C, C++, Java, Python, or Julia and testing model predictions against analytical solutions or observations.
In engineering and in many physical sciences, this methodology has led to reliable computations and to computer-based modelling almost fully replacing physical modelling: for many applications, running computer programs is more flexible and much cheaper than running wind tunnels or full-scale experiments.
But there are important research areas in which empirical validations are beyond reach and the predictive capability of computer-based models is poorly known and needs to be questioned. In climate science but also in plasma physics, for example, it is simply impossible (or just too dangerous or too expensive) to test the correctness of computations empirically. It is not possible to study the effectiveness of a policy designed to reduce greenhouse gas (GHG) emissions without implementing that policy or, as argued in (Lucarini, 2004), "the usual Galilean scientific validation criteria do not apply to climate science". In much the same way, plasma physicists and engineers cannot afford to damage hundreds of experimental tokamak fusion reactors to assess and validate optimal control options for such devices (Hoppe et al., 2021; Pusztai et al., 2023).
In these domains, scientists need methodologies that bring confidence that their computations are correct well before such computations can actually be applied to real world systems. Formal specification is the key both for testing programs and for showing the _absence_ of errors in computations (Ionescu and Jansson, 2013), and dependently typed FP languages have reached enough expressive power to support formulating very precise specifications. And yet climate modelers and physicists have, by and large, stayed away from FP languages. Why so?
**Educational gaps.** It is probably fair to say that most physicists and modelers have never heard about mathematical program specifications, not to mention FP and dependently typed languages. In much the same way, most computer scientists have hardly been exposed to the language of mathematical physics, say, for example, that of Courant and Hilbert (1989); Arnold (1989); Barenblatt et al. (1996); Kuznetsov (1998).
Originally very close to elementary set theory and calculus, this language has evolved over the last decades and fragmented into a multitude of dialects or DSLs, perhaps through the pervasive usage of imperative programming and computer-based modelling.
Common traits of these dialects are the limited usage of currying and higher order functions, the overloading of the equality sign, the lack of referential transparency and explicit type information (although mnemonic rules are often introduced for encoding such information like in \(x_{[0,T]}\) instead of \(x\,:[0,T]\to\mathbb{R}\)) and the usage of parentheses to denote both function application and function composition as in \(\dot{x}=f(x)\) instead of \(\forall t,\ \dot{x}(t)=f(x(t))\) or, in point-free notation, \(\dot{x}=f\circ x\).
These DSLs represent a major difficulty for computer scientists and systematic efforts have been undertaken by one of the authors to make them more accessible to computer science students (Ionescu and Jansson, 2016; Jansson et al., 2022; Jansson et al., 2022) and improve the dialogue between the computational sciences and the physical sciences. Our paper is also a contribution to such dialogue.
We argue that the DSLs of mathematical physics are endowed with a rich but hidden "grammar of dimensions" that (functional) programming languages have failed to exploit or
even recognize. This grammar informs important notions of consistency and of correctness which, in turn, are the bread and butter of program analysis, testing and derivation.
From this perspective, it is not very surprising that physicists and modelers have hardly been interested in FP. Standard FP abstractions emphasize type annotations that do not matter to physicists and modelers (for the climate scientist, all functions are, bluntly speaking, of type \(\mathbb{R}^{m}\to\mathbb{R}^{n}\) for some natural numbers \(m\) and \(n\)), while at the same time failing to highlight differences that do matter like the one between a _length_ and a _time_. We hope that this work will also help make FP a bit more palatable to physicists and modelers.
### Outline
In Section 2 we briefly revise the role of equations, laws, types and _dimensions_ in computer science and in mathematical physics and modelling.
In Section 3 we discuss the ideas of dimension, _physical quantity_, and _units of measurement_ informally. This is mainly meant to guide the computer scientist out of her comfort zone but also to answer a question that should be of interest also to readers who are familiar with modelling: "what does it mean for a parameter or for a variable to have a dimension?"
Section 4 is a short account of similarity theory (Buckingham, 1914; Rayleigh, 1915; Bridgman, 1922) and of Buckingham's Pi theorem, mainly following Section 1 of (Barenblatt et al., 1996). This is going to be new ground for most computer scientists but, again, we hope to also provide a new angle to modelers who are young and have therefore mainly been imprinted with computer-based modelling.
In Section 5 we formalize the notions of dimension function, dimensional judgment (in analogy to type judgment) and that of physical quantity. We also propose an encoding of Buckingham's Pi theorem (an eminently non-implementable result) as a propositional type.
In Section 6 we introduce a concrete DSL based on the formalization in Section 5 and discuss possible generalization and more desiderata for this DSL. We also discuss "dimensional functions" and the possibility of exploiting the type system for implementing programs that are dimensionally consistent. Section 7 wraps up and discusses links between dimensional analysis and more general relativity principles.
### Related work
Dimensional analysis (DA) (Bridgman, 1922; Barenblatt et al., 1996; Gibbings, 2011), see also (Jonsson, 2014), is closely connected with the theory of physical similarity (under which conditions is it possible to translate observations made on a scaled model of a physical system to the system itself, see Section 4) and thus with fundamental principles of invariance and relativity (of units of measurement), see (Arnold, 1989).
The theory of physical similarity was formulated well before the advent of digital computers and programming languages (Buckingham, 1914, 1915; Rayleigh, 1915; Bridgman, 1922) and its formalizations have been rare (Quade, 1961; Whitney, 1968a,b). With the advent of digital computers and massive numerical simulations, physical modeling has been almost completely replaced by computer-based modeling, mainly for economic reasons, and the theory of physical similarity and DA are no longer an integral component of the education of physicists, modelers and data analysts1.
Footnote 1: But Bridgman’s work has been republished in 2007 by Kessinger Publishing and in 2018 by Forgotten Books.
The notions of invariance (with respect to a group of transformations) and relativity (e.g., of units of measurement) in physics are similar to the notions of parametricity
and polymorphism in computer science and thus it is perhaps not surprising that, as programming languages have gained more and more expressive power, mathematicians and computer scientists have started "rediscovering" DA, see (Kennedy, 1997; Atkey et al., 2015; Newman, 2011). More recently the idea that dependent types can be applied to enforce the dimensional consistency of expressions involving scalar physical quantities has been generalized to expressions with vectors, tensors and other derived quantities (McBride and Nordvall-Forsberg, 2022).
Dependent types can certainly be applied to enforce the dimensional consistency of expressions, and libraries for annotating values of standard types with dimensional information have been available in most programming languages (Eisenberg and Muranushi, 2022; Buckwalter, 2006 - 2022; Keller et al., 2023; Pint Developers, 2012 - 2023; The Astropy Developers, 2011 - 2023; Schabel and Watanabe, 2003 - 2010) for quite some time (House, 1983).
But dependently typed languages can do more. The type checker of Idris, for example, can effectively assist the implementation of verified programs by interactively resolving the types of holes, suggesting tactics and recommending type consistent implementations. Similar support is available in other languages based on intensional type theory.
A DSL built on top of a dependently typed language that supports expressing Buckingham's Pi theorem should in principle be able to assist the interactive implementation of dimensionally consistent programs. It should support the programmer in formulating the question of how a function that computes a force, say \(F\), may depend on arguments \(m\) and \(a\) representing masses and accelerations, leverage the type system of the host language and automatically derive \(F\,m\,a=\alpha\,*\,m\,*\,a\). The work presented here is a first step in this direction. We are not yet there and in Section 7 we discuss which steps are left.
## 2 Equations, physical laws and types
In the preface to the second edition of "Programming from specifications" (Morgan, 1994), Carroll Morgan starts with the observation that, in mathematics, \(x^{2}=1\) is an equation and that \(x=1\) and \(x=-1\) are equations too. He then goes on to point out that, because of the relationships between these three equations (the implications \(x=1\Rightarrow x^{2}=1\) and \(x=-1\Rightarrow x^{2}=1\)) and because \(x=1\) and \(x=-1\) define the value of \(x\) "without further calculation", these two equations are called _solutions_ of \(x^{2}=1\).
Thus equations in mathematics sometimes represent _problems_. In dependently typed languages, these problems can be formulated explicitly and the resulting expressions can be checked for consistency. For example, in Idris (Brady, 2017) one can specify the problem of finding a real number \(x\) whose square is \(1\) as
\[\begin{array}{lcl}x&:&\mathbb{R}\\ xSpec&:&x\,\uparrow\,2=1\end{array}\]
where
\[\begin{array}{lcl}(\uparrow)&:&\mathbb{R}\;\rightarrow\;\mathbb{N}\;\rightarrow\;\mathbb{R}\\ x\,\uparrow\,Z&=&1\\ x\,\uparrow\,(S\,n)&=&x*\,(x\,\uparrow\,n)\end{array}\]
It is worth pointing out that 1) it is a context that is _not_ immediately deducible from \(x^{2}=1\) that determines the meaning of the equality sign in this equation and 2) that it is the type of \(x\) and that of the "to the power of 2" function that make such context clear. For example, in mathematics, \(x^{n}\) can also denote the \(n\)-th iteration of an endofunction, this time with:
\((\uparrow):\{A:\,\text{\em Type}\}\to(A\to A)\to\mathbb{N}\to(A\to A)\)
\(x\uparrow Z\qquad=id\)
\(x\uparrow(S\,n)=x\circ(x\uparrow n)\)
In this context and with 1 denoting the identity function (\(id\)), the equation \(x^{2}=1\) represents the problem of finding involutions, and the equality in \(x^{2}=1\) is extensional equality. For example
\(x\qquad\;:\,\text{\em Bool}\,\to\,\text{\em Bool}\)

\(xSpec\,:\,x\uparrow 2\,\doteq\,1\)
with the identity and the negation functions as solutions. In this _specification_ and throughout this paper we use the definition of \((\,\doteq\,)\) from (Botta et al., 2021)
\((\,\doteq\,)\,:\{A,B:\,\text{\em Type}\}\to(A\to B)\to(A\to B)\to\text{\em Type}\)
\((\,\doteq\,)\,\{A\}\,f\,g=(x\,:\,A)\,\to f\,x=g\,x\)
Because of the equivalence between logical propositions and types (Wadler, 2015), the type \(f\,\doteq\,g\) means \(\forall x,f\,x=g\,x\) and values of this type are proofs of this equality.
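For instance, here is a minimal sketch of how such a specification can be discharged for the negation function; the names \(xSol\) and \(xSolSpec\) are ours, and we write \(id\) for the 1 on the right hand side:

xSol : Bool → Bool
xSol = not

xSolSpec : xSol ↑ 2 ≐ id
xSolSpec True  = Refl
xSolSpec False = Refl

Both clauses type check by evaluation: \(not\,(not\,(id\;True))\) reduces to \(True\) and \(not\,(not\,(id\;False))\) reduces to \(False\).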
### Generic differential equations
In mathematical physics but also in the social sciences and in modelling it is common to specify problems implicitly in terms of equations.
Typically, such specifications are _generic_ and come in the form of systems of differential equations. In this context, generic means that the problem equations are given in terms of functions which are not defined explicitly2. The focus is on the semantics and the syntax can be confusing. For example, the ordinary differential Equation (4) at page 18 of (Kuznetsov, 1998)
Footnote 2: Thus, one has a family of systems indexed by a function parameter. The idea is then to study how properties of these systems, for example that of having stationary solutions, depend on properties of the parameter.
\[\dot{x}=f(x) \tag{1}\]
is said to define a continuous-time _dynamical system_\((\mathcal{T},X,\varphi)\). In this context, \(\mathcal{T}\) is the _time_ set (a real interval), \(X\) is the _state space_ of the system, \(x\) is a function of type \(\mathcal{T}\to X\), \(\dot{x}\) (also of type \(\mathcal{T}\to X\)) is the first derivative of \(x\), and \(f\) is a function of type \(X\to X\) smooth enough to grant existence and uniqueness of solutions. Thus, Eq. (1) contains a type error. The twist here is that the equation is just an abbreviation for the specification
\(x\qquad:\,\mathcal{T}\to X\)
\(xSpec\,:\,D\,x\,\doteq\,f\circ x\)
where we adopt the notation of (Jansson and Ionescu, 2014 - 2016) and use \(D\,x\) to denote the derivative of \(x\). When not otherwise stated, \(D\,x\) has the type of \(x\). When quoting textbooks
verbatim, we also denote \(D\,x\) by \(\dot{x}\) (and \(D\,(D\,x)\) by \(\ddot{x}\)), \(dx/dt\) or similar. The third element of the dynamical system associated to Eq. (1), \(\varphi\), is a function of type \(\mathcal{T}\,\to\,X\,\to\,X\). The idea is that \(\varphi\,t\,x_{0}\) is the value of _the_ solution of Eq. (1) for initial condition \(x_{0}\) at time \(t\). Thus \(\varphi\) does depend on \(f\) although this is not immediately visible from its type.
In mathematical physics, it is very common to use the same notation for function application and function composition as in Eq. (1). For example, Newton's principle of determinacy3 is formalized in Equation (1) of (Arnold, 1989) as
Footnote 3: Newton’s principle of determinacy maintains that the initial state of a mechanical system (the positions and the velocities of its points at an initial time) uniquely determines its motion, see (Arnold, 1989), page 4.
\[\ddot{\mathbf{x}}=F(\mathbf{x},\dot{\mathbf{x}},t) \tag{2}\]
where \(\mathbf{x},\,\dot{\mathbf{x}},\,\ddot{\mathbf{x}}:\mathcal{T}\to\mathbb{R}^{N}\) and \(F:(\mathbb{R}^{N},\,\mathbb{R}^{N},\,\mathcal{T})\to\mathbb{R}^{N}\). Again, the equation is to be understood as an abbreviation for the composition between \(F\) and a higher order function.
Keeping in mind that \(f(x)\) often just means \(f\circ x\) and with a little bit of knowledge of the specific domain, it is often not difficult to understand the problem that equations represent. For example, in bifurcation theory, a slightly more general form of Eq. (1)
\[\dot{x}=f(x,\,p) \tag{3}\]
with parameter \(p\,:\,P\) and \(f\,:\,(X,P)\,\to\,X\) (again, with some abuse of notation) is often put forward to discuss three different but closely related problems:
* The problem of predicting how the system evolves in time from an initial condition \(x_{0}\,:\,X\) and for a given \(p\,:\,P\).
* The problem of finding _stationary_ points, that is, values of \(x\,:\,X\) such that \(f(x,\,p)=0\) for a given \(p\,:\,P\), and of studying their _stability_ (see the sketch after this list).
* The problem of finding _critical_ parameters, that is, values of \(p\,:\,P\) at which the set of stationary points \(\{x\,|\,x:X,f(x,\,p)=0\}\) (or a more general _invariant_ set associated with \(f\)) exhibits structural changes4. Footnote 4: In the context of climate research, such critical values are often called “tipping points”.
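As a minimal sketch of how the second of these problems can be phrased as a specification in the style of \(xSpec\) above (the names \(xStat\) and \(xStatSpec\) are ours, \(X\) and \(P\) are left abstract, and we assume that \(X\) comes with a zero element, as it does for \(X=\mathbb{R}^{n}\)):

p         : P
xStat     : X
xStatSpec : f (xStat, p) = 0

By propositions-as-types, any definition of \(xStatSpec\) is a proof that \(xStat\) is a stationary point of Eq. (3) for the given \(p\).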
Mathematical models of physical systems often involve _functional_ equations and systems of partial differential equations. Understanding the problems associated with these equations from a FP perspective requires some more domain-specific expertise. But even these more advanced problems can often be understood by applying "type-driven" analysis, see for example the discussion of the Lagrangian in Chapter 3 "Types in Mathematics" of (Jansson et al., 2022).
### Physical laws, specific models
Perhaps surprisingly, understanding basic physical principles but also mathematical models of specific systems in terms of well typed expressions is often tricky. Consider, for example, Newton's second law as often stated in elementary textbooks
\[F=ma \tag{4}\]
or the relationship between pressure, density and temperature in an ideal gas
\[p=\rho RT \tag{5}\]
In these equations \(F\) denotes a _force_, \(m\) a _mass_, \(a\) an _acceleration_, \(p\) a _pressure_, \(\rho\) a _density_, \(T\) a _temperature_ and \(R\) a gas-specific constant. But what are the types of these quantities? What kind of equalities do these types imply?
In climate science, "conceptual" models of specific components of the climate system often consist of simple, low-dimensional systems of ordinary differential equations. For example, Stommel's seminal 1961 paper starts by describing a simple system consisting of a vessel of water with "temperature \(T\) and salinity \(S\) (in general variable in time) separated by porous walls from an outside vessel whose temperature \(T_{e}\) and salinity \(S_{e}\) are maintained at constant values", see Fig. 1 at page 1 of Stommel (1961).
The evolution of the temperature and of the salinity in the vessel are then modeled by two uncoupled linear differential equations:
\[\frac{dT}{dt}=c(T_{e}-T)\quad\text{and}\quad\frac{dS}{dt}=d(S_{e}-S) \tag{6}\]
In this context, \(T\) and \(S\) represent functions of type \(\mathbb{R}\,\rightarrow\,\mathbb{R}\) and \(T_{e}\), \(S_{e}\), the "temperature transfer coefficient" \(c\), and the "salinity transfer coefficient" \(d\) are real numbers. We can formulate the problem of computing solutions of Eq. (6) through the specification:
\[\mathit{TSpec}\,:\,D\,T\,\doteq\,\lambda t\,\Rightarrow\,c*(T_{e}-T\,t);\qquad \mathit{SSpec}\,:\,D\,S\,\doteq\,\lambda t\,\Rightarrow\,d*(S_{e}-S\,t)\]
The \(\lambda\)-expression in \(\mathit{TSpec}\) denotes the function that maps \(t\) to \(c*(T_{e}-T\,t)\). Thus (again, because of the equivalence between types and logical propositions) any (total) _definition_ of \(\mathit{TSpec}\) is (equivalent to) a proof that, \(\forall t,\,\,dT(t)/dt=c(T_{e}-T(t))\) and its _declaration_ specifies the task5 of providing one such proof.
Footnote 5: _Aufgabe_, Kolmogoroff (1932).
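As a quick sanity check (ours, not Stommel's; we write \(T_{0}\) for the initial value \(T\,0\)), the function defined by \(T\,t = T_{e} + (T_{0} - T_{e})\,e^{-c\,t}\) fulfills \(\mathit{TSpec}\):

\[D\,T\,t \;=\; -c\,(T_{0}-T_{e})\,e^{-c\,t} \;=\; c\,\bigl(T_{e} - (T_{e} + (T_{0}-T_{e})\,e^{-c\,t})\bigr) \;=\; c\,(T_{e} - T\,t)\]

and similarly for \(\mathit{SSpec}\); up to the rescaling introduced next, these are the solutions (9) quoted below from Stommel's paper.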
The next step at the end of page 1 of Stommel's paper is to make Eq. (6) "non-dimensional" by introducing
\[\tau=c\,\,t,\quad\delta=\frac{d}{c},\quad y=T/T_{e},\quad x=S/S_{e} \tag{7}\]
This yields
\[\frac{dy}{d\tau}=1-y\quad\text{and}\quad\frac{dx}{d\tau}=\delta*(1-x) \tag{8}\]
at the top of page 2. From there, Stommel goes on to discuss how a _density_ of the vessel (a function of its temperature and salinity and, thus, of \(x\) and \(y\)) evolves in time as the solutions of Eq. (8) for arbitrary initial conditions \(y_{0}\) and \(x_{0}\),
\[y(\tau)=1+(y_{0}-1)e^{-\tau}\quad\text{and}\quad x(\tau)=1+(x_{0}-1)e^{-\delta\tau} \tag{9}\]
approach 1 as \(\tau\) increases. Although Stommel's discussion is in many ways interesting, we do not need to be concerned with it here. What we need to understand, however, are the types of \(\tau\), \(\delta\), \(x\) and \(y\) and how Eq. (8) can be derived from Eq. (6) given Eq. (7). The first two equations of Eq. (7) are definitions of \(\tau\) and \(\delta\):
\[\begin{array}{ll}\tau\,:\,\mathbb{R}\to\mathbb{R};&\delta\,:\,\mathbb{R}\\ \tau\,t\,=c\ast t;&\delta=d\,/\,c\end{array}\]
However, the last two equations are not _explicit_ definitions of \(y\) and \(x\). Instead, they are _implicit_ definitions, corresponding to the specifications
\[\begin{array}{ll}y\,:\,\mathbb{R}\to\mathbb{R};&x\,:\,\mathbb{R}\to\mathbb{R}\\ ySpec\,:\,y\circ\tau\,\doteq\,\lambda t\,\Rightarrow\,T\,t\,/\,T_{e};&xSpec\,:\,x\circ\tau\,\doteq\,\lambda t\,\Rightarrow\,S\,t\,/\,S_{e}\end{array}\]
and it is easy to see that the definitions
\[y\,\sigma=T\,(\sigma\,/\,c)\,/\,T_{e};\qquad x\,\sigma=S\,(\sigma\,/\,c)\,/ \,S_{e}\]
fulfil \(ySpec\) and \(xSpec\):
\[ySpec\,t=((y\circ\tau)\,t)\;=\{\,\mathit{Refl}\,\}=\;(T\,(c\ast t\,/\,c)\,/\,T_{e})\;=\{\,?h_{0}\,\}=\;(T\,t\,/\,T_{e})\;QED\]
and similarly for \(xSpec\). Filling the \(\text{\em?}\,\mathbf{h_{0}}\) hole in the last step requires invoking congruence and a proof of \(c\ast t\,/\,c=t\). Here, the types of \(c\) and \(t\) are aliases for double precision floating point numbers for which such an equality needs to be postulated. The differential equations Eq. (8) for \(y\) and \(x\) follow then from the definitions of \(\tau\) and \(\delta\), from the specifications \(TSpec\), \(ySpec\), \(SSpec\), \(xSpec\) and from the rules for derivation. Informally:
\[\begin{array}{cl} & y\circ\tau\,\doteq\,\lambda t\,\Rightarrow\,T\,t\,/\,T_{e}\\ \Rightarrow & \{\ \text{take derivatives on both sides; chain rule: } D\,(y\circ\tau)\,t=c*D\,y\,(\tau\,t)\ \}\\ & \lambda t\,\Rightarrow\,c*D\,y\,(\tau\,t)\,\doteq\,\lambda t\,\Rightarrow\,D\,T\,t\,/\,T_{e}\\ \Rightarrow & \{\ \mathit{TSpec}:\ D\,T\,t=c*(T_{e}-T\,t);\ \text{divide by } c\ \}\\ & \lambda t\,\Rightarrow\,D\,y\,(\tau\,t)\,\doteq\,\lambda t\,\Rightarrow\,1-T\,t\,/\,T_{e}\\ \Rightarrow & \{\ \mathit{ySpec};\ \tau\ \text{surjective}\ \}\\ & D\,y\,\doteq\,\lambda\sigma\,\Rightarrow\,1-y\,\sigma\end{array}\]
We can turn this into a valid Idris proof but the point here is that the computation looks significantly more contrived than the straightforward (again, informal) derivation
\[\begin{array}{cl} & y=T\,/\,T_{e}\\ \Rightarrow & \{\ \text{take derivatives; chain rule with } \tau=c\,t\ \}\\ & c\,\dfrac{dy}{d\tau}=\dfrac{1}{T_{e}}\,\dfrac{dT}{dt}\\ \Rightarrow & \{\ \text{Eq. (6)}\ \}\\ & c\,\dfrac{dy}{d\tau}=\dfrac{c\,(T_{e}-T)}{T_{e}}\\ \Rightarrow & \{\ \text{divide by } c;\ \text{definition of } y\ \}\\ & \dfrac{dy}{d\tau}=1-y\end{array}\]
Before getting there, however, we need to build a better understanding of the questions raised in this section. In the next one, we start with the question of what it means for a physical quantity to _have a dimension_.
## 3 Dimensions, physical quantities, units of measurement
In the last section we have discussed simple examples of equations and implicit problem specifications in the context of mathematical physics and modelling. We have seen that the variables that appear in equations like Newton's second law Eq. (4) or in the ideal gas relationship are endowed with properties, like being a force or a temperature, that we do not know how to represent through the type system.
In the case of the Stommel model, we have encountered variables that were said to "have a dimension", for example \(c\). Other expressions were said to be "non-dimensional". But what does it mean for \(c\) to have a dimension? In a nutshell, it means two things:
1. That \(c\) represents a _physical quantity_ that can be measured in a system of _units_ of measurement of a given _class_.
2. That, given a class, a system of units in that class and a measurement of \(c\) in that system, one can define another system of units in the same class that gives a different measurement.
An example will help illustrating the idea. Consider the sheet of paper on which this article is printed. Assume its width to be 20 centimeters and its height to be 30 centimeters.
If we measure _lengths_ in centimeters, the measures of width and height will be 20 and 30, respectively. In meters, these measures will instead be 0.2 and 0.3. A change in the units of measurement of lengths has resulted in a change in the measures of the width and of the height of the paper: we say that the width and the height have a dimension or, equivalently, that they are dimensional quantities.
By contrast, the ratio between the height and the width of the paper is 3/2 no matter whether we measure lengths in centimeters, meters or in other units: we say that the ratio is a non-dimensional quantity.
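In terms of measures: switching from metres to centimetres divides the unit of length by 100 and multiplies the measures of width and height by the same factor, while leaving their ratio unchanged:

\[0.2\;\mapsto\;0.2*100=20,\qquad 0.3\;\mapsto\;0.3*100=30,\qquad\frac{0.3}{0.2}=\frac{30}{20}=\frac{3}{2}\]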
Notice that the distinction between dimensional and non-dimensional quantities crucially relies on the (implicit) assumption of measuring _both_ the height and the width of the paper (more generally, all lengths) with the same units.
In strongly typed languages like Idris and Agda, the judgment \(e\,:\,t\) means that expression \(e\) has type \(t\). In the physical sciences, the judgment \([\,e\,]=d\) means that expression \(e\) (representing a physical quantity) has dimension \(d\). At this point, we do not know how to formalize this judgment in type theory (we discuss how to do so in Section 5) but the idea is that \(d\) is a function of type \(\mathbb{R}_{+}^{n}\to\mathbb{R}_{+}\) where the number \(n\,:\,\mathbb{N}\) is domain-specific and \(\mathbb{R}_{+}\) denotes the set of positive real numbers.
For example, in mechanics \(n=3\). In this domain, the judgment \([\,e\,]=\lambda(L,\,T,\,M)\Rightarrow L*T^{-1}\) means that \(e\) is a quantity that can be measured in a system of units of measurements, for example SI (international) or CGS (centimeter-gram-second), for _lengths_, _times_ and _masses_, and that the measure of \(e\)_increases_ by a factor \(L*T^{-1}\) when the units for lengths, times and masses are _decreased_ by factors \(L\), \(T\) and \(M\), respectively. Another way to express
the same judgment is to say that "\(e\) is a velocity" or that "\(e\) has the dimension of a velocity". The notation was originally introduced by Maxwell, see (Barenblatt et al., 1996), section 1.1.3 and in DA (dimensional analysis) it is common to write \([\,e\,]=L\,T^{-1}\) as an abbreviation for \([\,e\,]=\lambda(L,T,M)\Rightarrow L\ast T^{-1}\).
In mechanics, physical quantities can also be measured in a system of units for _lengths_, _times_ and _forces_. This defines a different _class_ of units in the same domain. In plain geometry \(n=1\) and in classical mechanics with heat transfer \(n=4\): beside units for lengths, times and forces, in this domain we also need units for _temperatures_.
A physical quantity \(\varphi\) is called dimensionless if \([\,\varphi\,]=const\,\,1\) and dimensional otherwise.
For the reader who finds all this very confusing and suspiciously far away from the clear and deep waters of type theory: it is! As we have seen in Section 2, the standard arsenal of functional programming abstractions is not yet ready to encode the grammar of dimensions that informs the language of mathematical physics and modeling.
If we want to develop DSLs that are palatable to mathematicians, physicists and modelers, we need to spend some time wading in shallow and muddy waters. The ideas summarized in this section are discussed more authoritatively and to a greater extent in the introduction and in section 1 of (Barenblatt et al., 1996).
We will give a precise specification of the higher order function \([\,\,\,\,]\) and formalize the notion of physical quantity in type theory in Section 5. To get there, however, we need to first get an idea of the problems addressed by the theory of _similarity_ and by Buckingham's Pi theorem. This is done in the next two sections. We conclude this one with three remarks:
The first remark is that equations like Newton's second principle Eq. (4) or the ideal gas law Eq. (5) represent relationships between physical quantities. The idea is that these equations summarize empirical facts about measurements (or put forward axioms about such measurements) of these quantities. These facts (or axioms) about measurements are understood to hold under a number of assumptions, typically implicit. A crucial one is that all measurements are done in the same class and system of units.
For example, Eq. (5) maintains that measurements of the pressure \(p\) of an ideal gas are proportional to the product of measurements of density \(\rho\) and measurements of temperature \(T\), the factor of proportionality being a gas-specific constant \(R\). We come back to the idea that equations like Newton's second principle represent relationships between physical quantities and we present a more consistent interpretation of such equations in Section 6.3.
The second remark is that the result of such measurements (and thus the type of the variables entering the equations) depends on a context that is not visible in the equations themselves. For example, when the measurements of pressure, density and temperature are pertinent to a homogeneous gas, Eq. (5) can be interpreted as the specification
\[pSpec\,:\,\mu\,p=\mu\,\rho\ast\mu\,R\ast\mu\,T\]
In a context in which \(p\), \(\rho\) and \(T\) represent the pressure, the density and the temperature of a gas in local thermodynamical equilibrium, the same equation can be interpreted as the specification
\[pSpec\,:\,p\,\doteq\,\lambda(x,t)\Rightarrow\mu\,(\rho\,(x,t))\ast\mu\,R \ast\mu\,(T\,(x,t))\]
This is typically the case when a symbol that represents the pressure appears in the right hand side of a system of partial differential equations like the Euler or the Navier-Stokes equations of fluid mechanics (Chorin and Marsden, 2000). In yet another context, \(p\), \(\rho\) and
\(T\) could represent probability density functions or other higher order functions. But what could be the types of \(p\), \(\rho\), \(R\) and \(T\) in these contexts? We propose answers to this question in Sections 5.2 and 6.1.
The last remark is that, when \(p\) is a function taking values in, e.g., \((\mathbb{R},\mathcal{T})\), the judgment "\(p\) is a pressure" (or "\(p\) has the dimension of a pressure" or, \([p]=M\,L^{-1}\,T^{-2}\)) is just an abbreviation for "\(\forall\,(x,t)\,:\,(\mathbb{R},\mathcal{T}),\,\,p\,(x,t)\) is a pressure". If \(p\) is differentiable with respect to both space and time and if the space and the time coordinates are dimensional (that is, \(\forall\,(x,t)\,:\,(\mathbb{R},\mathcal{T})\), \([x]=L\) and \([t]=T\)) then the partial derivatives of \(p\) with respect to space and time have the dimensions \(M\,L^{-2}\,T^{-2}\) and \(M\,L^{-1}\,T^{-3}\), respectively. More generally, if \(p\) is a function from a dimensional space-time set into real numbers and \(p\) has dimension \(d\), the partial derivatives of \(p\) with respect to time and space are functions of the same type as \(p\) but with dimensions \(d\,T^{-1}\) and \(d\,L^{-1}\). Again, these are shortcuts for \(\lambda(L,T,M)\Rightarrow d\,(L,T,M)*T^{-1}\) and \(\lambda(L,T,M)\Rightarrow d\,(L,T,M)*L^{-1}\), respectively.
As already mentioned in Section 2, the last specification for \(p\) could be written more concisely as \(p\doteq\rho*R*T\) by introducing canonical abstractions for functions that return numerical values. To the best of our knowledge, no standard Idris library provides such abstractions although, as discussed in the introduction, there have been proposals for making types in FP languages more aware of dimensions and physical quantities, for example (Baumann et al., 2019).
## 4 Similarity theory and the Pi theorem
The notion of dimension function informally introduced in the last section is closely related with a fundamental principle in physical sciences and with a very pragmatic question in physical modeling.
We start with the latter. At the turn of the 20th century, no computers were available for approximating numerical solutions of mathematical models of physical systems, typically in the form of partial differential equations. Thus, models of physical systems, say of a ship cruising in shallow waters or of an airplane, were themselves physical systems, for convenience often at a reduced _scale_. This raised the obvious question of under which conditions careful measurements made on the scaled model could be "scaled up" to the real system and how this scaling up should be done.
For example, if the drag on a 1:50 model of a ship cruising in a water channel was found to be \(x\) Newton, what would be the drag of the real ship? And under which conditions is it possible to give a definite answer to this question?
At first glance, the problem seems to be one of engineering. But the theory that was developed to answer this question, similarity theory or dimensional analysis (DA), turned out to be a logical consequence of a fundamental principle: "physical laws do not depend on arbitrarily chosen basic units of measurement", see (Barenblatt et al., 1996), section 0.1. This is, in turn, an instance of Galileo's principle of relativity, see (Arnold, 1989), page 3.
The core results of DA can be summarized in Buckingham's Pi theorem and this theorem is also an answer to the question(s) of model similarity raised above. We do not need to be concerned with these answers and with specific applications of the Pi theorem in the
physical sciences here. But we need to understand the notions that are at the core of this theorem in order to develop a DSL that is suitable for mathematical physics and for modeling.
We introduce Buckingham's Pi theorem as it is stated in (Barenblatt et al., 1996). This formulation is consistent with textbook presentations and raises a number of questions. We flag these questions here and then tackle them from a functional programming perspective in Section 5. This will be the basis for the DSL that we build in Section 6.
### Buckingham's Pi theorem
The theorem is stated at page 42 of (Barenblatt et al., 1996):
A physical relationship between some dimensional (generally speaking) quantity and several dimensional governing parameters can be rewritten as a relationship between some dimensionless parameter and several dimensionless products of the governing parameters; the number of dimensionless products is equal to the total number of governing parameters minus the number of governing parameters with independent dimensions.
This formulation comes at the end of a derivation that starts at page 39 by positing a "physical relationship" between a "dimensional quantity" \(a\) and \(k+m\) "dimensional governing parameters" \(a_{1},\ldots,a_{k}\) and \(b_{1},\ldots,b_{m}\)
\[a=f(a_{1},\ldots,a_{k},\,b_{1},\ldots,\,b_{m}) \tag{4.1}\]
such that \(a_{1},\ldots,a_{k}\) "have independent dimensions, while the dimensions of parameters \(b_{1},\ldots,b_{m}\) can be expressed as products of powers of the dimensions of the parameters \(a_{1},\ldots,a_{k}\)":
\[[b_{i}]=[a_{1}]^{p_{i1}}\ldots[a_{k}]^{p_{ik}}\quad i=1,\ldots,m \tag{4.2}\]
With these premises, the conclusions are then 1) that "the dimension of \(a\) must be expressible in terms of the dimensions of the governing parameters \(a_{1},\ldots,a_{k}\)":
\[[a]=[a_{1}]^{p_{1}}\ldots[a_{k}]^{p_{k}} \tag{4.3}\]
and 2) that the function \(f\) "possesses the property of generalized homogeneity or symmetry, i.e., it can be written in terms of a function of a smaller number of variables, and is of the following special form":
\[f(a_{1},\ldots,a_{k},\,b_{1},\ldots,\,b_{m})=a_{1}^{p_{1}}\ldots a_{k}^{p_{k}}\,\,\Phi(\Pi_{1},\ldots,\Pi_{m}) \tag{4.4}\]
where \(\Pi_{i}=b_{i}/(a_{1}^{p_{i1}}\ldots a_{k}^{p_{ik}})\) for \(i=1,\ldots,m\). On page 42ff, the author comments that the term "physical relationship" for \(f\) "is used to emphasize that it should obey the covariance principle" and, further, that the \(\Pi\)-theorem is "completely obvious at an intuitive level" and that "it is clear that physical laws should not depend on the choice of units" and that this "was realized long ago, and concepts from dimensional analysis were in use long before the \(\Pi\)-theorem had been explicitly recognized, formulated and proved formally" among others, by "Galileo, Newton, Fourier, Maxwell, Reynolds and Rayleigh". Indeed, one of the most successful applications of the theorem was Reynolds' scaling law for fluid flows in pipes in 1883, well before Buckingham's seminal papers (Buckingham, 1914, 1915).
Here we do not discuss applications of the \(\Pi\)-theorem, but its relevance for data analysis, parameter identification, and sensitivity analysis is obvious: the computational cost of these
algorithms is typically exponential in the number of parameters. The theorem allows cost reductions that are exponential in the number of parameters with independent dimensions. In the case of \(f\), for example, the theorem allows one to reduce the cost from \(N^{k+m}\) to \(N^{m}\), where \(N\) denotes a sampling size. In data-based modelling and machine learning, this can make the difference between being able to solve a problem in principle and being able to solve it in practice.
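To make the orders of magnitude concrete (the numbers are purely illustrative): for \(k=3\) governing parameters with independent dimensions, \(m=2\) dependent ones and a sampling size \(N=100\), a naive parameter sweep requires

\[N^{k+m}=100^{5}=10^{10}\qquad\text{versus}\qquad N^{m}=100^{2}=10^{4}\]

evaluations after the reduction granted by the theorem.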
The bottom line is that, even though DA and the \(\Pi\)-theorem were formulated at a time in which computer based modeling was not available and were mainly motivated by the questions of model similarity mentioned above, they are still very relevant today.
For us, the challenge is to understand how to apply dependent types to 1) systematically check the dimensional consistency of expressions and 2) assist DA.
In Section 3, we have seen that what in mathematical physics and modeling are called physical quantities are equipped with a dimension function. In analogy with the judgment \(e\,:\,t\) (or, _type_\(e\,{=}\,t\)), we have introduced informally the judgment \([\,e\,]\,{=}\,d\) to denote that expression \(e\) has dimension function \(d\). With the \(\Pi\)-theorem, we have seen that physical quantities may have "independent dimensions" or be dimensionally dependent. With Eqs. (4.2) and (4.3), we have also encountered (systems of) equations between dimension functions whose solutions play a crucial role in the \(\Pi\)-theorem, Eq. (4.4). In the next section we come back to the notion of dimension function of a physical quantity \([\,x\,]\,:\,\mathbb{R}_{+}^{n}\,\rightarrow\,\mathbb{R}_{+}\), discuss its relationship with units of measurement, and argue that \([\,x\,]\) is a power-law monomial. This property is at the core of the \(\Pi\)-theorem and of the dependently typed formalization of DA outlined in Sections 5 and 6.
## 5 The Pi theorem in Type Theory
In Section 3, we said that the dimension function of a physical quantity \(x\), \([\,x\,]\,:\,\mathbb{R}_{+}^{n}\,\rightarrow\,\mathbb{R}_{+}\), encodes the idea that \(x\) can be measured in a system of \(n\) fundamental units of measurement and that the number \([\,x\,]\,(L_{1},\,\ldots,\,L_{n})\) denotes the factor by which the measure of \(x\) increases (is multiplied) when the \(n\) units are decreased (divided) by \(L_{1},\,\ldots,\,L_{n}\).
We can give a precise meaning to this idea by denoting the measure of \(x\) in the units of measurement \(u_{1},\,\ldots,\,u_{n}\) by \(\mu\,x\,(u_{1},\,\ldots,\,u_{n})\). For the time being, we posit \(\mu\,x\,(u_{1},\,\ldots,\,u_{n})\,:\,\mathbb{R}\) but the reader should keep in mind that \(x\) could represent a velocity or a stress tensor and the result of measuring \(x\) could be a vector or a tensor of real numbers.
Similarly, we do not specify the types of \(x\) and of \(u_{1},\,\ldots,\,u_{n}\) here: one of the goals of this work is to come up with suitable types for physical quantities and units of measurement, see Section 5.2.
### Dimension function
With these premises, we can make precise the notion of dimension function through:
\[[\,x\,]\,(L_{1},\,\ldots,\,L_{n})=\frac{\mu\,x\,(u_{1}\,/\,L_{1},\,\ldots,u_{n}\,/\,L_{n})}{\mu\,x\,(u_{1},\,\ldots,u_{n})} \tag{5.1}\]
The specification suggests that the dimension function of \(x\) does not depend on the units of measurement. It formalizes the principle (of covariance, relativity of measurements) that
there is no privileged system of units of measurement or, in other words, that all systems are equally good:
\[\frac{\mu\,x\,(u_{1}\,/\,L_{1},\ldots,u_{n}\,/\,L_{n})}{\mu\,x\,(u_{1},\ldots,u_{n})}=\frac{\mu\,x\,(u_{1}^{\prime}\,/\,L_{1},\ldots,u_{n}^{\prime}\,/\,L_{n})}{\mu\,x\,(u_{1}^{\prime},\ldots,u_{n}^{\prime})} \tag{5.2}\]
for any physical quantity \(x\), systems of units \(u_{1},\ldots,u_{n}\) and \(u_{1}^{\prime},\ldots,u_{n}^{\prime}\) and scaling factors \(L_{1},\ldots,L_{n}\). It is easy to see that the principle (5.2) implies that the dimension function fulfills
\[[x]\,\left(L_{1}\,/\,L_{1}^{\prime},\ldots,L_{n}\,/\,L_{n}^{\prime}\right)=\frac{[x]\,\left(L_{1},\ldots,L_{n}\right)}{[x]\,\left(L_{1}^{\prime},\ldots,L_{n}^{\prime}\right)} \tag{5.3}\]
by equational reasoning
\[\begin{array}{cl} & [x]\,(L_{1},\ldots,L_{n})\,/\,[x]\,(L_{1}^{\prime},\ldots,L_{n}^{\prime})\\ = & \{\ \text{use Eq. (5.1): def. of } [x] \text{ for units } (u_{1},\ldots,u_{n})\ \}\\ & \mu\,x\,(u_{1}/L_{1},\ldots,u_{n}/L_{n})\,/\,\mu\,x\,(u_{1}/L_{1}^{\prime},\ldots,u_{n}/L_{n}^{\prime})\\ = & \{\ \text{let } u_{1}^{\prime}=u_{1}/L_{1}^{\prime},\ \ldots,\ u_{n}^{\prime}=u_{n}/L_{n}^{\prime}\ \}\\ & \mu\,x\,(u_{1}^{\prime}/(L_{1}/L_{1}^{\prime}),\ldots,u_{n}^{\prime}/(L_{n}/L_{n}^{\prime}))\,/\,\mu\,x\,(u_{1}^{\prime},\ldots,u_{n}^{\prime})\\ = & \{\ \text{use Eq. (5.1): def. of } [x] \text{ for units } (u_{1}^{\prime},\ldots,u_{n}^{\prime})\ \}\\ & [x]\,(L_{1}/L_{1}^{\prime},\ldots,L_{n}/L_{n}^{\prime})\end{array}\]
From Eq. (5.3) it follows that
\[[x]\,\left(1,\ldots,1\right)=1 \tag{5.4}\]
and that dimension functions have the form of power-law monomials, see (Barenblatt et al., 1996) section 1.1.4
\[[x]\,\left(L_{1},\ldots,L_{n}\right)=L_{1}^{d_{1}\,x}*\cdots*L_{n}^{d_{n}\,x} \tag{5.5}\]
The exponents \(d_{1}\,x,\ldots,d_{n}\,x\) are sometimes called (perhaps confusingly) the "dimensions" of \(x\) and \(x\) is said to "have dimensions" \(d_{1}\,x,\ldots,d_{n}\,x\). Their value can be obtained by recalling the specification Eq. (5.1).
For concreteness, consider the case of mechanics already discussed in Section 3. Here \(n=3\) and, in a system of units of measurement for lengths, times and masses, the scaling factors \(L_{1}\), \(L_{2}\) and \(L_{3}\) are denoted by \(L\), \(T\) and \(M\). For consistency, we denote the exponents \(d_{1}\,x,\,d_{2}\,x\) and \(d_{3}\,x\) by \(d_{L}\,x,\,d_{T}\,x\) and \(d_{M}\,x\), respectively.
Thus, in mechanics, Eqs. (5.1) and (5.5) tell us that \(L^{d_{L}\,\,x}*M^{d_{M}\,\,x}*T^{d_{T}\,\,x}\) is the factor by which the measure of \(x\) gets multiplied when the units of measurement for lengths, masses and times are divided by \(L\), \(M\) and \(T\). Therefore, when \(x\) represents a length (for example, the distance between two points or a space coordinate) we have \(d_{L}\,x=1\) and \(d_{M}\,x=d_{T}\,x=0\). Similarly, when \(x\) represents a mass we have \(d_{M}\,x=1\) and \(d_{L}\,x=d_{T}\,x=0\) and when \(x\) represents a time we have \(d_{T}\,x=1\) and \(d_{L}\,x=d_{M}\,x=0\). And when \(x\) represents a velocity (a distance divided by a time), the factor by which the measure of \(x\) gets multiplied when the units of measurement for lengths, masses and times are divided by \(L\), \(M\) and \(T\) shall be \(L/T\) and thus \(d_{L}\,x=1\), \(d_{M}\,x=0\) and \(d_{T}\,x=-1\).
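For instance, spelling out Eq. (5.5) in this class for a velocity \(v\), an acceleration \(a\) and a force \(F\):

\[[\,v\,]\,(L,T,M)=L\,T^{-1},\qquad[\,a\,]\,(L,T,M)=L\,T^{-2},\qquad[\,F\,]\,(L,T,M)=M\,L\,T^{-2}\]

that is, \(d_{L}\,v=1\), \(d_{T}\,v=-1\), \(d_{M}\,v=0\) for a velocity, and so on.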
These judgments are a consequence of the notion of (direct or indirect) measurement as _counting_: when we say that the length of a pen is 20 centimeters we mean that we have to add 20 times the length of a centimeter to obtain the length of that pen.
The above analysis suggests that, in classical mechanics, the type of dimension functions is isomorphic to \(\mathbb{Z}^{3}\). Thus, in this domain, we can represent a dimension function with a vector of integers of length 3:
**namespace**_Mechanics_
**namespace**_LTM_
**data**_Units = SI | _CGS_
_D : Type_
_D = Vect 3 ℤ_
Here, we have embedded the datatypes _Units_ (with just two data constructors _SI_ and _CGS_ to denote the international and the centimeter-gram-second systems of units, respectively) and \(D\) in the namespaces _Mechanics_ and _LTM_, the latter representing the class of units for lengths, times and masses, see Section 3. Following our analysis, we can model the syntax of dimensional expressions in terms of a _DimLess_ vector for dimensionless quantities, of _Length_, _Time_ and _Mass_ vectors for lengths, times and masses (the dimensions associated with the fundamental units of measurement in _LTM_)
DimLess : D;        Length : D;        Time : D;        Mass : D
DimLess = [0,0,0];  Length = [1,0,0];  Time = [0,1,0];  Mass = [0,0,1]

and of two combinators _Times_ and _Over_:
Times : D → D → D;   Over : D → D → D
Times = (+);         Over = (−)

These correspond to the idea that the dimensions of derived units of measurement (for example, for velocities or for energies) are obtained by multiplying or by dividing the dimensions of the fundamental units:
Velocity : D;                        Acceleration : D
Velocity = Length 'Over' Time;       Acceleration = Velocity 'Over' Time

Force : D;                           Work : D
Force = Mass 'Times' Acceleration;   Work = Force 'Times' Length

Energy : D
Energy = Mass 'Times' (Velocity 'Times' Velocity)

One can easily check that energy and mechanical work are equivalent, as one would expect
check₁ : Energy = Work
check₁ = Refl

that force and energy are different notions
check₂ : Not (Force = Energy)
check₂ Refl impossible

and, of course, compute the dimension functions of _D_-values:
df : D → ℝ₊³ → ℝ₊
df d ls = foldr (*) 1.0 (zipWith pow ls ds)  where
  ds : Vect 3 ℝ
  ds = map fromInteger d

Notice that the fact that the exponents of the dimension function are integers makes dimensional judgments decidable (which would not hold for real-valued exponents).
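As a small usage example (recall that passing from metres, seconds and kilograms to centimetres, seconds and grams amounts to dividing the fundamental units by 100, 1 and 1000): \(df\ \mathit{Velocity}\) applied to the factors \((100,1,1000)\) yields \(100^{1}*1^{-1}*1000^{0}=100\), the factor by which the measure of a velocity grows when switching from metres per second to centimetres per second, while \(df\ \mathit{DimLess}\) yields 1 for any scaling factors.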
We come back to this idea in Section 6 where we put forward specifications for data types that implement dimension functions in terms of type classes. For the rest of this section, we stick to the example of mechanics and to the representation of dimension functions in terms of vectors of three integer exponents.
### Physical quantities
We are now ready to formalize the notion of physical quantity informally introduced in Section 3. There, we posited that a parameter (e.g., the parameter \(c\) of the Stommel model discussed in Section 2.2) represents a _dimensional_ physical quantity when 1) that parameter can be measured in a system of units of a given class and 2) one can define another system of units in the same class that gives a different measurement for the parameter. By contrast, measurements of _dimensionless_ physical quantities do not change when the units of measurement are re-scaled.
This suggests that, in the context of mechanics and in the class of units for lengths, times and masses _LTM_, a (dimensional or dimensionless) physical quantity can be represented by values of type
**data**\(Q\) : \(D\) _\(\rightarrow\) Type_**where**
_Val_ : \(\{d\) : \(D\}\) _\(\rightarrow\) (\(u\) : Units) \(\rightarrow\)\(\mathbb{R}\) _\(\rightarrow\) Q \(d\)
This allows one to annotate \(\mathbb{R}\) values with different dimensions and systems of units, for example
\(x\) : \(Q\) _Length_; \(t\) : \(Q\) _Time_; \(m\) : \(Q\) _Mass_
_x = Val SI 3_;  _t = Val CGS 1_;  _m = Val SI 2_
What kind of combinators shall be defined for physical quantities? As a minimum, one wants to be able to "compute" the \(D\)-value of a physical quantity
\(dim\) : \(\{d\) : \(D\}\) \(\rightarrow\) _Q _d \(\rightarrow\) D_
_dim_ \(\{d\}\) _ = \(d\)
and its dimension function _df_\(\circ\)_dim_ and to measure physical quantities in different units of measurement
\(\mu\) : \(\{d\) : \(D\}\) \(\rightarrow\) _Q _d \(\rightarrow\) Units \(\rightarrow\)_\(\mathbb{R}\)
μ {d} (Val u x) u' = x * df d (scale u u')   -- scale: see below

where _scale u u'_ (an auxiliary function whose straightforward definition we do not spell out) yields the factors by which the fundamental units of \(u\) have to be divided in order to obtain those of \(u'\), for example _scale SI CGS_ = [100, 1, 1000]. The number and the system of units stored in a physical quantity can be recovered by pattern matching, as done in the definition of \(\mu\). With this minimal infrastructure one can define new physical quantities from existing ones
* \(v\,:\,Q\)_Velocity_
* \(v=(x+x)\)/\(t\)
and implement dimensional judgments for verified programming like
* \(\begin{array}{l}\mathit{check}_{3}\,:\,\mathit{dim}\,(x\,/\,(t*t))=\mathit{ Acceleration}\\ \mathit{check}_{3}=\mathit{Ref}\end{array}\)
Here, values of type \(\mathit{dim}\,(x\,/\,(t*t))=\mathit{Acceleration}\) witness that \(x\,/\,(t*t)\) is an acceleration. We have seen many examples of this kind of dimensional judgments in previous sections: \(f\) is a force, \(m\) is a mass, \(T\) is a temperature, \(p\) is a pressure, etc. We can express these judgments through the idiom
* \(\begin{array}{l}\mathit{Is}\,:\,\{d\,:\,D\}\,\rightarrow\,Q\,d\,\rightarrow \,D\,\rightarrow\,\mathit{Type}\\ \mathit{Is}\,q\,d=\mathit{dim}\,q=d\end{array}\)
and write \(\mathit{Is}\,q\,d\) instead of \(\mathit{dim}\,q=d\):
* \(\begin{array}{l}\mathit{check}_{4}\,:\,\mathit{Is}\,(m*x\,/\,(t*t))\, \mathit{Force}\\ \mathit{check}_{4}=\mathit{Ref}\end{array}\)
Similarly, we can assess whether a physical quantity is dimensionless or not
* \(\begin{array}{l}\mathit{IsDimLess}\,:\,\{d\,:\,D\}\,\rightarrow\,Q\,d\, \rightarrow\,\mathit{Type}\\ \mathit{IsDimLess}\,q=\mathit{dim}\,q=\mathit{DimLess}\\ \mathit{check}_{5}\,:\,\mathit{IsDimLess}\,((x+x)\,/\,x)\\ \mathit{check}_{5}=\mathit{Ref}\end{array}\)
As one would expect, dimensionless quantities are invariant under re-scaling of the units of measurement
λΠ> μ ((x + x) / x) SI
2.0 : Double
λΠ> μ ((x + x) / x) CGS
2.0 : Double
and the dimension function fulfills the specification Eq. (5.1): by the definition of \(\mu\), measurements only depend on the scaling factors between the units of measurement, not on the units themselves.
Notice, however, that for the computation of the period \(\tau\) of "small" oscillations of a simple pendulum of length \(l\) in a gravitational field of strength \(g\) (perhaps the most discussed application of dimensional analysis)
* \(\begin{array}{l}\mathit{l}\,:\,Q\,\mathit{Length};\quad g\,:\,\mathit{Q} \,\mathit{Acceleration};\quad\pi\,:\,Q\,\mathit{DimLess}\\ \mathit{l}\,=\mathit{Val}\,\mathit{SI}\,\mathit{0.5};\quad g\,=\mathit{Val} \,\mathit{SI}\,\mathit{9.81};\quad\quad\quad\pi\,=\mathit{Val}\,\mathit{SI} \,\mathit{3.14}\end{array}\)
the definition
τ : Q Time
τ = 2 * π * sqrt (l / g)
does not type check. This is because of two reasons. The first one is that we have not defined rules for the multiplication between numerical literals and physical quantities. The second
reason is that we have not defined the computation of the square root of physical quantities. We come back to this point in the next section.
### Dimensional (in)dependence
**Dependence.** Remember that the Pi theorem is about (two) properties of a generic "physical relationship" \(f\) between a "dimensional quantity" \(a\) and \(k+m\) "dimensional governing parameters" \(a_{1}\), \(\ldots\), \(a_{k}\) and \(b_{1}\), \(\ldots\), \(b_{m}\).
One conclusion of the theorem is that "the dimension of \(a\) can be expressed as a product of the powers of the dimensions of the parameters \(a_{1}\), \(\ldots\), \(a_{k}\)" as formulated in Eq. (4.3). We have just seen examples of physical quantities whose dimension functions can be expressed as products of powers of the dimension functions of other physical quantities:
\[[l]=[g]\,[\tau]^{2},\quad[g]=[l]\,[\tau]^{-2},\quad\text{or}\quad[\tau]=[l]^{1/ 2}[g]^{-1/2}\]
In the \(D\)-language, we can express and assert the first equality with
\(\mathit{check}_{6}\,:\,\mathit{dim}\,l=(\mathit{dim}\,g\,\text{`Times'}\,( \mathit{dim}\,\tau\,\text{`Times'}\,\mathit{dim}\,\tau))\)
\(\mathit{check}_{6}=\mathit{Refl}\)
or, equivalently
\(\mathit{check}_{7}\,:\,\mathit{dim}\,l=\mathit{dim}\,g\,\text{`Times'}\, \mathit{Pow}\,(\mathit{dim}\,\tau)\,2\)
\(\mathit{check}_{7}=\mathit{Refl}\)
and similarly for the second equality. In the definition of \(\mathit{check}_{7}\) we have used the integer exponentiation function \(\mathit{Pow}\,:\,D\,\rightarrow\,\mathbb{Z}\,\rightarrow\,D\). This fulfills the specification
Pow d 0       = DimLess
Pow d (n + 1) = Pow d n 'Times' d
Pow d (n - 1) = Pow d n 'Over' d
Notice, however, that formulating \([\tau]=[l]^{1/2}[g]^{-1/2}\) would require fractional exponents and extending our representation of dimension functions to vectors of rational numbers. In other words: the exponents in Eq. (4.2) (and those in Eq. (4.3)) are, in general, rational numbers and representing dimension functions in terms of vectors of rational numbers is perhaps the most natural setting for formulating the Pi theorem in type theory.
The drawback of this approach is that it requires implementing rational numbers in type theory. This is not a problem in principle, but integers have a simpler algebraic structure (initial ring) and more efficient implementations. Also, formulating the Pi theorem in terms of rational exponents does not seem strictly necessary: if dimensional analysis with rational exponents allows one to deduce that the period \(\tau\) of a simple pendulum scales with \((l/g)^{\frac{1}{2}}\), integer-based dimensional analysis should be enough to deduce that \(\tau^{2}\) scales with \(l/g\)!
We explore this possibility in the next section. To this end, let's formalize the notion that \(d\,:\,D\) is dependent on \(ds\,:\,\mathit{Vect}\,k\,D\) iff a non-zero integer power of \(d\) can be expressed as a product of integer powers of \(ds\):
\(\mathit{IsDep}\,:\,\{k:\mathbb{N}\}\rightarrow(d:D)\rightarrow(ds:\,\mathit{ Vect}\,k\,D)\,\rightarrow\,\mathit{Type}\)
\(\mathit{IsDep}\,\{k\}\,d\,ds=\mathit{Exists}\,(\mathbb{Z},\,\mathit{Vect}\,k\, \mathbb{Z})\) (\(\lambda(p,ps)\Rightarrow(\mathit{Not}\,(p=0),\,\mathit{Pow}\,d\,p=\mathit{ ProdPows}\,ds\,ps)\))
where the function \(\mathit{ProdPows}\) implements the computation of products of integer powers of vectors of \(D\) values. It fulfills the specification
ProdPows : {n : ℕ} → Vect n D → Vect n ℤ → D
ProdPows Nil       Nil       = DimLess
ProdPows (d :: ds) (p :: ps) = Pow d p 'Times' ProdPows ds ps
Notice that, because of the definition of _DimLess_, _Times_ and _Over_ from Section 5.1, _Pow_ and _ProdPows_ also fulfill
_Pow d n = n * d_
_ProdPows ds ps = foldr_ (+) _DimLess_ (_zipWith_ (*) _ps ds_)
where (*) is generic scaling for vectors of numerical types:
\((*):\{n:\mathbb{N}\}\rightarrow\{T:\textit{Type}\}\rightarrow\textit{Num}\,T \Rightarrow T\rightarrow\textit{Vect}\,n\,T\rightarrow\textit{Vect}\,n\,T\)
\(x*v=\textit{map}\,(x*)\,v\)
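A couple of concrete instances, using the properties just stated and the \(D\)-values from Section 5.1:

\[\mathit{Pow}\ \mathit{Acceleration}\ 2 = 2*[1,-2,0] = [2,-4,0],\qquad \mathit{ProdPows}\ [\mathit{Length},\mathit{Time}]\ [1,-2] = [1,0,0]+(-2)*[0,1,0] = [1,-2,0] = \mathit{Acceleration}\]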
Therefore, in the \(D\)-language, dependence between dimension functions boils down to linear dependence between their representations, as one would expect. By extension, we say that a physical quantity \(q\) is _dimensionally dependent_ on a vector of physical quantities \(qs\) if _dim_\(q\) is dependent on the dimensions of \(qs\):
_IsDimDep_ :{\(d:D\)} \(\rightarrow\{k:\mathbb{N}\}\rightarrow\{ds:\textit{Vect}\,k\,D\} \rightarrow\textit{Q}\,d\rightarrow\textit{QVect}\,k\,ds\rightarrow\textit{ Type}\)
_IsDimDep_ {\(d\)} {\(ds\)} _q qs = IsDep d ds
In the definition of _IsDimDep_ we have applied the data type _QVect_ for vectors of physical quantities (of different dimensions):
**data**_QVect_ :(_n_ : \(\mathbb{N}\)) \(\rightarrow\)_Vect_ \(n\,D\)__\(\rightarrow\)_Type_**where**
_Nil_ : QVect Z Nil_
\((::):\{n:\mathbb{N}\}\rightarrow\{d:D\}\rightarrow\{ds:\textit{Vect}\,n\,D\} \rightarrow\textit{Q}\,d\rightarrow\textit{QVect}\,n\,ds\rightarrow\textit{ QVect}\,(S\,n)\) (_\(d::ds\)_)
Notice that _IsDimDep a as_ is an existential type. In order to assess that \(a\) is dimensionally dependent on \(as\), one has to provide suitable integer exponents and an equality proof. For the simple pendulum, for example:
check₈ : IsDimDep τ [l, g]
check₈ = Evidence (2, [1, -1]) (not2eq0, Refl)
where _not2eq0_ is a proof that 2 is not equal to 0. This evidence is just another way of asserting the equality
check₉ : Pow (dim τ) 2 = Pow (dim l) 1 'Times' Pow (dim g) (-1)
check₉ = Refl
For the simple pendulum example, this dependence allows one to deduce that, under quite general assumptions6, the period of oscillation \(\tau\) is proportional to the square root of \(l/g\). Given a physical quantity which is dimensionally dependent on other physical quantities, one can make it dimensionless, like the "\(\Pi\)" quantities of the Pi theorem:
Footnote 6: The assumptions are that the period of oscillation of the pendulum only depends on its mass, its length and on the strength of the gravitational field.
dimMakeDimLess : {k : ℕ} → (d : D) → (ds : Vect k D) → IsDep d ds → D
dimMakeDimLess d ds (Evidence (p, ps) _) = Pow d p 'Over' ProdPows ds ps
It is easy to check that _makeDimLess_, the corresponding operation on physical quantities, does indeed yield dimensionless results for specific computations, for example:
check₁₀ : IsDimLess (makeDimLess τ [l, g] check₈)
check₁₀ = Refl
However, proving this in general requires postulating that \(d\) '_Over' \(d\) is equal to _DimLess_ for arbitrary _d_:
dOdIsDimLess : {d : D} → d 'Over' d = DimLess
This is not a problem and suggests that \(D\) has to be a group (see Section 6).
**Independence.** Remember that a second condition for the function \(f\) of the Pi theorem to be reducible to the form of Eq. (4.4) is that the parameters \(a_{1},\ldots,a_{k}\) "have independent dimensions". Perhaps not surprisingly, the idea is that \(\mathit{qs}\,:\,\mathit{QVect}\,(S\,n)\,\mathit{ds}\) are dimensionally independent iff expressing _DimLess_ as a product of powers of their dimension functions requires all exponents to be zero. This is equivalent to saying that the vectors associated with their dimensions are linearly independent, see (Barenblatt et al., 1996) section 1.1.5:
AreDimIndep : {n : ℕ} → {ds : Vect n D} → QVect n ds → Type
AreDimIndep {ds} _ = AreIndep ds
In the definition of _AreDimIndep_ we have applied the predicate _AreIndep_. In mechanics, \(n=3\) and _AreIndep_ can be defined straightforwardly and applied to assess the dimensional independence of physical quantities:
check₁₁ : AreDimIndep [l, g]
check₁₁ = Refl

check₁₂ : Not (AreDimIndep [τ, l, g])
check₁₂ Refl impossible
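A remark on the underlying linear algebra (rather than on the Idris encoding): with \(\mathit{Length}=[1,0,0]\), \(\mathit{Time}=[0,1,0]\) and \(\mathit{Acceleration}=[1,-2,0]\), the dimensions of \(l\) and \(g\) are independent because the minor formed by their first two components does not vanish,

\[\det\begin{pmatrix}1&0\\1&-2\end{pmatrix}=-2\neq 0,\]

while stacking the dimensions of \(\tau\), \(l\) and \(g\) yields a matrix whose third column is zero,

\[\det\begin{pmatrix}0&1&0\\1&0&0\\1&-2&0\end{pmatrix}=0,\]

so that \([\tau,l,g]\) cannot be dimensionally independent.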
We can now encode the two conclusions of the Pi theorem, Eqs. (4.3) and (4.4), as types \(Pi_{4.3}\) and \(Pi_{4.4}\), both with implicit parameters \(k,m\,:\,\mathbb{N}\), \(ds\,:\,\mathit{Vect}\,k\,D\), \(ds^{\prime}\,:\,\mathit{Vect}\,m\,D\) and \(d\,:\,D\). As discussed in Section 4.1, the term "physical relationship" is used in (Barenblatt et al., 1996) to denote a function that fulfills the "covariance principle".
We have seen in Section 5.1 that the covariance principle (or principle of relativity of measurements) posits that there is no privileged system of units of measurement or, equivalently, that all systems are equally good. So far, we have formalized the notion of covariance for dimension functions (through the specification (5.1)) but we have not discussed this notion for a generic function between dimensional quantities. We will come back to this in Section 6.3. For the time being, let's assume that \(\mathit{IsCovariant}\,f\) is a type that explains what it means for \(f\) to fulfill the covariance principle:
\[Pi_{4.3}=(f\,:\,\mathit{QVect}\,k\,ds\,\to\,\mathit{QVect}\,m\,ds^{\prime}\,\to\,Q\,d)\,\to\,(h_{1}\,:\,\mathit{IsCovariant}\,f)\,\to\ldots\]
We need to formalize the two assumptions Eqs. (4.1) and (4.2) about the arguments of \(f\). The first one states that \(a_{1},\ldots,a_{k}\) "have independent dimensions". We have seen how to formalize this assumption in Section 5.3:
\[Pi_{4.3}=(f\,:\,\mathit{QVect}\,k\,ds\,\to\,\mathit{QVect}\,m\,ds^{\prime}\,\to\,Q\,d)\,\to\,(h_{1}\,:\,\mathit{IsCovariant}\,f)\,\to\,(h_{2}\,:\,\mathit{AreIndep}\,ds)\,\to\,\ldots\]
The second assumption of the Pi theorem, Eq. (4.2), specifies \(m\) equalities between dimension functions. We have seen that equality between dimension functions boils down to equality in \(\mathbb{Z}^{n}\) (in mechanics \(n=3\)) and is thus decidable.
In Section 5.3, we have also seen that the exponents in Eq. (4.2) are rational numbers and that we can rewrite these equalities as
\[[b_{i}]^{p_{i}}=[a_{1}]^{p_{i1}}\ldots\,[a_{k}]^{p_{ik}}\quad i=1,\ldots,m \tag{5.6}\]
with integers \(p_{i},p_{i,1},\ldots,p_{i,k}\) as we have done for the simple pendulum example. This states that the dimension functions of the physical quantities of the second argument \(bs\) of \(f\) can be expressed as products of powers of the dimension functions of the physical quantities of the first argument
\[Pi_{4.3}=(f\,:\,\mathit{QVect}\,k\,ds\,\to\,\mathit{QVect}\,m\,ds^{\prime}\,\to\,Q\,d)\,\to\,(h_{1}\,:\,\mathit{IsCovariant}\,f)\,\to\,(h_{2}\,:\,\mathit{AreIndep}\,ds)\,\to\,(h_{3}\,:\,\mathit{AreDep}\,ds^{\prime}\,ds)\,\to\,\ldots\]
where \(h_{3}\,:\,AreDep\,ds^{\prime}\,ds\) is a vector of \(IsDep\) proofs, one for each element of \(ds^{\prime}\):
\[\begin{array}{l}\mbox{\bf data}\ AreDep\,:\,\{k^{\prime},k:\mathbb{N}\}\to(ds^{\prime}\,:\,Vect\,k^{\prime}\,D)\,\to\,(ds\,:\,Vect\,k\,D)\,\to\,Type\ \mbox{\bf where}\\ \quad Nil\,:\,\{k:\mathbb{N}\}\to\{ds\,:\,Vect\,k\,D\}\,\to\,AreDep\,Nil\,ds\\ \quad(::)\,:\,\{d^{\prime}:D\}\to\{k:\mathbb{N}\}\to\{ds\,:\,Vect\,k\,D\}\to\{k^{\prime}:\mathbb{N}\}\to\{ds^{\prime}\,:\,Vect\,k^{\prime}\,D\}\,\to\\ \qquad\quad\mathit{IsDep}\,d^{\prime}\,ds\,\to\,AreDep\,ds^{\prime}\,ds\,\to\,AreDep\,(d^{\prime}\,::\,ds^{\prime})\,ds\end{array}\]
With these premises, the Pi theorem warrants the existence of exponents \(p_{1},\ldots,p_{k}\) and of a function \(\Phi\) such that the equalities (4.3) and (4.4) do hold. As for Eq. (5.6), these are rational numbers but we can reformulate Eq. (4.3) as:
\[[a]^{p}=[a_{1}]^{p_{1}}\ldots\,[a_{k}]^{p_{k}} \tag{5.7}\]
with integer exponents \(p,p_{1},\ldots,p_{k}\) and the first conclusion of the Pi theorem (with all its implicit arguments) as
\[\begin{array}{l}Pi_{4.3}=\{k,m:\mathbb{N}\}\to\{ds:\mathit{Vect}\,k\,D\}\to\{ds^{\prime}:\mathit{Vect}\,m\,D\}\to\{d:D\}\to\\ \quad(f\,:\,\mathit{QVect}\,k\,ds\,\to\,\mathit{QVect}\,m\,ds^{\prime}\,\to\,Q\,d)\,\to\,(h_{1}\,:\,\mathit{IsCovariant}\,f)\,\to\\ \quad(h_{2}\,:\,\mathit{AreIndep}\,ds)\,\to\,(h_{3}\,:\,\mathit{AreDep}\,ds^{\prime}\,ds)\,\to\,\mathit{IsDep}\,d\,ds\end{array}\]
The second conclusion of the Pi theorem is Eq. (4.4). This states the existence of a function \(\Phi\) that allows one to express \(f\,as\,bs\), raised to the power \(p\), as a product of powers of the \(as\) times \(\Phi\) applied to the non-dimensional "\(\Pi\)" fractions of Eq. (4.4):
\[\begin{array}{l}\mathit{Pi}_{4.4}=\{k,m:\mathbb{N}\}\to\{ds:\mathit{Vect}\,k\,D\}\to\{ds^{\prime}:\mathit{Vect}\,m\,D\}\to\{d:D\}\to\\ \quad(f:Q\mathit{Vect}\,k\,ds\to Q\mathit{Vect}\,m\,ds^{\prime}\to Q\,d)\to(h_{1}:\mathit{IsCovariant}\,f)\to\\ \quad(h_{2}:\mathit{AreIndep}\,ds)\to(h_{3}:\mathit{AreDep}\,ds^{\prime}\,ds)\to\\ \quad\mathit{Exists}\ (Q\mathit{Vect}\,m\,(\mathit{dimMakeAllDimLess}\,ds^{\prime}\,ds\,h_{3})\to Q\,\mathit{DimLess})\\ \quad\phantom{\mathit{Exists}}\ (\lambda\Phi\Rightarrow(as:Q\mathit{Vect}\,k\,ds)\to(bs:Q\mathit{Vect}\,m\,ds^{\prime})\to\\ \quad\quad\mathbf{let}\ (p,ps)=\mathit{exponents}\ (\pi_{4.3}\,f\,h_{1}\,h_{2}\,h_{3})\\ \quad\quad\phantom{\mathbf{let}\ }\Pi s=\mathit{makeAllDimLess}\,bs\,as\,h_{3}\\ \quad\quad\mathbf{in}\ \mathit{pow}\,(f\,as\,bs)\,p=\mathit{prodPows}\,as\,ps*\Phi\,\Pi s)\end{array}\]
Thus, the second conclusion of the Pi theorem is an existential type which depends on the first conclusion through the integer exponents \(p\), \(p_{1}\), \(\ldots\), \(p_{k}\), and one needs to postulate \(\pi_{4.3}:\mathit{Pi}_{4.3}\) in order to define \(\mathit{Pi}_{4.4}\). Alternatively, one could formulate the two conclusions as a dependent pair. Notice that, in order to define \(\mathit{Pi}_{4.4}\), we need to "compute" the type of \(\Phi\), the exponents \(p\), \(ps\) and the "\(\Pi\)" fractions \(\Pi s\). We compute the domain of \(\Phi\) by applying the function \(\mathit{dimMakeAllDimLess}\). This is an extension of \(\mathit{dimMakeDimLess}\) from Section 5.3. Similarly, the "\(\Pi\)" fractions \(\Pi s\) are computed with an extension of \(\mathit{makeDimLess}\), also from Section 5.3.
In this section we have formalized the notions of dimension function, physical quantity and dimensional (in)dependence, and given an account of the Pi theorem in type theory. We conclude with the remark that, while values of type \(\mathit{Pi}_{4.3}\), \(\mathit{Pi}_{4.4}\) allow one to compute \(\Phi\) from \(f\), the latter is typically an unknown function. In other words: \(\mathit{Pi}_{4.3}\) and \(\mathit{Pi}_{4.4}\) merely encode Buckingham's Pi theorem in type theory and, in applications of DA, the function \(\Phi\) has to be identified via data analysis or through a theory, as discussed in Section 4 and, in greater detail, in section one of (Barenblatt et al., 1996). For example, the constant \(\mathit{Phi}=(2*\pi)^{2}\) in the equation for the period of a simple pendulum \(\tau^{2}=l/g*\mathit{Phi}\) can be estimated empirically or obtained by solving the second-order differential equation that results from applying Newton's second law in the limit of small amplitudes.
## 6 Towards a DSL for DA and dimensionally consistent programming
We review some notions from Section 5, discuss possible generalizations, and put forward a number of desiderata for a DSL for DA and dimensionally consistent programming.
### Dimensions and physical quantities
In the last section we have introduced the concrete data types \(D\) and \(Q\) and encoded the notions of dimensions and of physical quantities in the domain of mechanics and for the _LTE_ (lengths, times and masses) class of units of measurement. This has allowed us to formalize basic notions of DA (among others, Buckingham's Pi theorem) in type theory and to apply dependent types to ensure the dimensional consistency of expressions involving physical quantities. In doing so, we have exploited a number of properties that values of type \(D\) fulfil by definition, for example, that equality is decidable. In Section 5.3 we also suggested that \(D\) together with the binary operation _Times_ forms a group. In this section, we
generalize the approach of Section 5 and discuss suitable requirements for data types that encode the notions of dimensions and of physical quantities. As in Section 5, we do so in the domain of mechanics and for the _LTE_ class of units of measurement.
As done in (Botta et al., 2021) for the notions of functor and monad, we discuss the notion encoded by \(D\) through a type class. This is just for consistency; we could as well use Agda records or modules instead. In Idris, type classes are introduced through the **interface** keyword. For example
```
interface DecEq t where
  decEq : (x1 : t) -> (x2 : t) -> Dec (x1 = x2)
```
explains what it means for a type \(t\) to be in \(\mathit{DecEq}\), the class of types for which propositional equality is decidable. The data type \(\mathit{Dec}\) used in the definition of \(\mathit{DecEq}\) is defined as
```
data Dec : Type -> Type where
  Yes : (prf : prop) -> Dec prop
  No  : (contra : prop -> Void) -> Dec prop
```
A value of type \(\mathit{Dec}\mathit{prop}\) can only be constructed in two ways: either by providing a proof of \(\mathit{prop}\) (a value of type \(\mathit{prop}\)) or by providing a proof of \(\mathit{Not}\mathit{prop}\) (a function that maps values of type \(\mathit{prop}\) to values of the empty type, that is, a contradiction). Thus, a value of type \(\mathit{Dec}\) (\(x_{1}=x_{2}\)) is either a proof of \(x_{1}=x_{2}\) or a proof of \(\mathit{Not}\) (\(x_{1}=x_{2}\)) which is what it means for the equality to be decidable.
We can explain what it means for a type \(D\) to encode the notion of dimension through a \(\mathit{Dim}\) interface. As discussed in Section 5.1, we need dimensional judgments to be decidable. This can be expressed by introducing \(\mathit{Dim}\) as a _refinement_ of \(\mathit{DecEq}\):
```
interface DecEq D => Dim D where
```
Perhaps confusingly, this says that \(\mathit{Dim}\)\(D\) implies \(\mathit{DecEq}\)\(D\) or, in other words, that being in \(\mathit{DecEq}\) is a necessary condition for being in \(\mathit{Dim}\). This condition is certainly not sufficient. We have seen in Section 5.1 that, as a minimum, we need to be able to define dimensionless physical quantities and the 3 fundamental dimensions of the _LTE_ class:
```
DimLess : D;  Length : D;  Time : D;  Mass : D
```
Further, we need the _Times_ and _Over_ combinators
```
Times : D -> D -> D
Over  : D -> D -> D
```
It is time to put forward some axioms. In Section 5.3 we mentioned that \(d\) _Over_ \(d\) has to be equal to \(\mathit{DimLess}\) (for any \(d:D\)) and that \(D\) is a group. The idea is that \(D\) together with the _Times_ operation is the free Abelian group generated by the fundamental dimensions. Thus, writing (\(*\)) for _Times_, (\(/\)) for _Over_, and 1 for _DimLess_ we have
```
isCommutativeTimes        : {d1, d2 : D}     -> d1 * d2 = d2 * d1
isAssociativeTimes        : {d1, d2, d3 : D} -> (d1 * d2) * d3 = d1 * (d2 * d3)
isLeftIdentityDimLess     : {d : D} -> 1 * d = d
isRightIdentityDimLess    : {d : D} -> d * 1 = d
isLeftInverseDimLessOver  : {d : D} -> (1 / d) * d = 1
isRightInverseDimLessOver : {d : D} -> d * (1 / d) = 1
```
In order to derive \(d/d=1\) one also needs _Times_ to associate with _Over_ (Gibbons, 1991):
\[\begin{array}{l}\mbox{\it noPrec}\,:\{\,d_{1},d_{2},d_{3}\,:\,D\}\,\to\,(d_{1} \ast d_{2})/d_{3}=d_{1}\ast(d_{2}/d_{3})\end{array}\]
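The intended semantics of these axioms is that of a free Abelian group: if a dimension is represented by its integer exponents over the fundamental dimensions, _Times_ and _Over_ become componentwise addition and subtraction, and all the laws above hold by construction. The following Python sketch is our own illustration of this representation (it is not part of the Idris development; all names are ours):

```
from dataclasses import dataclass

@dataclass(frozen=True)
class Dim:
    """A dimension, given by its integer exponents over (Length, Time, Mass)."""
    L: int = 0
    T: int = 0
    M: int = 0

    def times(self, other):           # the free Abelian group operation
        return Dim(self.L + other.L, self.T + other.T, self.M + other.M)

    def over(self, other):            # its inverse
        return Dim(self.L - other.L, self.T - other.T, self.M - other.M)

DIMLESS = Dim()                       # the group identity
LENGTH, TIME, MASS = Dim(L=1), Dim(T=1), Dim(M=1)

# The axioms hold definitionally in this representation:
d1, d2, d3 = LENGTH, TIME.over(MASS), Dim(2, -1, 0)
assert d1.times(d2) == d2.times(d1)                       # commutativity
assert d1.times(d2).times(d3) == d1.times(d2.times(d3))   # associativity
assert DIMLESS.times(d1) == d1 == d1.times(DIMLESS)       # identity laws
assert d1.over(d1) == DIMLESS                             # d / d = 1
```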
With _DimLess_, _Times_ and _Over_ one can implement the functions _Pow_ and _ProdPows_ from Section 5.3 generically
\[\begin{array}{l}\mbox{\it Pow}\quad:\{\,D\,:\,\mbox{\it Type}\,\}\,\to\,\mbox{ \it Dim}\,D\,\Rightarrow\,D\,\to\,\mathbb{Z}\,\to\,D\end{array}\]
\[\begin{array}{l}\mathit{Pow}\ d\ n=\mathit{pow}\ d\ (\mathit{integerRec}\ n)\ \mathbf{where}\\ \quad\mathit{pow}:\{n:\mathbb{Z}\}\to D\to\mathit{IntegerRec}\ n\to D\\ \quad\mathit{pow}\ d\ \mathit{IntegerZ}=\mathit{DimLess}\\ \quad\ldots\end{array}\]
Completing such a proof, for example of an elementary equality like the one between the dimensions of energy and work, requires invoking the axioms of _Dim_; see the literate Idris code that generates this document at https://gitlab.pik-potsdam.de/botta/papers. Alas, we know that implementing generic proofs can be awkward!
Thus, a DSL that supports _generic_ DA and dimensionally consistent programming would have to provide a library of proofs of elementary equalities like the one between energy and work. Perhaps more importantly, it would also have to provide proofs of elementary inequalities, for example that _Not_ (_Force = Energy_). In Section 5.1, we could assess this inequality by
\[\begin{array}{l}\mathit{check}_{2}\,:\,\mathit{Not}\,\left(\mathit{Force}= \mathit{Energy}\right)\\ \mathit{check}_{2}\,\mathit{Refl}\,\mathit{impossible}\end{array}\]
Implementing a generic proof solely on the basis that the type of _Force_ is equal to the type of _Energy_ and that this type is in _Dim_ would not be as easy. As a minimum, it would require extending the _Dim_ interface with axioms that guarantee that the generators are not equal.
Besides providing the basic grammar of the \(D\)-language, a data type in _Dim_ also needs to provide a dimension function. There are (at least) two ways of encoding this requirement. One is to require _Dim_ to be equipped with a dimension function method
\[\begin{array}{l}\mathit{df}\,:\,D\,\rightarrow\,(\mathbb{R}_{+}^{3}\, \rightarrow\,\mathbb{R}_{+})\end{array}\]
that fulfills the specifications (5.3) and (5.4):
\[\begin{array}{l}\mathit{dfSpec}_{1}\,:\,\{d\,:\,D\}\,\rightarrow\,\mathit{ df}\,d\,\{1.0,1.0,1.0\}=1.0\\ \mathit{dfSpec}_{2}\,:\,\{d\,:\,D\}\,\rightarrow\,\{L,L^{\prime},T,T^{\prime},M,M^{\prime}\,:\,\mathbb{R}_{+}\}\,\rightarrow\\ \mathit{df}\,d\,\{L\,/\,L^{\prime},T\,/\,T^{\prime},M\,/\,M^{\prime}\}= \mathit{df}\,d\,\{L,T,M\}\,/\,\mathit{df}\,d\,\{L^{\prime},T^{\prime},M^{ \prime}\,\}\end{array}\]
In Section 5.2 we have seen that the dimension function indeed fulfilled these requirements up to floating point accuracy. But, with \(\mathit{dfSpec}_{1}\) and \(\mathit{dfSpec}_{2}\), implementing _Dim_ would have to rely on non-implementable assumptions (if \(\mathbb{R}_{+}\) is just an alias for floating-point numbers) or on a formalization of real numbers. One way to circumvent this difficulty would be to restrict the type of \(\mathit{df}\,d\) to \(\mathbb{Q}_{+}^{3}\,\rightarrow\,\mathbb{Q}_{+}\). This is awkward and conceptually unsatisfactory.
Another way of making _Dim_ support the definition of a dimension function is to require it to provide the integer exponents of the dimension function of Eq. (5.5):
\[\begin{array}{l}\mathit{ds}\,:\,D\,\rightarrow\,\mathit{Vect}\,3\,\mathbb{Z }\end{array}\]
One could then define the dimension function associated with a \(D\) type in _Dim_ on the basis of such exponents, as done in Section 5.2. For example
\[\begin{array}{l}\mathit{df}\,:\,\{D\,:\,\mathit{Type}\,\}\,\rightarrow\,\mathit{Dim}\,D\,\Rightarrow\,D\,\rightarrow\,\mathit{Vect}\,3\,\mathbb{R}_{+}\,\rightarrow\,\mathbb{R}_{+}\\ \mathit{df}\,d\,\mathit{ls}\,=\mathit{foldr}\,\left(\ast\right)\,1.0\,\left(\mathit{zipWith}\,\mathit{pow}\,\mathit{ls}\,\mathit{rds}\right)\,\mathbf{where}\\ \quad\mathit{rds}\,:\,\mathit{Vect}\,3\,\mathbb{R}\\ \quad\mathit{rds}\,=\mathit{map}\,\mathit{fromInteger}\,\left(\mathit{ds}\,d\right)\end{array}\]
We do not follow this idea any further. The discussion above strongly suggests that a DSL for DA and dimensionally consistent programming should provide a concrete implementation of \(D\), like the one discussed in Section 5.1. We argue that this conclusion holds even if we define _Dim_ as a refinement of a _Group_ type class. By a similar token, we argue that a DSL for DA and dimensionally consistent programming should also provide a concrete implementation of the data type \(Q\) for physical quantities.
In the next section, we discuss some more desiderata for such a DSL. As always, implementing these features would necessarily be an iterative process and require a close collaboration between domain experts and computer scientists.
### Towards a DSL: desiderata
In Section 5.2 we have introduced the data type
**data \(Q\,:\,D\,\rightarrow\,\)**_Type_**where**
\(Val\,:\{d\,:\,D\}\,\rightarrow\,\)(\(u\,:\,\)Units) \(\rightarrow\,\mathbb{R}\,\rightarrow\,Q\,d\)
of physical quantities. The idea behind the definition of \(Q\) was very straightforward: annotating \(\mathbb{R}\) values with units of measurement and dimensions allows one to 1) prevent computations that are dimensionally inconsistent (like adding areas to times) and 2) ensure the correctness of computations that involve conversions between units of measurement, like adding meters to centimeters.
#### 6.2.1 Functions
As we have seen in Sections 2 to 4, most computations in mathematical physics involve operations on functions between physical quantities, for example
\(pos\,:\,Q\,\)_Time \(\rightarrow\,Q\,\)Length_
\(pos\,t\,=\,v\,*\,t\)
for a function that describes the position of a body moving at constant speed. Standard arithmetic operations between such functions can be defined straightforwardly by lifting the corresponding operations on \(Q\) values. For example:
\((+)\,:\,\{\,d_{1},d_{2}\,:\,D\}\,\rightarrow\,(Q\,d_{1}\,\rightarrow\,Q\,d_{ 2})\,\rightarrow\,(Q\,d_{1}\,\rightarrow\,Q\,d_{2})\)
\((+)f_{1}f_{2}\,=\,\lambda q\,\Rightarrow\,f_{1}\,\,q\,+f_{2}\,q\)
Other operations, however, require some more care. Let \(pos^{\prime}\) represent the first derivative of \(pos\). What shall be the type of \(pos^{\prime}\)? The discussion at the end of Section 3 suggests that this has to be \(Q\,\)_Time \(\rightarrow\,Q\,\)Velocity_.
A DSL should provide primitives for computing the types of (partial) derivatives of functions of physical quantities and for dimensionally consistent differentiation
\(pos^{\prime\prime}\,:\,\)_type (derivative (derivative \(pos\)))
\(pos^{\prime\prime}\,t\,=\,v\,/\,t\)
support elementary dimensional judgments,
\(check_{13}\,:\,\)dimCodomain \(pos^{\prime\prime}\,=\,\)Acceleration_
\(check_{13}\,=\,\)_Refl_
and reject definitions that are dimensionally inconsistent like for example \(pos^{\prime\prime}\,t\,=\,x\,/\,t\). By the same token, a DSL for dimensionally consistent programming should provide primitives for dimensionally consistent "nabla" operations, integration and probabilistic reasoning.
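To make the typing rule for derivatives concrete, the dimension bookkeeping alone can be sketched in a few lines: the codomain dimension of a derivative is the codomain dimension of the original function divided by its domain dimension. The snippet below is our own language-agnostic illustration in Python (the names are ours and are not part of the proposed DSL):

```
# Dimensions as (Length, Time, Mass) integer exponents; Over subtracts exponents.
def over(d1, d2):
    return tuple(a - b for a, b in zip(d1, d2))

LENGTH, TIME = (1, 0, 0), (0, 1, 0)

def derivative_type(domain, codomain):
    """Dimension bookkeeping for differentiation: [f'] = [codomain] / [domain]."""
    return domain, over(codomain, domain)

# pos : Q Time -> Q Length
dom, cod = derivative_type(TIME, LENGTH)   # pos'  : Q Time -> Q Velocity      (L T^-1)
dom, cod = derivative_type(dom, cod)       # pos'' : Q Time -> Q Acceleration  (L T^-2)
assert cod == (1, -2, 0)
```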
#### 6.2.2 More advanced features: DA driven program derivation and data analysis
Besides supporting program specification and verified programming, dependently typed languages are also powerful tools for type-driven program development (Brady, 2017). For example, the Idris system can be queried interactively and asked to assist filling in holes like ?**h\({}_{\mathbf{0}}\)**. We can try to exploit this system to derive implementations of physical relationships that fulfill the Pi theorem. For example, coming back to the simple pendulum example from Section 5.2, implementing a function that computes the length of the pendulum that yields oscillations of period \(\tau\) given its mass \(m\), the acceleration of gravity \(g\) and the amplitude \(\alpha\) of the oscillations:
\[\mathit{length}:Q\,\mathit{DimLess}\to Q\,\mathit{Acceleration}\to Q\,\mathit{Mass}\to Q\,\mathit{Time}\to Q\,\mathit{Length}\]
As a first step, we assess that _Acceleration_, _Mass_ and _Time_ are independent and that _DimLess_ depends on these three dimensions. This can be done straightforwardly:
\[\mathit{check}_{14} :\,\mathit{AreIndep}\,\left[\mathit{Acceleration},\mathit{Mass}, \mathit{Time}\right]\] \[\mathit{check}_{14} =\mathit{Refl}\] \[\mathit{check}_{15} :\,\mathit{IsDepDimLess}\,\left[\mathit{Acceleration},\mathit{Mass}, \mathit{Time}\right]\] \[\mathit{check}_{15} =\mathit{Evidence}\,\left(1,\left[0,0,0\right]\right)\,\left( \mathit{not}_{1}\mathit{eq0},\mathit{Refl}\right)\]
Then we define _length_\(\alpha\,g\,m\,\tau\) as a product of powers of \(g\), \(m\) and \(\tau\), consistently with the Pi theorem
\[\mathit{length}\,\alpha\,g\,m\,\tau=\mathit{pow}\,g\,\text{?}\mathbf{h_{1}}\, \ast\,\mathit{pow}\,m\,\text{?}\mathbf{h_{2}}\,\ast\,\mathit{pow}\,\tau\, \text{?}\mathbf{h_{3}}\,\ast\,\mathit{Psi}\,\alpha\]
and fill in the holes with exponents that match the type of _length_: 1, 0 and 2. The function \(\mathit{Psi}\,:\,Q\,\mathit{DimLess}\,\to\,Q\,\mathit{DimLess}\) remains undefined, but it is the only part left to deduce from experiments. The type checker will not accept other implementations of _length_, but notice that we are solving the system of equations \(\text{?}\mathbf{h_{1}}=1\), \(-2\ast\text{?}\mathbf{h_{1}}+\text{?}\mathbf{h_{3}}=0\) and \(\text{?}\mathbf{h_{2}}=0\) by hand, with little help from the type checker!
A better approach would be to ask the type checker to solve the system for us, e.g. by searching for suitable values of ?**h\({}_{\mathbf{1}}\)**, ?**h\({}_{\mathbf{2}}\)** and ?**h\({}_{\mathbf{3}}\)** in certain ranges. Perhaps more importantly, we would like the type checker to detect situations in which the system has no solutions and recommend possible decompositions of the arguments of physical relationships into lists of dimensionally independent and dimensionally dependent components: the _as_ and the _bs_ parameters of the Pi theorem. This would be particularly useful for data analysis: it would provide domain experts with alternative, dimensionally consistent views of a given dataset and help practitioners reduce the complexity of data-based studies.
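Such a search amounts to solving a small linear system over the exponents of the fundamental dimensions. The following Python sketch is our own illustration of this step for the pendulum example, using exact rational arithmetic; it is not part of the Idris development and makes no claim about how a type checker would perform the search:

```
from fractions import Fraction

# Dimension exponents over (L, T, M).
g, m, tau = (1, -2, 0), (0, 0, 1), (0, 1, 0)    # acceleration, mass, time
length    = (1, 0, 0)                           # target dimension of the result

# Solve  h1*[g] + h2*[m] + h3*[tau] = [length]  componentwise.
A = [[Fraction(v[i]) for v in (g, m, tau)] for i in range(3)]
b = [Fraction(x) for x in length]

def solve(A, b):
    """Gauss-Jordan elimination on a square system (assumes a solution exists)."""
    n = len(b)
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv], b[col], b[piv] = A[piv], A[col], b[piv], b[col]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col] / A[col][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
                b[r] -= f * b[col]
    return [b[r] / A[r][r] for r in range(n)]

print(solve(A, b))   # exponents 1, 0, 2  ->  length = g * tau^2 * Psi(alpha)
```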
### Physical laws revisited, the covariance principle
In Section 3, we have argued that equations like Newton's second principle, Eq. (4), or the ideal gas law, Eq. (5), summarize empirical facts about measurements (or put forward axioms about such measurements) of physical quantities. Specifically, we have argued that Eq. (4) posits that measurements of \(F\) (force) are equal to the product of measurements of \(m\) (mass) and measurements of \(a\) (acceleration). In Section 5.2 we have formalized the notion of physical quantity and the form of functions between physical quantities that fulfill the covariance principle as expressed by the Pi theorem.
Now we can provide a better interpretation of Eqs. (4) and (5) and, more generally, equations that represent physical laws. The idea is that these equations specify functions between physical quantities. For example, Eq. (4) can be interpreted as
\(F:Q\,Mass\,\to\,Q\,Acceleration\,\to\,Q\,Force\)
\(F\,m\,a=m\,*\,a\)
Because of the definition of multiplication between physical quantities from Section 5.2, measurements of \(F\,m\,a\) are equal to the product of measurements of \(m\) and measurements of \(a\) in any system of units of measurement.
This is equivalent to saying that \(F\) fulfills the covariance principle and suggests how _IsCovariant_ from Section 5.4 shall be defined for generic functions. Informally, a function \(f:Q\,Vect\,m\,ds\,\to\,Q\,d\) fulfills the covariance principle iff the diagram
commutes. Here _mapQ_\(\mu_{u}\) is the function that applies \(\lambda q\Rightarrow\mu\,q\,u\) to the physical quantities of a _QVect_, \(u\) is an arbitrary system of units of measurement and\(f^{\prime}:\,Vect\,m\,\mathbb{R}\,\to\,\mathbb{R}\) is, up to the type, just\(f\).
## 7 Conclusions
Specialization and the pervasive usage of computer-based modelling and simulation in the physical sciences have widened the gap between the languages of mathematical physics and modelling and those of mathematics and functional programming. This gap is a major obstacle to fruitful communication and to interdisciplinary collaborations: computer-based modelling badly needs precise specifications and dependently typed programming languages have enough expressive power to support formulating such specifications. But dependently typed programming languages are not (yet) well equipped for encoding the "grammar of dimensions" which rules the languages of mathematical physics and modelling. Our contribution is a first step towards making FP more suitable for developing applications in these domains.
We have studied the role of equations, laws and dimensions on the basis of established examples from the physical sciences and of seminal works in modelling. We have analyzed the notions of dimension function, physical quantity and units of measurement and we have provided an account of the theory of physical similarity and of Buckingham's Pi theorem from the point of view of computer science and FP. Finally, we have proposed a small DSL that encodes these notions in Idris, supports dimensional judgments, and leverages the type system of the host language to provide tests of dimensional consistency, dependence, independence and to ensure the consistency of expressions involving physical quantities.
The DSL also supports dependently typed formulations of Buckingham's Pi theorem and we have discussed and type checked one such formulation. From this perspective, our work is also a contribution towards understanding relativity principles through formalization. In the physical sciences these principles are well understood and appreciated. They have lead
to important applications in engineering and data science but also to surprising conceptual breakthroughs (Pauli, 2013). But it is not clear how relativity principles could be formulated in the economic sciences, in the biological sciences, and thus also in climate science. We believe that type theory and FP can contribute towards answering this question.
## Acknowledgments
The authors thank Prof. Jeremy Gibbons and Dr. Julian Newman, whose comments have lead to significant improvements of the original manuscript. The work presented in this paper heavily relies on free software, among others on Coq, Idris, Agda, GHC, git, vi, Emacs, LaTeX and on the FreeBSD and Debian GNU/Linux operating systems. It is our pleasure to thank all developers of these excellent products. This is TiPES contribution No 231. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 820970.
## Conflicts of Interest
None.
|
2310.02946 | Local Max-Entropy and Free Energy Principles, Belief Diffusions and
their Singularities | A comprehensive picture of three Bethe-Kikuchi variational principles
including their relationship to belief propagation (BP) algorithms on
hypergraphs is given. The structure of BP equations is generalized to define
continuous-time diffusions, solving localized versions of the max-entropy
principle (A), the variational free energy principle (B), and a less usual
equilibrium free energy principle (C), Legendre dual to A. Both critical points
of Bethe-Kikuchi functionals and stationary beliefs are shown to lie at the
non-linear intersection of two constraint surfaces, enforcing energy
conservation and marginal consistency respectively. The hypersurface of
singular beliefs, across which equilibria become unstable as the constraint
surfaces meet tangentially, is described by polynomial equations in the convex
polytope of consistent beliefs. This polynomial is expressed by a loop series
expansion for graphs of binary variables. | Olivier Peltre | 2023-10-04T16:32:10Z | http://arxiv.org/abs/2310.02946v1 | # Local Max-Entropy and Free Energy Principles, Belief Diffusions and their Singularities
###### Abstract
A comprehensive picture of three Bethe-Kikuchi variational principles including their relationship to belief propagation (BP) algorithms on hypergraphs is given. The structure of BP equations is generalized to define continuous-time diffusions, solving localized versions of the max-entropy principle (A), the variational free energy principle (B), and a less usual equilibrium free energy principle (C), Legendre dual to A. Both critical points of Bethe-Kikuchi functionals and stationary beliefs are shown to lie at the non-linear intersection of two constraint surfaces, enforcing energy conservation and marginal consistency respectively. The hypersurface of singular beliefs, across which equilibria become unstable as the constraint surfaces meet tangentially, is described by polynomial equations in the convex polytope of consistent beliefs. This polynomial is expressed by a loop series expansion for graphs of binary variables.
## I Introduction
Boltzmann-Gibbs principles describe the equilibrium state \(p_{\Omega}=\frac{1}{Z_{\Omega}}\mathrm{e}^{-H_{\Omega}}\) of a statistical system, given its hamiltonian or energy function \(H_{\Omega}:E_{\Omega}\to\mathbb{R}\), as the solution to a collection of variational problems [1, 2]. Each one corresponds to a different set of external constraints (energy or temperature, volume or pressure...) yet they are all related by Legendre transforms on the constraint parameters. However, evaluating the partition function \(Z_{\Omega}\) is impossible for large configuration spaces \(E_{\Omega}\), and the design of efficient algorithms for estimating \(p_{\Omega}\) or a subset of its marginals is a challenge with countless applications.
Given a hypergraph \(K\subseteq\mathcal{P}(\Omega)\) with vertices in the set of variables \(\Omega\), the Bethe-Kikuchi principles A, B and C below yield tractable variational problems, where instead of the global distribution \(p_{\Omega}\), one optimizes over a field of _consistent local beliefs_ \((p_{\mathrm{a}})_{\mathrm{a}\in K}\). Controlling the range of interactions independently of the size of the system, they exploit the asymptotic additivity of _extensive_ thermodynamic functionals. The free energy principle B, also known as the cluster variation method (CVM) [3, 4, 5], was notably shown to have exponential convergence on the Ising model as \(K\) grows coarse [6]; the choice of \(K\) thus offers a compromise between precision and complexity. It was already known that CVM solutions may be found by the Generalized Belief Propagation (GBP) algorithm of Yedidia, Freeman and Weiss [7, 8, 9] (and known for even longer when \(K\) is a graph [11]). This algorithm is however far from optimal, and a comprehensive understanding of this correspondence, including the relationship with a Bethe-Kikuchi max-entropy principle (A) and its Legendre-dual free energy principle (C), was still missing.
We here describe continuous-time diffusion equations on belief networks which smooth out most convergence issues of GBP, recovered as a time-step 1 Euler integrator. We then show that they solve the three different Bethe-Kikuchi variational problems, A, B and C, whose critical points are shown to lie at the intersection of two constraint manifolds, enforcing _energy conservation_ and _belief consistency_ respectively. The former consists of homology classes in a chain complex of local observables \((C_{\bullet},\delta)\), the latter of cohomology classes in the dual cochain complex of local measures \((C_{\bullet}^{*},d)\), but these two are related by the non-linear correspondence mapping energy functions to local Gibbs states. While solutions to the max-entropy principle A are stationary states of _adiabatic_ diffusion algorithms, preserving the mean energy \(\mathbb{E}[H_{\Omega}]\), the variational free energy principle B (CVM) is solved by _isothermal_ diffusions, preserving the inverse temperature \(\beta=1/T\). The equilibrium free energy principle C is dual to the other two, as it optimizes over fibers of gauge-transformed energy functions, and not over consistent beliefs. Its solutions retract onto those of B and may also be found by isothermal diffusions.
From a physical perspective, Bethe-Kikuchi principles could be viewed as mere tools, albeit precise and powerful ones, to approximate the global and exact Boltzmann-Gibbs principles. The possible coexistence of multiple energy values at a fixed temperature is nevertheless reminiscent of counter-intuitive yet physical phenomena, such as supercooling and metastable equilibria. Recently, free energy principles, Bethe-Kikuchi approximations and BP algorithms also
made their way to neuroscience [12, 13, 14, 15], a context where it is not very clear what a global probability distribution \(p_{\Omega}\) ought to describe. The locality of interactions is yet somehow materialized by neuron dendrites and larger scale brain connectivity. Message-passing in search of consensus is an interesting and working metaphor of neuronal behaviours, which might in this case hold more reality than global variational principles. The remarkable success of BP algorithms in decoding and stereovision applications demonstrates the potential of message-passing schemes on graphs to solve difficult problems. However, the success of BP algorithms on loopy networks is often presented as an empirical coincidence, and a deeper understanding of their different regimes could provide the missing theoretical guarantees.
Whatever the perspective, singularities of Bethe-Kikuchi functionals and belief diffusions (in finite size) are an important and interesting feature. They happen when the two constraint manifolds meet tangentially. A stationary state crossing the singular surface will generically become unstable, attracted towards a different sheet of the intersection. This would appear as a discontinuous jump in the convex polytope \(\Gamma_{0}\) of consistent beliefs. We show that the singular stratification \(\tilde{\Sigma}^{1}=\bigsqcup_{k\geq 1}\Sigma^{k}\) is described by polynomial equations in \(\Gamma_{0}\). They compute the corank \(k\) of linearized diffusion, restricted to the subspace of infinitesimal gauge transformations. For graphs of binary variables, this polynomial is written explicitly in terms of a loop series expansion.
### _Related work_
The first occurrence of BP as an approximate bayesian inference scheme dates back to Gallager's 1962 thesis [16] on decoding, although it is often attributed to Pearl's 1982 article on bayesian trees [17], where it is exact. BP has received a lot of attention and new applications since then, although it is still mostly famous in the decoding context; on this, see for instance [18, 19, 20, 21, 22] and [20] for an excellent review. In telecommunications, BP is thus used to reconstruct a parity-check encoded signal by iteratively updating beliefs until all the local constraints are satisfied. Although the associated factor graph has loops, BP works surprisingly well at reconstructing the signal. See [23] for a numerical study of loopy BP and its singularities. As a marginal estimation algorithm, use cases for BP and its generalizations to hypergraphs are quite universal. Other interesting applications for instance include (but are not limited to) computer stereovision [24] and conditional Boltzmann machines [25]. Gaussian versions of BP also exist [26], from which one could for instance recover the well known Kalman filter on a hidden Markov chain [20].
The relationship with Bethe-Kikuchi approximations is covered in the reference works [21, 11, 27] in the case of graphs, yet a true correspondence with the CVM on hypergraphs \(K\subseteq\mathcal{P}(\Omega)\) could not be stated before the GBP algorithm of Yedidia, Freeman and Weiss in 2005 [7], whose work bridged two subjects with a long history. The idea to replace the partition function \(Z_{\Omega}\) by a sum of local terms \(\sum_{a\in K}c_{a}Z_{a}\), where coefficients \(c_{a}\in\mathbb{Z}\) take care of eliminating redundancies, was first introduced by Bethe in 1935, and generalized by Kikuchi in 1951 [4]. The truncated Mobius inversion formula was only recognized by Morita in 1957 [5], laying the CVM on systematic combinatorial foundations. Among recent references, see [28] for a general introduction to the CVM. The convex regions of Bethe-Kikuchi free energies and loopy BP stability are studied in [29, 30], while very interesting loop series expansions may be found in [31] and [32]. For applications of Bethe-Kikuchi free energies to neuroscience and active bayesian learning, see also [12, 13, 14].
The unifying notion of graphical model describes Markov random fields by their factorization properties, which are in general stronger than their conditional independence properties obtained by the Hammersley-Clifford theorem [33]. This work sheds a different light from the usual probabilistic interpretation, so as to make the most of the local structure of observation. The mathematical constructions below thus mostly borrow from algebraic topology and combinatorics. The reference on combinatorics is Rota [34], and the general construction for the cochain complex \((C_{\bullet}^{*},d)\) dates back to Grothendieck and Verdier [35]. It has been given a very nice description by Moerdijk in his short book [36]. When specializing the theory to localized statistical systems, one quickly arrives at the so-called _pseudo-marginal extension problem_[37, 38, 39] whose solution is closely related to the _interaction decomposition theorem_[37, 40]. This fundamental result yields direct sum decompositions for the functor of local observables, used in the proof of theorem 3 below.
### _Methods and Results_
This work brings together concepts and methods from algebraic topology, combinatorics, statistical physics and information theory. It should particularly interest users of belief propagation algorithms, although we hope it will also motivate a broader use of Bethe-Kikuchi information functionals beyond their proven decoding applications. It is intended as a comprehensive but high-level reference on the subject for a pluridisciplinary audience. We expect some readers might lack specific vocabulary from homological algebra, although we do not believe it necessary for understanding the correspondence theorems. We provide specific references to theory and applications where needed, and longer proofs are laid in appendix to avoid burdening the main text for readers mostly interested by the results.
The main object of theory here consists in what we call
the _combinatorial chain complex_\((C_{\bullet},\delta,\zeta)\). This is a graded vector space \(\bigoplus_{r\geq 0}C_{r}\), a codifferential \(\delta:C_{r}\to C_{r-1}\), and a combinatorial automorphism \(\zeta:C_{r}\to C_{r}\), attached to any hypergraph \(K\) with vertices in the set of variables \(\Omega\). The operators \(\zeta\) and \(\delta\) acting on \(C_{0}\) and \(C_{1}\) generate belief propagation equations, and an efficient implementation is made available at github.com/opeltre/topos. Although deeper numerical experiments are not in the scope of this article, this library was used to produce the level curves of a Bethe-Kikuchi free energy in figure 6 and the benchmarks of figure 4.
Initial motivations were to arrive at a concise factorization of GBP algorithms, and at a rigorous proof of the correspondence between GBP fixed points and CVM solutions. Although this correspondence was described earlier [8, 9, 10], its relationship to a Bethe-Kikuchi max-entropy principle A and its dual (equilibrium) free energy principle C was still missing. The polynomial description of singular sets \(\Sigma^{k}\subseteq\Gamma_{0}\) is also new, and their explicit formulas on binary graphs yield a very satisfying and most expected relationship with the subject of loop series expansions [30, 31, 32].
The article is structured as follows.
* Graphical models and generalized belief propagation defines Gibbsian ensembles as positive Graphical Models (GMs). The factorization property of the probability density translates as a linear spanning property of its log-likelihood, called a \(K\)-local observable. GBP equations are then provided along with classical examples.
* Max-entropy and free energy principles briefly reviews Boltzmann-Gibbs and Bethe-Kikuchi principles so as to formulate the variational problems A, B and C, localized versions of the fundamental principles defining thermodynamic equilibrium in statistical physics.
* Local statistical systems is the core technical section, where we define the chain complex \((C_{\bullet},\delta)\) of local observables, its dual complex \((C_{\bullet}^{*},d)\) of local densities, and the combinatorial automorphisms \(\zeta\) and \(\mu=\zeta^{-1}\) acting on all the degrees of \(C_{\bullet}\). These higher degree combinatorics of \(C_{r}\) for \(r>1\) were described previously in [9, chap. 3]. We here propose an integral notation for the zeta transform, making the analogy with geometry more intuitive, in the spirit of Rota [34].
* Belief diffusions uses the codifferential \(\delta\) and its conjugate under \(\zeta\) to generate diffusion equations on the complex \((C_{\bullet},\delta,\zeta)\). We explain under which conditions a flux functional \(\Phi:C_{0}\to C_{1}\) yields solutions to problems A, B and C as stationary states of the diffusion \(\frac{dv}{dt}=\delta\Phi(v)\) on \(C_{0}\). Its purpose is to explore a homology class \([v]\) (_energy conservation_) until meeting the preimage \(\mathcal{M}^{\beta}\subseteq C_{0}\) of cohomology classes under the Gibbs state map at inverse temperature \(\beta\) (_marginal consistency_).
* Message-passing equilibria states the rigorous correspondence between critical points and stationary beliefs with theorems A, B and C. Singular subsets \(\Sigma^{k}\subseteq\Gamma_{0}\) are defined by computing the dimension of the intersection of the two tangent constraint manifolds, almost everywhere transverse. The fact that it may be described by polynomial equations allows for numerical exploration of singularities, and motivates a deeper study relating them to the topology of \(K\).
### Notations
Let \(\Omega\) denote a finite set of indices, which we may call the _base space_. We view the partial order of _regions_ \((\mathcal{P}(\Omega),\subseteq)\) as a category with a unique arrow \(b\to a\) whenever \(b\subseteq a\). We write \(b\subset a\) only when \(b\) is a strict subset of \(a\), and use alphabetical order consistently in notations where possible. The opposite category, denoted \(\mathcal{P}(\Omega)^{op}\), has arrows \(a\to b\) for \(b\subseteq a\).
A free sheaf of _microstates_\(E:\mathcal{P}(\Omega)^{op}\to\mathbf{Set}\) will then map every region \(a\subseteq\Omega\) to a finite local configuration space \(E_{a}=\prod_{i\in a}E_{i}\). In other words, the sections of \(E_{a}\) are vertex colourings \(x_{a}=(x_{i})_{i\in a}\) of a subset \(a\subseteq\Omega\), with colours \(x_{i}\in E_{i}\) for \(i\in a\) (called local microstates or configurations in physical terminology). As a contravariant functor, \(E\) maps every inclusion \(b\subset a\) to the canonical restriction \(\pi^{a\to b}\), projecting \(E_{a}\) onto \(E_{b}\). Given a local section \(x_{a}\in E_{a}\), we write \(x_{a|b}=\pi^{a\to b}(x_{a})\).
For every region \(a\subseteq\Omega\), we will write \(\mathbb{R}^{E_{a}}\) for the finite dimensional algebra of real observables on \(E_{a}\), write \(\mathbb{R}^{E_{a}*}\) for its linear dual, and write \(\tilde{\Delta}_{a}=\operatorname{Prob}(E_{a})\subset\mathbb{R}^{E_{a}*}\) for the topological simplex of probability measures on \(E_{a}\), also called _states_ of the algebra \(\mathbb{R}^{E_{a}}\). The open simplex of _positive_ states will be denoted \(\Delta_{a}\subset\tilde{\Delta}_{a}\).
## II Graphical models and generalized belief propagation
### Graphical Models
In this paper, our main object of study is a special class of Markov Random Fields (MRFs), called Gibbsian ensembles in [33] although they are now more commonly called Graphical Models (GMs) in the decoding and machine learning contexts. Given a set of vertices \(\Omega\) and a random colouring \(x_{\Omega}=(x_{i})_{i\in\Omega}\), a GM essentially captures the locality of interactions by a hypergraph \(K\subseteq\mathcal{P}(\Omega)\) over which the density of \(x_{\Omega}\) should factorize. This property is in general stronger than the Markov properties obtained by the Hammersley-Clifford theorem (as conditional independence of separated regions only ensures factorization over cliques [33]).
**Definition 1**.: _Given \(K\subseteq\mathcal{P}(\Omega)\) and a collection of factors \(f_{a}:E_{a}\to\mathbb{R}\) for \(a\in K\), the graphical model parameterized by \((f_{a})_{a\in K}\) is the probability distribution_
\[p_{\Omega}(x_{\Omega})=\frac{1}{Z_{\Omega}}\prod_{a\in K}f_{a}(x_{\Omega|a}), \tag{1}\]
_where \(Z_{\Omega}=\sum_{E_{\Omega}}\prod_{a}f_{a}\) is an (unknown) integral over \(E_{\Omega}\), called the partition function._
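For a toy system, Definition 1 can be evaluated by brute force, which is useful as a ground truth when testing approximate schemes. The sketch below is our own Python illustration (the model, its factors and all names are made up for the example):

```
from itertools import product
from math import prod

E = {0: (0, 1), 1: (0, 1), 2: (0, 1)}            # microstates E_i for each vertex
K = [(0, 1), (1, 2)]                             # hypergraph: two pairwise regions

# Arbitrary positive factors f_a : E_a -> R, here favouring agreement on each edge.
factors = {a: (lambda xa: 2.0 if xa[0] == xa[1] else 1.0) for a in K}

def joint(K, factors):
    """Brute-force graphical model: p(x) = (1/Z) * prod_a f_a(x|a)  (Eq. 1)."""
    states = list(product(*(E[i] for i in sorted(E))))
    weight = {x: prod(factors[a](tuple(x[i] for i in a)) for a in K) for x in states}
    Z = sum(weight.values())
    return {x: w / Z for x, w in weight.items()}, Z

p, Z = joint(K, factors)
assert abs(sum(p.values()) - 1.0) < 1e-12        # p is a probability distribution
```

The exponential cost of enumerating \(E_{\Omega}\) is precisely what the local methods of the following sections avoid.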
It is common to represent GMs by their _factor graph_ (figure 1.b), a bipartite graph where variable nodes \(i\in\Omega\) carry variables \(x_{i}\in E_{i}\), and factor nodes \(a\in K\) carry local functions \(f_{a}:E_{a}\to\mathbb{R}\). Factors \(a\in K\) are then linked to all nodes \(i\in a\subseteq\Omega\). However, this graph structure should not be confused with the partial order structure we use for message-passing (figures 1.c and 1.d).
In statistical physics, where the notion of graphical model originates from, it is more common to write \(p_{\Omega}\) as a normalized exponential density \(\frac{1}{Z_{\Omega}}\operatorname{e}^{-\beta H_{\Omega}}\) called the _Gibbs density_. The function \(H_{\Omega}:E_{\Omega}\to\mathbb{R}\) is called _hamiltonian_ or _total energy_ and the scalar parameter \(\beta=1/T\) is called _inverse temperature_. This variable energy scale is often set to 1 and omitted. We recommend references [1] and [2] for deeper thermodynamic background.
Assuming positivity of \(p_{\Omega}\), the factorization property of a graphical model \(p_{\Omega}\) translates to a linear spanning property on the global hamiltonian \(H_{\Omega}=\sum_{a}-\ln f_{a}\).
**Definition 2**.: _We say that a global observable \(H_{\Omega}:E_{\Omega}\to\mathbb{R}\) is \(K\)-local with respect to the hypergraph \(K\subseteq\mathcal{P}(\Omega)\), when there exists a family of interaction potentials \(h_{a}:E_{a}\to\mathbb{R}\) such that for all \(x_{\Omega}\in E_{\Omega}\),_
\[H_{\Omega}(x_{\Omega})=\sum_{a\in K}h_{a}(x_{\Omega|a}). \tag{2}\]
_The Gibbs state of the hamiltonian \(H_{\Omega}\) at inverse temperature \(\beta>0\) is the positive probability distribution_
\[p_{\Omega}(x_{\Omega})=\frac{1}{Z_{\Omega}}\operatorname{e}^{-\beta H_{\Omega }(x_{\Omega})}, \tag{3}\]
_and we denote the surjective Lie group morphism from global observables to Gibbs states by_
\[\rho_{\Omega}^{\beta}:\mathbb{R}^{E_{\Omega}}\to\Delta_{\Omega}. \tag{4}\]
_The \(K\)-local Gibbsian ensemble is the image of \(K\)-local observables under \(\rho_{\Omega}^{\beta}\) (for any \(\beta\))._
The notions of Gibbsian ensembles [33] and graphical models are only equivalent up to a positivity assumption on \(p_{\Omega}\). We always assume positivity of \(p_{\Omega}\), although this is not always the case in decoding applications.
Let \(C_{0}(K)=\prod_{a\in K}\mathbb{R}^{E_{a}}\) denote the space of interaction potentials, and write \(C_{0}=C_{0}(K)\) by assuming \(K\subseteq\mathcal{P}(\Omega)\) to be fixed. In general, the map from \((h_{a})\in C_{0}\) to the global hamiltonian \(H_{\Omega}\in\mathbb{R}^{E_{\Omega}}\) has a low-dimensional source space, but fails to be injective. In section IV we construct a chain complex \((C_{\bullet},\delta)\), i.e. a graded vector space \(C_{\bullet}=\bigoplus_{r=0}^{n}C_{r}\) and a degree -1 square-null operator \(\delta\),
\[\mathbb{R}^{E_{\Omega}}\longleftarrow C_{0}\overset{\delta}{\longleftarrow}C_{1}\overset{\delta}{\longleftarrow}\cdots\overset{\delta}{\longleftarrow}C_{n}\]
\[M_{\mathrm{a}\to\mathrm{c}}^{(t+1)}(x_{\mathrm{c}})=M_{\mathrm{a}\to\mathrm{c}}^{(t)}(x_{\mathrm{c}})\cdot\frac{\sum_{x_{\mathrm{a}|\mathrm{c}}=x_{\mathrm{c}}}q_{\mathrm{a}}^{(t)}(x_{\mathrm{a}})}{q_{\mathrm{c}}^{(t)}(x_{\mathrm{c}})} \tag{7}\]
_define a sequence \((q,M)\in(\Delta_{0}\times C_{1})^{\mathbf{N}}\)._
The evolution of \(q^{(t)}\) only depends on the geometric increment of messages \(M^{(t+1)}/M^{(t)}\). Setting \(M^{(0)}=1\), GBP equations therefore also define a dynamical system \(\mathrm{GBP}:\Delta_{0}\to\Delta_{0}\), such that \(q^{(n)}=\mathrm{GBP}^{n}(q^{(0)})\). It is clear from (7) that _consistency of beliefs_ \(q\in\Gamma_{0}\) is equivalent to _stationarity of messages_. However, it is not obvious that _stationarity of beliefs_ implies stationarity of messages, and this explains why GBP is usually viewed as a dynamical system on messages.
We showed that stationarity of beliefs does imply consistency and stationarity of messages [8, 9], and called this property _faithfulness of the GBP diffusion flux_ (see definition 13 and proposition 30 below). This property is crucial to retrieve the consistent polytope \(\Gamma_{0}\subseteq\Delta_{0}\) as stationary states of the dynamical system on beliefs, and thus properly draw the analogy between GBP and diffusion.
### _Examples_
#### II-C1 Markov chains and trees
A length-\(n\)_Markov chain_ on \(\Omega=\{0,\ldots,n\}\) is local with respect to the 1-graph \(K=K_{0}\sqcup K_{1}\subseteq\mathcal{P}(\Omega)\) linking every vertex \(0\leq i<n\) to its successor \(i+1\). Individual states of the Markov chain are denoted \(x_{i}\in E_{i}\) for \(i\in\Omega\). This data extends to a contravariant functor \(E:(K,\subseteq)^{op}\to\mathbf{Set}\) mapping edges \(ij\in K\) to pairwise joint states \(E_{ij}=E_{i}\times E_{j}\), and with canonical projections as arrows.
Given an input prior \(f_{0}(x_{0}):=\mathbb{P}(x_{0})\) and Markov transition kernels \(f_{j,j-1}(x_{j},x_{j-1}):=\mathbb{P}(x_{j}|x_{j-1})\) for \(1\leq j\leq n\), set other factors and messages to \(1\). BP and GBP then coincide and exactly compute, in \(n\) steps, the output and hidden posteriors \(q_{j}(x_{j})=\mathbb{P}(x_{j})\) for \(j\leq n\) as:
\[\mathbb{P}(x_{j})=\sum_{x_{j-1}\in E_{j-1}}\mathbb{P}(x_{j}|x_{j-1})\,\mathbb{ P}(x_{j-1}). \tag{8}\]
In this particular case, the sum-product update rule thus simply consists in a recurrence of matrix-vector multiplications for \(M_{j-1,j\to j}(x_{j})=q_{j}(x_{j})=\mathbb{P}(x_{j})\). The integrand of (8) also yields the exact pairwise probability \(\mathbb{P}(x_{j-1},x_{j})\) by the Bayes rule.

Figure 1: Different representations of \(K\subseteq\mathcal{P}(\Omega)\). Here \(K\) is a 2-dimensional simplicial complex (a). The factor graph (b) is constructed by joining every factor node \(\mathrm{a}\in K\) (squares) to its variables \(i\in\mathrm{a}\) (dots). The region graph (c) instead inserts a directed edge between any two region nodes \(\mathrm{a},\mathrm{b}\in K\) such that \(\mathrm{a}\supseteq\mathrm{b}\). The partial order structure is equivalently represented by (d), where height gives a better impression of the ordering. In (c) and (d), non-primitive arrows (i.e. having a non-trivial factorization) are in orange, and the terminal region \(\varnothing\) is not represented.
The situation is very much similar for _Markov trees_, i.e. when \(K\) is an acyclic graph. In this case, choosing any leaf node \(x_{0}\) as root, BP recursively computes the bayesian posteriors \(\mathbb{P}(x_{j})\) exactly, by integrating the product \(\mathbb{P}(x_{j}|x_{i})\)\(\mathbb{P}(x_{i})\) over the parent state \(x_{i}\in E_{i}\) when computing the message \(M_{ij\to j}(x_{j})\) by (7). This is the content of Pearl's original article on Bayesian trees [17].
A famous particular class of Markov trees consists of hidden Markov chain models, for instance used by Friston in neuroscience [12], and of which the Kalman filter is a continuous-valued gaussian version [20].
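As a concrete check, the forward recursion (8) on a chain is just repeated matrix-vector multiplication. The following Python sketch is our own illustration, with a randomly generated transition kernel (it does not use the paper's code):

```
import numpy as np

rng = np.random.default_rng(0)

n, k = 5, 3                                   # chain length, number of states per node
prior = np.full(k, 1.0 / k)                   # P(x_0)
kernels = []                                  # P(x_j | x_{j-1}) as row-stochastic matrices
for _ in range(n):
    T = rng.random((k, k))
    kernels.append(T / T.sum(axis=1, keepdims=True))

# Forward recursion (8): P(x_j) = sum_{x_{j-1}} P(x_j | x_{j-1}) P(x_{j-1}).
marginal = prior
for T in kernels:
    marginal = T.T @ marginal                 # matrix-vector product, as in BP on a chain
    assert abs(marginal.sum() - 1.0) < 1e-12
```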
#### II-C2 Spin glasses and Hopfield networks
A _spin glass_ is a \(1\)-graph \(K\subseteq\mathcal{P}(\Omega)\) whose vertices carry a binary variable \(x_{i}\in\{-1,+1\}=E_{i}\). In this case, any \(K\)-local hamiltonian \(H_{\Omega}:E_{\Omega}\to\mathbb{R}\) may be uniquely decomposed as:
\[H_{\Omega}(x_{\Omega})=h_{\varnothing}+\sum_{i\in K_{0}}b_{i}x_{i}+\sum_{ij\in K _{1}}w_{ij}x_{i}x_{j} \tag{9}\]
Uniqueness of this decomposition is a consequence of the _interaction decomposition theorem_, see [37] and [40]. One often drops the constant factor \(h_{\varnothing}\) for its irrelevance in the Gibbs state definition.
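In code, the decomposition (9) is a sum of unary and pairwise terms. Below is a short sketch (our own toy instance on a triangle; the biases and weights are arbitrary) which could be plugged into the brute-force routine sketched earlier:

```
import itertools

# A small spin glass on a triangle: vertices carry x_i in {-1, +1}.
biases  = {0: 0.1, 1: -0.2, 2: 0.0}                       # b_i
weights = {(0, 1): 1.0, (1, 2): -0.5, (0, 2): 0.3}        # w_ij

def energy(x):
    """K-local hamiltonian (9): H(x) = sum_i b_i x_i + sum_ij w_ij x_i x_j."""
    return (sum(b * x[i] for i, b in biases.items())
            + sum(w * x[i] * x[j] for (i, j), w in weights.items()))

# Enumerate all 2^3 spin configurations and their energies.
configs = list(itertools.product((-1, +1), repeat=3))
energies = {x: energy(x) for x in configs}
```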
Spin glasses are formally equivalent to the Ising model of ferromagnetism, the only conceptual differences residing in a random sampling of _biases_\(b_{i}\sim\mathbb{P}(b_{i})\) and _weights_\(w_{ij}\sim\mathbb{P}(w_{ij})\), and in allowing for other graphs than cubic lattices which describe homogeneous crystals. Bethe produced his now famous combinatorial approximation scheme in 1935 [3] to estimate the Ising model free energy.
BP works surprisingly well at estimating the true likelihoods \(\mathbb{P}(x_{j})\) and \(\mathbb{P}(x_{i},x_{j})\) by consistent beliefs \(q_{i}(x_{i})\) and \(q_{ij}(x_{i},x_{j})\) even on loopy graphs. It may however show convergence issues and increased errors as the temperature decreases when \(K\) has loops, while the number of stationary states grows quickly with the number of loops. In the cyclic case, each belief aggregates a product of messages \(\prod_{i}M_{ij\to j}(x_{j})\) according to (6) before being summed over in the computation of messages (7), and therefore does not reduce to a simple matrix-vector multiplication. See [18, 19, 23] for more information on loopy BP behaviour.
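The loopy behaviour is easy to reproduce with a standard pairwise sum-product scheme, in which beliefs aggregate node factors with incoming messages and messages are updated by partial summation, consistently with (7). The sketch below is our own minimal Python illustration on the toy 3-cycle above (parallel updates, no damping or scheduling; it does not use the paper's topos library):

```
import numpy as np

beta = 0.5
spins = np.array([-1.0, +1.0])
edges = [(0, 1), (1, 2), (0, 2)]                       # a 3-cycle (loopy graph)
b = {0: 0.1, 1: -0.2, 2: 0.0}
w = {(0, 1): 1.0, (1, 2): -0.5, (0, 2): 0.3}

phi = {i: np.exp(-beta * b[i] * spins) for i in b}                       # node factors
psi = {e: np.exp(-beta * w[e] * np.outer(spins, spins)) for e in edges}  # edge factors

nbrs = {i: [j for e in edges for j in e if i in e and j != i] for i in b}
msg = {(i, j): np.ones(2) for e in edges for i, j in (e, e[::-1])}       # M_{i -> j}

for _ in range(50):                                    # parallel sum-product updates
    new = {}
    for (i, j) in msg:
        prod_in = phi[i].copy()                        # node factor times incoming messages
        for k in nbrs[i]:
            if k != j:
                prod_in = prod_in * msg[(k, i)]
        e = (i, j) if (i, j) in psi else (j, i)
        M = psi[e] if e == (i, j) else psi[e].T        # orient psi[e][x_i, x_j]
        new[(i, j)] = prod_in @ M                      # sum over x_i
        new[(i, j)] /= new[(i, j)].sum()               # normalize for stability
    msg = new

beliefs = {i: phi[i] * np.prod([msg[(k, i)] for k in nbrs[i]], axis=0) for i in b}
beliefs = {i: q / q.sum() for i, q in beliefs.items()}
```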
Spin glasses are also called Boltzmann machines [41] in the context of generative machine learning. It is well known that bipartite spin glasses, also called restricted Boltzmann machines (RBMs), are equivalent to the Hopfield model of associative memory whose phase transitions [42, 43] have received a lot of attention. These phase diagrams are generally obtained by the replica method of Javanmard and Montanari [44] in the thermodynamic limit. We believe a broader understanding of such phase transitions in neural networks would be highly beneficial for artificial intelligence, and a bayesian framework seems more suitable for such a program than one-way parameterized functions and feed-forward neural networks.
#### II-C3 Higher-order relations
When \(K\subseteq\mathcal{P}(\Omega)\) is a general hypergraph, we may write \(K=\bigsqcup_{r}K_{r}\) by grading hyperedges \(\mathrm{a}=\{i_{0},\ldots,i_{r}\}\) according to their dimension \(r\), and write \(K_{-1}=\{\varnothing\}\) when \(K\) contains the empty region (which we usually assume along with the \(\cap\)-closure of \(K\)). We call _dimension_ of \(K\) the greatest \(n\) such that \(K_{n}\neq\varnothing\), and call \(K\) an \(n\)-_graph_.
Working with a coarse \(n\)-graph \(K\) with \(n\geq 2\) is useful even when the hamiltonian is local with respect to a \(1\)-subgraph \(K^{\prime}\subseteq K\). This is done by simply extending \(K^{\prime}\)-local factors \((f_{\mathrm{b}})_{\mathrm{b}\in K^{\prime}}\) by \(f_{\mathrm{a}}=1\) for all \(\mathrm{a}\in K\smallsetminus K^{\prime}\). The dimension of the coarser hypergraph \(K\) used for message-passing thus provides greater precision at the cost of local complexity, which is the cost of partial integrals on \(E_{\mathrm{a}}\) for maximal \(\mathrm{a}\in K\). From a physical perspective, this corresponds to applying Kikuchi's Cluster Variation Method (CVM) [4, 5] to a spin glass hamiltonian \(H_{\Omega}\), given by (9).
There is also greater opportunity in considering higher order interactions \(h_{\mathrm{a}}(x_{\mathrm{a}})=h_{i_{0},\ldots,i_{r}}(x_{i_{0}},\ldots,x_{i_{r}})\) for \(r\geq 2\) to capture more subtle relations. Taking \(r=2\) with binary variables, the interaction terms \(w_{ijk}x_{i}x_{j}x_{k}\) mimic attention in transformer networks. In a continuous-valued case, where \(x_{i}\in\mathbb{R}^{3}\) for instance describes atomic positions in a molecule or crystal, energy could not be made dependent on bond angles without third-order interactions \(h_{ijk}(x_{i},x_{j},x_{k})\) [45]. Continuous variables are outside the scope of this article, but we refer the reader to [46, 47, 26] for the Gaussian version of BP algorithms. See also [48] on third-order Boltzmann machines and [49, 50] for more recent higher-order attention network architectures.
## III Entropy and free energy principles
### Boltzmann-Gibbs variational principles
In classical thermodynamics, a statistical system \(\Omega\) is described by a configuration space \(E\) (assumed finite in the following) and a _hamiltonian_\(H:E\to\mathbb{R}\) measuring the energy level of each configuration. Thermal equilibrium with a reservoir at _inverse temperature_\(\beta=1/T\) defines the so-called _Gibbs state_\(p^{\beta}\in\mathrm{Prob}(E)\) by renormalization of the Boltzmann-Gibbs density \(\mathrm{e}^{-\beta H}\):
\[p^{\beta}(x)=\frac{1}{Z}\,\mathrm{e}^{-\beta H(x)}\quad\text{where}\quad Z=\sum _{x\in E}\mathrm{e}^{-\beta H(x)}\,. \tag{10}\]
Two different kinds of variational principles characterise the equilibrium state \(p^{\beta}\):
* the _max-entropy principle_ asserts that the Shannon entropy \(S(p)=-\sum_{E}p\ln(p)\) is maximal under an internal energy constraint \(\mathbb{E}_{p}[H]=\mathcal{U}\) at equilibrium;
* the _free energy principle_ asserts that the variational free energy \(\mathcal{F}^{\beta}(p,H)=\operatorname{\mathbb{E}}_{p}[H]-TS(p)\) is minimal under the temperature constraint \(T=1/\beta\) at equilibrium; it is then equal to the free energy \(F^{\beta}(H)=-\frac{1}{\beta}\ln\sum_{E}\operatorname{e}^{-\beta H}\).
Although equation (10) gives the solution to both optimisation problems, computing \(Z\) naively is usually impossible, as the size of \(E\) grows exponentially with the number of microscopic variables in interaction.
The Legendre duality relating Shannon entropy and free energies is illustrated by theorems 1 and 2 below, implying the equivalence between entropy and free energy Boltzmann-Gibbs principles [1, 2]. Properties of thermodynamic functionals may be abstracted from the global notion of physical equilibrium, and stated for every region a \(\subseteq\Omega\) as below. Local functionals will then be recombined following the Bethe-Kikuchi approximation scheme in the next subsection.
**Definition 5**.: _For every \(\operatorname{a}\subseteq\Omega\) and \(\beta>0\), define:_
* _the Shannon entropy_ \(S_{\operatorname{a}}:\operatorname{\Delta_{\operatorname{a}}}\to\mathbb{R}\) _by_ \[S_{\operatorname{a}}(p_{\operatorname{a}})=-\sum p_{\operatorname{a}}\ln(p_{ \operatorname{a}}),\] (11)
* _the_ variational free energy \(\mathcal{F}^{\beta}_{\operatorname{a}}:\operatorname{\Delta_{\operatorname{a} }}\times\mathbb{R}^{E_{\operatorname{a}}}\to\mathbb{R}\) _by_ \[\mathcal{F}^{\beta}_{\operatorname{a}}(p_{\operatorname{a}},H_{ \operatorname{a}})=\operatorname{\mathbb{E}}_{p_{\operatorname{a}}}[H_{ \operatorname{a}}]-\frac{1}{\beta}S_{\operatorname{a}}(p_{\operatorname{a}}),\] (12)
* _the_ free energy \(F^{\beta}_{\operatorname{a}}:\mathbb{R}^{E_{\operatorname{a}}}\to\mathbb{R}\) _by_ \[F^{\beta}_{\operatorname{a}}(H_{\operatorname{a}})=-\frac{1}{\beta}\ln\sum \operatorname{e}^{-\beta H_{\operatorname{a}}}.\] (13)
Legendre transforms may be carried out with respect to local observables and beliefs directly, instead of the usual one-dimensional temperature or energy parameters. To do so, one should describe tangent fibers of \(\Delta_{\mathrm{a}}\) as \(\operatorname{T}_{p_{\mathrm{a}}}\Delta_{\mathrm{a}}\simeq\mathbb{R}^{E_{\mathrm{a}}}\bmod\mathbb{R}\), so that additive energy constants span Lagrange multipliers for the normalisation constraint \(\langle p_{\mathrm{a}},1_{\mathrm{a}}\rangle=1\). For a more detailed study of thermodynamic functionals, we refer the reader to [9, chap. 4] and [2].
**Theorem 1**.: _Under the mean energy constraint \(\langle p_{\operatorname{a}},H_{\operatorname{a}}\rangle=\mathcal{U}\), the maximum of Shannon entropy is reached on a Gibbs state \(p_{\operatorname{a}}^{*}=\frac{1}{Z_{\operatorname{a}}}\operatorname{e}^{- \beta H_{\operatorname{a}}}\),_
\[S_{\operatorname{a}}(p_{\operatorname{a}}^{*})=\max_{\begin{subarray}{c}p_{ \operatorname{a}}\in\operatorname{\Delta_{\operatorname{a}}}\\ \langle p_{\operatorname{a}},H_{\operatorname{a}}\rangle=\mathcal{U}\end{subarray}}S _{\operatorname{a}}(p_{\operatorname{a}}), \tag{14}\]
_for some univocal value of the Lagrange multiplier \(\beta\in\mathbb{R}\), called inverse temperature._
**Theorem 2**.: _Under the temperature constraint \(T=1/\beta\), given a hamiltonian \(H_{\operatorname{a}}\in\mathbb{R}^{E_{\operatorname{a}}}\), the minimum of variational free energy is reached on the Gibbs state \(p_{\operatorname{a}}^{*}=\frac{1}{Z_{\operatorname{a}}}\operatorname{e}^{- \beta H_{\operatorname{a}}}\),_
\[\mathcal{F}^{\beta}_{\operatorname{a}}(p_{\operatorname{a}}^{*},H_{ \operatorname{a}})=\min_{p_{\operatorname{a}}\in\operatorname{\Delta_{ \operatorname{a}}}}\mathcal{F}^{\beta}_{\operatorname{a}}(p_{\operatorname{a}},H_{\operatorname{a}}). \tag{15}\]
_It moreover coincides with the equilibrium free energy \(F^{\beta}_{\operatorname{a}}(H_{\operatorname{a}})\),_
\[F^{\beta}_{\operatorname{a}}(H_{\operatorname{a}})=\min_{p_{ \operatorname{a}}\in\operatorname{\Delta_{\operatorname{a}}}}\mathcal{F}^{ \beta}_{\operatorname{a}}(p_{\operatorname{a}},H_{\operatorname{a}}) \tag{16}\]
Although stated for every region \(\operatorname{a}\subseteq\Omega\), theorem 2 only makes sense physically when applied to the global region \(\Omega\). Indeed, local free energy principles (15) on regions \(\operatorname{a}\subseteq\Omega\) totally neglect interactions with their surroundings. The local beliefs \((p_{\operatorname{a}})_{\operatorname{a}\in K}\) they define thus have very little chance of being consistent.
### Bethe-Kikuchi variational principles
We now proceed to define localised versions of the max-entropy and free energy variational principles 1 and 2, attached to any hypergraph \(K\subseteq\mathcal{P}(\Omega)\). We recall that \(C_{0}\) stands for the space of interaction potentials \(\prod_{\operatorname{a}\in K}\mathbb{R}^{E_{\operatorname{a}}}\), and that \(\Gamma_{0}\subseteq\Delta_{0}\subseteq C_{0}^{*}\) denotes the convex polytope of consistent beliefs (definition 3). Bethe-Kikuchi principles will characterise finite sets of consistent local beliefs \(\{p^{1},\dots,p^{m}\}\subseteq\Gamma_{0}\), in contrast with their global counterparts defining the true global Gibbs state \(p_{\Omega}\in\operatorname{\Delta_{\Omega}}\).
_Extensive_ thermodynamic functionals such as entropy and free energies satisfy an _asymptotic additivity_ property, e.g. the entropy of a large piece of matter is the sum of entropies associated to any division into large enough constituents. Bethe-Kikuchi approximations thus consist of computing only local terms (that is, restricted to a tractable number of variables) before _cumulating_ them in a large weighted sum over regions \(a\in K\) where integral coefficients \(c_{\operatorname{a}}\in\mathbb{Z}\) take care of eliminating redundancies. The coefficients \(c_{\operatorname{a}}\) are uniquely determined by the inclusion-exclusion principle \(\sum_{\operatorname{a}\supseteq\operatorname{b}}c_{\operatorname{a}}=1\) for all \(\operatorname{b}\in K\) (corollary 5). For a recent introduction to the CVM [4, 5], we refer to Pelizzola's article [28].
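As an aside, the inclusion-exclusion principle determining the coefficients \(c_{\mathrm{a}}\) is easy to solve by downward recursion over regions. The Python sketch below is a hypothetical illustration of ours (see corollary 5 below for the formal statement), using the triangle graph as hypergraph; the helper name `bethe_kikuchi_coefficients` is not from the paper.

```python
# regions of the triangle graph: three edges and their pairwise intersections (vertices)
K = [frozenset(s) for s in [(0, 1), (1, 2), (0, 2), (0,), (1,), (2,)]]

def bethe_kikuchi_coefficients(K):
    """Solve the inclusion-exclusion principle  sum_{a >= b} c_a = 1  for every b in K."""
    c = {}
    for b in sorted(K, key=len, reverse=True):   # from larger to smaller regions
        c[b] = 1 - sum(c[a] for a in K if a > b)
    return c

c = bethe_kikuchi_coefficients(K)
# edges get coefficient 1, vertices get 1 - 2 = -1 on this example
assert all(sum(c[a] for a in K if a >= b) == 1 for b in K)
```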
Let us first introduce the local max-entropy principle we shall be concerned with. As in the exact global case, the max-entropy principle (problem A) describes an isolated system. It therefore takes place with constraints on the Bethe-Kikuchi mean energy \(\tilde{\mathcal{U}}:\operatorname{\Delta_{0}}\times C_{0}\to\mathbb{R}\), given for all \(p\in\operatorname{\Delta_{0}}\) and all \(H\in C_{0}\) by:
\[\tilde{\mathcal{U}}(p,H)=\sum_{\operatorname{a}\in K}c_{\operatorname{a}} \operatorname{\mathbb{E}}_{p_{\operatorname{a}}}[H_{\operatorname{a}}] \tag{17}\]
**Problem A**.: _Let \(H\in C_{0}\) denote local hamiltonians and choose a mean energy \(\mathcal{U}\in\mathbb{R}\). Find beliefs \(p\in\operatorname{\Delta_{0}}\) critical for the Bethe-Kikuchi entropy \(\tilde{S}:\operatorname{\Delta_{0}}\to\mathbb{R}\) given by:_
\[\tilde{S}(p)=\sum_{\operatorname{a}\in K}c_{\operatorname{a}}\operatorname{S}_{ \operatorname{a}}(p_{\operatorname{a}}), \tag{18}\]
under the consistency constraint \(p\in\Gamma_{0}\) and the mean energy constraint \(\tilde{\mathcal{U}}(p,H)=\mathcal{U}\)._
The Bethe-Kikuchi variational free energy principle, problem B below, instead serves as a sound local substitute for describing a thermostated system at temperature \(T=1/\beta\).
**Problem B**.: _Let \(H\in C_{0}\) denote local hamiltonians and choose an inverse temperature \(\beta>0\). Find beliefs \(p\in\Delta_{0}\) critical for the Bethe-Kikuchi variational free energy \(\tilde{\mathcal{F}}:\Delta_{0}\times C_{0}\to\mathds{R}\) given by:_
\[\tilde{\mathcal{F}}^{\beta}(p,H)=\sum_{\mathrm{a}\in K}c_{\mathrm{a}}\, \mathcal{F}_{\mathrm{a}}^{\beta}(p_{\mathrm{a}},H_{\mathrm{a}}), \tag{19}\]
_under the consistency constraint \(p\in\Gamma_{0}\)._
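Both objective functionals are plain weighted sums of the local quantities of definition 5. A minimal Python sketch follows (our own illustrative code, with local beliefs stored as arrays over binary variables and the coefficients of the path graph used as an example; none of the names below come from the paper):

```python
import numpy as np

beta, n_states = 1.0, 2
K = [frozenset(s) for s in [(0, 1), (1, 2), (1,)]]                      # path graph 0 - 1 - 2
c = {frozenset((0, 1)): 1, frozenset((1, 2)): 1, frozenset((1,)): -1}   # from (36)

rng = np.random.default_rng(0)
H = {a: rng.normal(size=(n_states,) * len(a)) for a in K}               # local hamiltonians H_a
p = {a: rng.dirichlet(np.ones(n_states ** len(a))).reshape((n_states,) * len(a))
     for a in K}                                                        # local beliefs p_a

def S(p_a):                                          # Shannon entropy (11)
    return -np.sum(p_a * np.log(p_a))

def U_tilde(p, H):                                   # Bethe-Kikuchi mean energy (17)
    return sum(c[a] * np.sum(p[a] * H[a]) for a in K)

def S_tilde(p):                                      # Bethe-Kikuchi entropy (18)
    return sum(c[a] * S(p[a]) for a in K)

def F_tilde(p, H):                                   # Bethe-Kikuchi variational free energy (19)
    return U_tilde(p, H) - S_tilde(p) / beta
```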
Problems A and B are both variational principles on \(p\in\Gamma_{0}\) with the consistency constraint in common, but with dual temperature and energy constraints respectively. In contrast, the following free energy principle (problem C) explores a subspace of local hamiltonians \(V\in C_{0}\), satisfying what we shall view as a global energy conservation constraint in the next section. Let us write \(V\sim H\) if and only if \(\sum_{\mathrm{a}}c_{\mathrm{a}}V_{\mathrm{a}}=\sum_{\mathrm{a}}c_{\mathrm{a}}H_{\mathrm{a}}\) as global observables on \(E_{\mathrm{\Omega}}\).
Problem C also describes a system at equilibrium with a thermostat at fixed temperature \(T=1/\beta\).
**Problem C**.: _Let \(H\in C_{0}\) denote local hamiltonians and choose an inverse temperature \(\beta>0\). Find local hamiltonians \(V\in C_{0}\) critical for the Bethe-Kikuchi free energy \(\tilde{F}^{\beta}:C_{0}\to\mathds{R}\) given by:_
\[\tilde{F}^{\beta}(V)=\sum_{\mathrm{a}\in K}c_{\mathrm{a}}\,F_{\mathrm{a}}^{ \beta}(V_{\mathrm{a}}), \tag{20}\]
_under the energy conservation constraint \(V\sim H\)._
In contrast with the _convex_ optimisation problems of theorems 1 and 2, note that the concavity or convexity of information functionals is broken by the Bethe-Kikuchi coefficients \(c_{\mathrm{a}}\in\mathds{Z}\). This explains why multiple solutions to problems A, B and C might coexist, and why they cannot be found by simple convex optimisation algorithms. We will instead introduce continuous-time ordinary differential equations in section V. Their structure is remarkably similar to diffusion or heat equations, although combinatorial transformations may again break stability and uniqueness of stationary states.
On the Ising model, Schlipper showed that the CVM error decays exponentially as \(K\subseteq\mathcal{P}(\mathds{Z}^{d})\) grows coarse with respect to the range of interactions [6]. This result confirms the heuristic argument on the extensivity of entropy, and reflects the fast decay of high-order mutual informations that are omitted in the Bethe-Kikuchi entropy [9, chap. 4].
## IV Local statistical systems
In the following, we let \(K\subseteq\mathcal{P}(\Omega)\) denote a fixed hypergraph with vertices in \(\Omega=\{1,\ldots,N\}\), and moreover assume that \((K,\subseteq,\cap)\) forms a semi-lattice. The following constructions could be carried without the \(\cap\)-closure assumption, but theorem 3 and the correspondence theorems of section VI would become one-way.
This section carries out the construction of what we may call a _combinatorial chain complex_\((C_{\bullet},\delta,\zeta)\) of local observables, recalling the necessary definitions and theorems from [9]. The first ingredient is the codifferential \(\delta\), satisfying \(\delta^{2}=\delta\delta=0\) and of degree -1:
\[C_{0}\xleftarrow{\delta}\,C_{1}\xleftarrow{\delta}\,\,\,\,\ldots\xleftarrow{ \delta}\,C_{n}. \tag{21}\]
The degree-0 homology \([C_{0}]=\mathrm{H}_{0}(C_{\bullet},\delta)\) will yield a bijective parameterization of \(K\)-local observables in \(\mathds{R}^{E_{\mathrm{\Omega}}}\); this is the statement of the Gauss theorem 3. On the other hand, the dual cochain complex
\[C_{0}^{*}\xrightarrow{d}C_{1}^{*}\xrightarrow{d}\,\,\,\ldots\xrightarrow{d} \,\,C_{n}^{*} \tag{22}\]
will allow us to describe consistent beliefs \(p\in\Gamma_{0}\) by the cocycle equation \(dp=0\), living in the dual cohomology \([C_{0}^{*}]=\mathrm{H}^{0}(C_{\bullet}^{*},d)\). The construction of \((C_{\bullet}^{*},d)\) may be traced back to Grothendieck and Verdier [35, 36], yet we believe the interaction of algebraic topology with combinatorics presented here to be quite original.
The _zeta transform_ is here defined as a homogeneous linear automorphism \(\zeta:C_{\bullet}\to C_{\bullet}\), and plays a role very similar to that of a discrete spatial integration, confirming an intuition of Rota. It satisfies remarkable commutation relations with \(\delta\), the Gauss/Greene formulas (44) and (45). Its inverse \(\mu\) is called the _Mobius transform_ and the reciprocal pair \((\zeta,\mu)\) extends the famous Mobius inversion formulas [34] to degrees higher than 1 in the nerve of \(K\). All these operators will come into play when factorizing the GBP algorithm and defining Bethe-Kikuchi diffusions in section V.
Our localization procedure could be summarized as follows. First, we restrict the sheaf \(E\) to a contravariant functor \(E_{K}\) over \(K\); then, we define a simplicial set \(N_{\bullet}E_{K}\) extending \(E_{K}\) to the categorical\({}^{1}\) nerve \(N_{\bullet}K\) by mapping every strictly ordered chain \(a_{0}\supset\cdots\supset a_{p}\) to its _terminal_ configuration space \(E_{a_{0}\ldots a_{p}}:=E_{a_{p}}\). In particular, the set \(N_{1}K\) describes the support of GBP messages, while higher order terms provide a projective resolution of \(K\)-local observables.
### _Algebraic topology_
The functor of _local observables_\(\mathbb{R}^{E}:\mathcal{P}(\Omega)\to\mathbf{Alg}_{\mathrm{c}}\) maps every region \(\mathrm{a}\subseteq\Omega\) to the commutative algebra \(\mathbb{R}^{E_{\mathrm{a}}}\). Its arrows consist of natural inclusions \(\mathbb{R}^{E_{\mathrm{b}}}\subseteq\mathbb{R}^{E_{\mathrm{a}}}\), when identifying each local algebra with a low dimensional subspace of \(\mathbb{R}^{E_{\Omega}}\). We write \(\tilde{I}_{\mathrm{b}}^{\mathrm{a}}:x_{\mathrm{a}}\mapsto h_{\mathrm{b}}(x_{ \mathrm{a}|\mathrm{b}})\) when the extension should be made explicit, and identify \(\tilde{I}_{\mathrm{b}}^{\mathrm{a}}\) with \(h_{\mathrm{b}}\) for all \(\mathrm{a}\supseteq\mathrm{b}\) otherwise.
One may then define a chain complex of local observables \(C_{\bullet}=C_{\bullet}(K,\mathbb{R}^{E})\) indexed by the nerve of \(K\) as follows. Its graded components \(C_{r}\) are defined for \(1\leq r\leq n\), where \(n\) denotes the maximal length of a strictly ordered chain in \(K\), by:
\[C_{r}=\prod_{\mathrm{a}_{0}\supset\cdots\supset\mathrm{a}_{r}}\mathbb{R}^{E_{ \mathrm{a}_{r}}}\,. \tag{23}\]
For every strict chain \(\mathrm{\bar{a}}=\mathrm{a}_{0}\supset\cdots\supset\mathrm{a}_{r}\) in \(N_{r}K\), and every \(0\leq j\leq r\), let us denote by \(\mathrm{\bar{a}}^{(j)}\) the _j-face_ of \(\mathrm{\bar{a}}\), obtained by removing \(\mathrm{a}_{j}\).
**Definition 6**.: _The chain complex of local observables \((C_{\bullet},\delta)\) is defined by (23) and the degree -1 boundary operator \(\delta\), whose action \(\delta:C_{1}\to C_{0}\) is given by_
\[\delta\varphi_{\mathrm{b}}(x_{\mathrm{b}})=\sum_{\mathrm{a}\supset\mathrm{b} }\varphi_{\mathrm{a}\to\mathrm{b}}(x_{\mathrm{b}})-\sum_{\mathrm{b}\supset \mathrm{c}}\varphi_{\mathrm{b}\to\mathrm{c}}(x_{\mathrm{b}|\mathrm{c}}), \tag{24}\]
_while \(\delta:C_{r+1}\to C_{r}\) acts by_
\[\delta\psi_{\mathrm{\bar{b}}}(x_{\mathrm{b}_{r}})=\sum_{j=0}^{r+1}(-1)^{j} \sum_{\mathrm{\bar{c}}^{(j)}=\mathrm{\bar{b}}}\psi_{\mathrm{\bar{c}}}(x_{ \mathrm{b}_{r}|\mathrm{c}_{r+1}}). \tag{25}\]
_Note that \(\mathrm{b}_{r}=\mathrm{c}_{r+1}\) for all \(j<r+1\) in the first \(r\) sums of (25)._
The classical identity \(\mathrm{\bar{a}}^{(j)(j)}=\mathrm{\bar{a}}^{(j+1)(i)}\) for all \(i<j\), together with linearity and functoriality of the inclusions \(\mathbb{R}^{E_{\mathrm{b}}}\subseteq\mathbb{R}^{E_{\mathrm{a}}}\), implies the differential rule \(\delta^{2}=\delta\circ\delta=0\).
One may see in formula (24) an analogy between \(\delta\) and a discrete graph divergence, or the divergence operator of geometry: theorem 3 below is a discrete yet statistical version of the Gauss theorem on a manifold without boundary. It gives a local criterion for the global equality of \(K\)-local hamiltonians and is proven in appendix A. The chain complex \((C_{\bullet},\delta)\) can moreover be proven acyclic (see [9, thm 2.17] and [35, 36]) when \(K\) is \(\cap\)-closed; the exact sequence (21) then describes the linear subspace of \(K\)-local energies, through a _projective resolution_ of the quotient \(C_{0}/\delta C_{1}\).
**Theorem 3** (Gauss).: _Assume \(K\subseteq\mathcal{P}(\Omega)\) is \(\cap\)-closed. Then the following are equivalent for all \(h,h^{\prime}\in C_{0}\):_
1. _the equality_ \(\sum_{\mathrm{a}\in K}h^{\prime}_{\mathrm{a}}=\sum_{\mathrm{a}\in K}h_{ \mathrm{a}}\) _holds in_ \(\mathbb{R}^{E_{\Omega}}\)_,_
2. _there exists_ \(\varphi\in C_{1}\) _such that_ \(h^{\prime}=h+\delta\varphi\)_._
_In other words \(h^{\prime}\) and \(h\) are homologous, written \(h^{\prime}\sim h\) or \(h^{\prime}\in[h]\), if and only if they define the same global hamiltonian._
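The following Python sketch (an illustration of ours, not part of the formal development) implements the degree-1 action (24) of \(\delta\) on a small hypergraph, and checks the easy direction of theorem 3: a boundary \(\delta\varphi\) always sums to the zero global observable.

```python
import numpy as np
from itertools import product

vals = (0, 1)                                          # binary variables
K = [frozenset(s) for s in [(0, 1), (1, 2), (1,)]]     # a small ∩-closed hypergraph
N1 = [(a, b) for a in K for b in K if b < a]           # nerve N_1 K: strict pairs a ⊃ b

def restrict(x, a, b):
    """Restriction x_{a|b} of an assignment x over sorted(a) to the sub-region b."""
    a = sorted(a)
    return tuple(x[a.index(i)] for i in sorted(b))

def delta(phi):
    """Codifferential (24): inbound fluxes minus outbound fluxes on every region."""
    out = {b: {x: 0.0 for x in product(vals, repeat=len(b))} for b in K}
    for (a, b), phi_ab in phi.items():
        for x in out[b]:
            out[b][x] += phi_ab[x]                      # inbound term  phi_{a->b}(x_b)
        for x in out[a]:
            out[a][x] -= phi_ab[restrict(x, a, b)]      # outbound term phi_{a->b}(x_{a|b})
    return out

rng = np.random.default_rng(0)
phi = {(a, b): {x: rng.normal() for x in product(vals, repeat=len(b))} for (a, b) in N1}
v = delta(phi)

omega = sorted(set().union(*K))                         # global variables {0, 1, 2}
for x in product(vals, repeat=len(omega)):              # sum_a (delta phi)_a = 0 on E_Omega
    assert np.isclose(sum(v[a][restrict(x, omega, a)] for a in K), 0.0)
```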
The functor of _local densities_\(\mathbb{R}^{E_{\mathrm{a}}}:\mathcal{P}(\Omega)^{op}\to\mathbf{Vect}\) is then defined by duality, mapping every \(\mathrm{a}\subseteq\Omega\) to the vector space of linear forms on \(\mathbb{R}^{E_{\mathrm{a}}}\). Its arrows consist of partial integrations \(\pi_{\mathrm{a}}^{\mathrm{a}\to\mathrm{b}}:\mathbb{R}^{E_{\mathrm{a}}*}\to \mathbb{R}^{E_{\mathrm{b}}*}\), also called marginal projections. This functor generates a dual cochain complex \((C_{\bullet}^{*},d)\) which shall serve to describe pseudo-marginals \((p_{\mathrm{a}})_{\mathrm{a}\in K}\in\Gamma_{0}\subset C_{0}^{*}\), used as substitute for global probabilities \(p_{\Omega}\in\Delta_{\Omega}\).
**Definition 7**.: _The cochain complex of local densities \((C_{\bullet}^{*},d)\) is defined by \(C_{r}^{*}=L(C_{r})\) and the degree \(+1\) differential \(d\), whose action \(d:C_{0}^{*}\to C_{1}^{*}\) is given by_
\[dp_{\mathrm{a}\to\mathrm{b}}(x_{\mathrm{b}})=p_{\mathrm{b}}(x_{\mathrm{b}})- \sum_{x_{\mathrm{a}|\mathrm{b}}=x_{\mathrm{b}}}p_{\mathrm{a}}(x_{\mathrm{a}}). \tag{26}\]
_while \(d:C_{r}^{*}\to C_{r+1}^{*}\) acts by_
\[\begin{split} dq_{\mathrm{\bar{a}}}(x_{\mathrm{a}_{r+1}})& =\sum_{j=0}^{r}(-1)^{j}\,q_{\mathrm{\bar{a}}^{(j)}}(x_{\mathrm{a}_{r+1}})\\ &\quad+(-1)^{r+1}\sum_{x_{\mathrm{a}_{r}|\mathrm{a}_{r+1}}=x_{ \mathrm{a}_{r+1}}}q_{\mathrm{\bar{a}}^{(r+1)}}(x_{\mathrm{a}_{r}})\end{split} \tag{27}\]
_Densities satisfying \(dp=0\) are called consistent. The convex polytope of consistent beliefs is \(\Gamma_{0}=\Delta_{0}\cap\mathrm{Ker}(d)\subseteq C_{0}^{*}\)._
The purpose of BP algorithms and their generalizations is to converge towards consistent beliefs \(p\in\Gamma_{0}\). The non-linear _Gibbs correspondence_ relating potentials \(h\in C_{0}\) to beliefs \(p\in C_{0}^{*}\) will thus be essential to the dynamic of GBP:
\[p_{\mathrm{a}}(x_{\mathrm{a}})=\frac{1}{Z_{\mathrm{a}}}\,\mathrm{e}^{-\beta H_{ \mathrm{a}}(x_{\mathrm{a}})}\quad\text{with}\quad H_{\mathrm{a}}(x_{\mathrm{a }})=\sum_{\mathrm{b}\subseteq\mathrm{a}}h_{\mathrm{b}}(x_{\mathrm{a}|\mathrm{b}}). \tag{28}\]
The mapping \(h\mapsto H\) above is an invertible Dirichlet convolution [34] on \(C_{0}\), analogous to a discrete integration over cones \(K^{\mathrm{a}}\subseteq K\). Although seemingly simple, this mapping \(\zeta\) and its inverse \(\mu\) surely deserve proper attention.
### _Combinatorics_
The combinatorial automorphisms \(\zeta\) and \(\mu\) we describe below generalize well-known _Mobius inversion formulas_ on \(C_{0}\) and \(C_{1}\). The convolution structure in degrees \(r\leq 1\) originates from works of Dirichlet in number theory, and was thoroughly described by Rota [34] on general partial orders; let us also mention the interesting extension [52] to more general categories.
Heuristically, \(\zeta\) and \(\mu\) might be viewed as combinatorial mappings from intensive to extensive local observables and reciprocally, which systematically solve what are known as _inclusion-exclusion principles_.
**Definition 8**.: _The zeta transform \(\zeta:C_{\bullet}\to C_{\bullet}\) is the linear homogeneous morphism acting on \(C_{0}\) by letting \(\zeta:h\mapsto H\),_
\[H_{\mathrm{a}}(x_{\mathrm{a}})=\sum_{\mathrm{b}\subseteq\mathrm{a}}h_{\mathrm{b }}(x_{\mathrm{a}|\mathrm{b}}), \tag{29}\]
_and acting on \(C_{r}\) by letting \(\zeta:\varphi\mapsto\Phi\),_
\[\Phi_{\widehat{\mathrm{a}}}(x_{\mathrm{a}_{r}})=\sum_{\mathrm{b}_{r}\subseteq \mathrm{a}_{r}}\cdots\sum_{\begin{subarray}{c}\mathrm{b}_{0}\subseteq\mathrm{a }_{0}\\ \mathrm{b}_{0}\not\subseteq\mathrm{a}_{1}\end{subarray}}\varphi_{\widehat{ \mathrm{b}}^{\gamma}}(x_{\mathrm{a}_{r}|\mathrm{b}_{r}}) \tag{30}\]
_Note that \(\mathrm{b}_{0}\supset\cdots\supset\mathrm{b}_{r}\) is also implicitly assumed in (30)._
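The degree-0 action (29) is the easiest to implement. The short Python sketch below is our own hedged illustration (dictionaries indexed by configurations, names not taken from the paper), accumulating \(H_{\mathrm{a}}(x_{\mathrm{a}})=\sum_{\mathrm{b}\subseteq\mathrm{a}}h_{\mathrm{b}}(x_{\mathrm{a}|\mathrm{b}})\) on a small hypergraph.

```python
import numpy as np
from itertools import product

vals = (0, 1)
K = [frozenset(s) for s in [(0, 1), (1, 2), (1,)]]     # a small ∩-closed hypergraph

def restrict(x, a, b):
    a = sorted(a)
    return tuple(x[a.index(i)] for i in sorted(b))

def zeta(h):
    """Degree-0 zeta transform (29): H_a(x_a) = sum of h_b(x_{a|b}) over b ⊆ a in K."""
    return {a: {x: sum(h[b][restrict(x, a, b)] for b in K if b <= a)
                for x in product(vals, repeat=len(a))} for a in K}

rng = np.random.default_rng(0)
h = {a: {x: rng.normal() for x in product(vals, repeat=len(a))} for a in K}
H = zeta(h)
# on the edge {0,1}, H adds the pair potential h_{01} and the vertex potential h_{1}
```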
Definition 8 extends to \(C_{\bullet}\) the action of \(\zeta\) on \(C_{0}\). The action \(\zeta:C_{1}\to C_{1}\) defined by (30) should not be confused with the convolution product of the incidence algebra \(\tilde{C}_{1}(K,\mathbb{Z})\), obtained by including degenerate chains (identities) a \(\supseteq\) b in \(\tilde{N}_{1}K\simeq N_{0}K\sqcup N_{1}K\), and restricting to integer coefficients. See for instance Rota [34] for details on Dirichlet convolution, and [9, chap. 3] for the module structure considered here.
Remember we assume \(K\subseteq\mathcal{P}(\Omega)\) to be \(\cap\)-closed\({}^{2}\) for theorem 3 to hold. This is also necessary to obtain the explicit Mobius inversion formula (33) below.
Footnote 2: This coincides with the _region graph property_ of Yedidia, Freeman and Weiss [7], when describing the hypergraph \(K\) in their language of bipartite region graphs.
**Theorem 4**.: _The Mobius transform \(\mu=\zeta^{-1}\) is given in all degrees by a finite sum:_
\[\mu=\sum_{k=0}^{n}(-1)^{k}(\zeta-1)^{k} \tag{31}\]
_The action of \(\mu:C_{0}\to C_{0}\) may be written \(\mu:H\mapsto h\),_
\[h_{\mathrm{a}}(x_{\mathrm{a}})=\sum_{\mathrm{b}\subseteq\mathrm{a}}\mu_{ \mathrm{a}\to\mathrm{b}}\,H_{\mathrm{b}}(x_{\mathrm{a}|\mathrm{b}}), \tag{32}\]
_and the action \(\mu:C_{r}\to C_{r}\) may be written \(\mu:\Phi\to\varphi\),_
\[\varphi_{\widehat{\mathrm{a}}}(x_{\mathrm{a}_{r}})=\sum_{\mathrm{b}_{r}\subseteq \mathrm{a}_{r}}\mu_{\mathrm{a}_{r}\to\mathrm{b}_{r}}\cdots\sum_{\begin{subarray} {c}\mathrm{b}_{0}\subseteq\mathrm{a}_{0}\\ \mathrm{b}_{0}\not\subseteq\mathrm{b}_{1}\end{subarray}}\mu_{\mathrm{a}_{0} \to\mathrm{b}_{0}}\Phi_{\widehat{\mathrm{b}}^{\gamma}}(x_{\mathrm{a}_{r}| \mathrm{b}_{r}^{\gamma}}) \tag{33}\]
_where \(\widehat{\mathrm{b}}^{\gamma}:=\mathrm{b}_{0}\supset(\mathrm{b}_{0}\cap \mathrm{b}_{1})\supset\cdots\supset(\mathrm{b}_{0}\cap\cdots\cap\mathrm{b}_{r})\) in (33)._
In (31), \(n\) is the maximal length of a strict chain in \(K\). In degree \(0\), this yields the usual recurrence formulas for \(\mu_{\mathrm{a}\to\mathrm{b}}\) in terms of all the strict factorisations of \(\mathrm{a}\to\mathrm{b}\). In practice, the matrix \(\mu\) can be computed efficiently in a few steps by using any sparse tensor library. When \(K\) describes a graph, one for instance has \((\zeta-1)^{3}=0\). The nilpotency of \(\zeta-1\) furthermore ensures that \(\mu=\zeta^{-1}\) is given by (31) even without the \(\cap\)-closure assumption. The reader is referred to [9, thm 3.11] for the detailed proof of (33), briefly summarized below.
Sketch of proof.: The definitions of \(\zeta\) and \(\mu\) both exhibit a form of recursivity in the degree \(r\). Letting \(\mathrm{i}_{\mathrm{a}}:C_{r}\to C_{r-1}\) denote evaluation of the first region on \(\mathrm{a}\in K\), one for instance has on \(C_{r}\) for \(r\geq 1\):
\[\zeta(\varphi)_{\mathrm{a}_{0}\dots\mathrm{a}_{r}}=\sum_{\mathrm{b}_{0}\in K^{ \mathrm{a}_{0}}_{\mathrm{a}_{1}}}\zeta(\mathrm{i}_{\mathrm{b}_{0}}\varphi)_{ \mathrm{a}_{1}\dots\mathrm{a}_{r}}. \tag{34}\]
The above extends to \(C_{0}\) by agreeing to let \(\zeta\) act as identity on \(C_{-1}:=\mathbb{R}^{E_{\mathrm{a}}}\supseteq\mathbb{R}^{E_{\mathrm{b}}}\), and letting \(\mathrm{a}_{1}=\varnothing\) i.e. \(K^{\mathrm{a}_{0}}_{\mathrm{a}_{1}}=K^{\mathrm{a}_{0}}\).
On the other hand, the Mobius transform is recovered from operators \(v_{\mathrm{a}}:C_{r}\to C_{r-1}\) as \(\mu(\Phi)_{\mathrm{a}_{0}\dots\mathrm{a}_{r}}=v_{\mathrm{a}_{r}}\dots v_{ \mathrm{a}_{0}}\Phi\), where:
\[v_{\mathrm{a}_{0}}(\Phi)_{\mathrm{a}_{1}\dots\mathrm{a}_{r}}=\sum_{\mathrm{b} _{0}\in K^{\mathrm{a}_{0}}_{\mathrm{a}_{1}}}\mu_{\mathrm{a}_{0}\to\mathrm{b}_{ 0}}\,\Phi_{\mathrm{b}_{0}\cap(\mathrm{a}_{1}\dots\mathrm{a}_{r})} \tag{35}\]
Figure 2: Schematic pictures of the Gauss and Greene formulas. Left: in degree \(0\), the red circles represent regions in the cone \(K^{\mathrm{a}}\) being summed over when evaluating \(\zeta(\varphi)_{\mathrm{a}}\), while blue arrows represent coboundary terms of \(dK^{\mathrm{a}}\) being summed over when evaluating \(\zeta(\delta\varphi)_{\mathrm{a}}\), for \(\varphi\in C_{0}\) and \(\varphi\in C_{1}\). Right: in degree \(1\), red arrows represent flux terms in the \(1\)-cone \(K^{\mathrm{ab}}=K^{\mathrm{a}}_{\mathrm{b}}\to K^{\mathrm{b}}\) summed by \(\zeta(\varphi)_{\mathrm{a}\to\mathrm{b}}\), while blue triangles represent the coboundary \(2\)-chains of \(dK^{\mathrm{ab}}\) summed by \(\zeta(\delta\psi)_{\mathrm{a}\to\mathrm{b}}\), for \(\varphi\in C_{1}\) and \(\psi\in C_{2}\).
The technical part of the proof then consists in proving \(v_{\mathsf{a}_{0}}\circ\zeta=\zeta\circ i_{\mathsf{a}_{0}}\), using classical Mobius inversion formulas [9, lemma 3.12]. One concludes that \(\mu(\zeta\varphi)_{\mathsf{a}_{0}\ldots\mathsf{a}_{r}}\) is computed as \(v_{\mathsf{a}_{r}}\ldots v_{\mathsf{a}_{0}}(\zeta\varphi)=\zeta(i_{\mathsf{a}_ {r}}\ldots i_{\mathsf{a}_{0}}\varphi)=\varphi_{\mathsf{a}_{0}\ldots\mathsf{a}_ {r}}\).
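Formula (31) is also convenient numerically: since \(\zeta-1\) is nilpotent, the Mobius transform is a short finite sum. The sketch below is our own illustrative Python code (reusing the dictionary representation of the previous sketch); it computes \(\mu\) in degree 0 and verifies \(\mu\circ\zeta=\mathrm{id}\).

```python
import numpy as np
from itertools import product

vals = (0, 1)
K = [frozenset(s) for s in [(0, 1), (1, 2), (1,)]]

def restrict(x, a, b):
    a = sorted(a)
    return tuple(x[a.index(i)] for i in sorted(b))

def zeta(h):                       # degree-0 zeta transform, eq. (29)
    return {a: {x: sum(h[b][restrict(x, a, b)] for b in K if b <= a)
                for x in product(vals, repeat=len(a))} for a in K}

def sub(h, g):                     # pointwise difference in C_0
    return {a: {x: h[a][x] - g[a][x] for x in h[a]} for a in K}

def moebius(H, n=3):               # eq. (31): mu = sum_k (-1)^k (zeta - 1)^k
    out = {a: {x: 0.0 for x in H[a]} for a in K}
    term, sign = H, 1
    for _ in range(n + 1):
        out = {a: {x: out[a][x] + sign * term[a][x] for x in H[a]} for a in K}
        term, sign = sub(zeta(term), term), -sign
    return out

rng = np.random.default_rng(0)
h = {a: {x: rng.normal() for x in product(vals, repeat=len(a))} for a in K}
assert all(np.isclose(moebius(zeta(h))[a][x], h[a][x]) for a in K for x in h[a])
```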
**Corollary 5**.: Bethe-Kikuchi coefficients _are equivalently defined by the inclusion-exclusion principle (36) and the explicit Mobius inversion formula (37):_
\[\sum_{\mathsf{a}\supseteq\mathsf{b}}c_{\mathsf{a}}=1 \qquad\text{for all $\mathsf{b}\in K$}, \tag{36}\] \[c_{\mathsf{b}}=\sum_{\mathsf{a}\supseteq\mathsf{b}}\mu_{\mathsf{ a}\to\mathsf{b}}\quad\text{for all $\mathsf{b}\in K$}. \tag{37}\]
Proof.: This classical result is just \(\zeta^{*}(c)=1\Leftrightarrow c=\mu^{*}(1)\), denoting by \(\zeta^{*}\) and \(\mu^{*}\) the dual automorphisms obtained by reversing the partial order on \(K\). See [34].
For every \(H\in C_{0}\) given by \(H=\zeta h\Leftrightarrow h=\mu H\), a consequence of (37) is that for all \(x_{\Omega}\in E_{\Omega}\), the total energy is exactly computed by the Bethe-Kikuchi approximation:
\[\sum_{\mathsf{a}\in K}h_{\mathsf{a}}(x_{\Omega|\mathsf{a}})=\sum_{\mathsf{b }\in K}c_{\mathsf{b}}\,H_{\mathsf{b}}(x_{\Omega|\mathsf{b}}). \tag{38}\]
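The exactness (38) of the Bethe-Kikuchi energy is easy to confirm numerically. The sketch below is our own check (combining the coefficient and zeta helpers from the previous sketches), verifying the identity on every global configuration of a small hypergraph.

```python
import numpy as np
from itertools import product

vals = (0, 1)
K = [frozenset(s) for s in [(0, 1), (1, 2), (1,)]]

def restrict(x, a, b):
    a = sorted(a)
    return tuple(x[a.index(i)] for i in sorted(b))

def zeta(h):                                          # eq. (29)
    return {a: {x: sum(h[b][restrict(x, a, b)] for b in K if b <= a)
                for x in product(vals, repeat=len(a))} for a in K}

c = {}
for b in sorted(K, key=len, reverse=True):            # inclusion-exclusion (36)
    c[b] = 1 - sum(c[a] for a in K if a > b)

rng = np.random.default_rng(0)
h = {a: {x: rng.normal() for x in product(vals, repeat=len(a))} for a in K}
H = zeta(h)

omega = sorted(set().union(*K))                       # global variables {0, 1, 2}
for x in product(vals, repeat=len(omega)):            # check eq. (38) on every x in E_Omega
    lhs = sum(h[a][restrict(x, omega, a)] for a in K)
    rhs = sum(c[b] * H[b][restrict(x, omega, b)] for b in K)
    assert np.isclose(lhs, rhs)
```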
Including the maximal region \(\Omega\) in \(\tilde{K}=\{\Omega\}\cup K\) yields the total energy of (38) as \(H_{\Omega}=\zeta(h)_{\Omega}\), while \(h_{\Omega}=0\) ensures the exactness of the Bethe-Kikuchi energy by (37). In contrast, writing \(S=\zeta s\in\mathbb{R}^{K}\) for the Mobius inversion of local entropies, the total entropy \(\sum c_{\mathsf{b}}S_{\mathsf{b}}\) computed by problems A and B neglects a global mutual information summand \(s_{\Omega}=\sum_{\mathsf{a}}\mu_{\Omega\to\mathsf{a}}S_{\mathsf{a}}\), which does not cancel on loopy hypergraphs\({}^{3}\). This relationship between Bethe-Kikuchi approximations and a truncated Mobius inversion formula was first recognized by Morita in [5].
Footnote 3: Generalizing a classical result on trees, we proved that \(s_{\Omega}\) also vanishes on acyclic or _retractable_ hypergraphs [9, chap. 6], although a deeper topological understanding of this unusual notion of acyclicity in degrees \(n\geq 2\) is called for.
Let us now describe the remarkable commutation relations satisfied by \(\zeta\) and \(\delta\). Reminiscent of Greene formulas, they confirm the intuition of Rota who saw in Mobius inversion formulas a discrete analogy with the fundamental theorem of calculus [34]. They also strengthen the resemblance of GBP with Poincare's _balayage_ algorithm [53], used to find harmonic forms on Riemannian manifolds, by solving local harmonic problems on subdomains and iteratively updating boundary conditions.
The Gauss formula (39) below has been particularly useful to factorize the GBP algorithm through \(\zeta\circ\delta\) in [8]. Its simple proof will help understand the more general Greene formula (44) below.
**Proposition 9** (Gauss formula).: _For all \(\varphi\in C_{1}\) we have:_
\[\sum_{c\subseteq\mathsf{b}}\delta\varphi_{\mathsf{c}}\ =\sum_{c\subseteq \mathsf{b}}\sum_{\begin{subarray}{c}\mathsf{a}\supset c\\ \mathsf{a}\not\subseteq\mathsf{b}\end{subarray}}\varphi_{\mathsf{a}\to c} \tag{39}\]
_When \(\Omega\in K\), both sides of (39) involve the action of \(\zeta\):_
\[\zeta(\delta\varphi)_{\mathsf{b}}=\zeta(\varphi)_{\Omega\to\mathsf{b}} \tag{40}\]
Proof.: Note that inclusions \(\mathbb{R}^{E_{\mathsf{c}}}\subseteq\mathbb{R}^{E_{\mathsf{b}}}\) are implicit in (39) and below. By definition of \(\zeta:C_{0}\to C_{0}\) and \(\delta:C_{1}\to C_{0}\),
\[\zeta(\delta\varphi)_{\mathsf{b}}=\sum_{c\subseteq\mathsf{b}}\bigg{(}\ \sum_{\mathsf{a}\supset c}\varphi_{\mathsf{a}\to c}-\sum_{d\subset c}\varphi_{ c\to d}\ \bigg{)}. \tag{41}\]
Inbound flux terms \(\varphi_{\mathsf{a}\to c}\) such that a \(\subseteq\mathsf{b}\) compensate all outbound flux terms \(\varphi_{\mathsf{c}\to d}\), which always satisfy \(c\subseteq\mathsf{b}\). Therefore only inbound flux terms \(\varphi_{\mathsf{a}\to c}\) such that a \(\not\subseteq\mathsf{b}\) and \(c\subseteq\mathsf{b}\) remain.
For every \(K_{0},\ldots,K_{r}\subseteq K\), let us write \(K_{0}\to\cdots\to K_{r}\) for the subset \(N_{r}K\cap(K_{0}\times\cdots\times K_{r})\) of strictly ordered chains. We then define the following subsets of \(N_{\bullet}K\) to strengthen the analogy with geometry. They are depicted in figure 2.
**Definition 10**.: _For all \(\mathsf{a},\mathsf{b}\) in \(K\) let us define:_
1. \(K^{\mathsf{b}}=\{c\in K\,|\,c\subseteq\mathsf{b}\}\subseteq N_{0}K\) _the_ cone below \(\mathsf{b}\)_,_
2. \(K^{\mathsf{a}}_{\mathsf{b}}=K^{\mathsf{a}}\smallsetminus K^{\mathsf{b}}\) _the_ _intercone from a to \(\mathsf{b}\)_,_
3. \(dK^{\mathsf{b}}=\bigcup_{\mathsf{a}\in K}(K^{\mathsf{a}}_{\mathsf{b}}\to K^{ \mathsf{b}})\) _the_ coboundary _of_ \(K^{\mathsf{b}}\)__
_Then \(K^{\mathsf{b}},K^{\mathsf{a}}_{\mathsf{b}}\subseteq N_{0}K\) and \(dK^{\mathsf{b}}\subseteq N_{1}K\). Given \(\vec{\mathsf{a}}=\mathsf{a}_{0}\ldots\mathsf{a}_{r}\) in \(N_{r}K\) and \(\vec{\mathsf{b}}=\mathsf{b}_{1}\ldots\mathsf{b}_{r}\) in \(N_{r-1}K\), we then recursively define the following subsets of \(N_{r}K\) for \(r\geq 1\):_
1. \(K^{\mathsf{a}_{0}\ldots\mathsf{a}_{r}}=K^{\mathsf{a}_{0}}_{\mathsf{a}_{1}}\to K ^{\mathsf{a}_{1}\ldots\mathsf{a}_{r}}\) _the_ _\(r\)-hypercone below \(\vec{\mathsf{a}}\)_,_
2. \(dK^{\mathsf{b}_{1}\ldots\mathsf{b}_{r}}=\bigcup_{\mathsf{b}_{0}\in K}K^{ \mathsf{b}_{0}\mathsf{b}_{1}\ldots\mathsf{b}_{r}}\) _the_ coboundary _of_ \(K^{\mathsf{b}_{1}\ldots\mathsf{b}_{r}}\)__
Following Rota, we want to think of \(\zeta\) as a combinatorial form of spatial integration. For every \(r\)-field \(\varphi\in C_{r}\) and integration domain \(\Sigma\subseteq N_{r}K\), let us introduce the following notation:
\[\int_{\Sigma}\varphi=\sum_{\vec{\mathsf{a}}\in\Sigma}j_{\Sigma\leftarrow\vec{ \mathsf{a}}}(\varphi_{\vec{\mathsf{a}}}) \tag{42}\]
where \(j_{\Sigma\leftarrow\vec{\mathsf{a}}}\) embeds \(\mathbb{R}^{E_{\mathsf{a}_{r}}}\) into the linear colimit \(\bigcup_{\vec{\mathsf{b}}\in\Sigma}\mathbb{R}^{E_{\mathsf{b}_{r}}}\) of the local observables functor over \(\Sigma\), here a linear subspace of the algebra \(\mathbb{R}^{E_{\Omega}}\).
It follows that \(\int_{K^{\mathsf{a}_{0}\ldots\mathsf{a}_{r}}}\) defines a map \(C_{r}\to\mathbb{R}^{E_{\mathsf{a}_{r}}}\), which coincides with the evaluation of \(\zeta:C_{r}\to C_{r}\) on \(\mathsf{a}_{0}\ldots\mathsf{a}_{r}\),
\[\int_{K^{\mathsf{a}_{0}\ldots\mathsf{a}_{r}}}\varphi=\zeta(\varphi)_{\mathsf{a}_{ 0}\ldots\mathsf{a}_{r}}. \tag{43}\]
The Gauss formula (39) may then be generalized to all the degrees of \(C_{\bullet}\) by the pleasant form below, which is only a rewriting of [9, thm 3.14].
**Theorem 6** (Greene formula).: _For all \(\varphi\in C_{r}\) we have:_
\[\int_{K^{\mathsf{b}_{0}\ldots\mathsf{b}_{r}}}\delta\varphi\ =\int_{dK^{\mathsf{b}_{0}\ldots\mathsf{b}_{r}}}\varphi \tag{44}\]
Theorem 6 is proved in appendix A. When \(\Omega\) is in \(K\), remark that the coboundary \(dK^{\text{\rm b}}\) of \(K^{\text{\rm b}}\) coincides with \(K^{\Omega\text{\rm b}}\) by definition 10. More generally, \(dK^{a_{0}\ldots a_{r}}\) then coincides with \(K^{\Omega a_{0}\ldots a_{r}}\) and the Greene formula (44) takes the very succinct form \(\zeta\circ\delta=i_{\Omega}\circ\zeta\), where \(i_{\Omega}\) denotes evaluation of the first region on \(\Omega\).
Let us finally express the conjugate of \(\delta\) by the graded automorphism \(\zeta\) in terms of Bethe-Kikuchi numbers. The following proposition will yield a concise and efficient expression for the Bethe-Kikuchi diffusion algorithm in section V.
**Theorem 7**.: _Let \(\tilde{\delta}=\zeta\circ\delta\circ\mu\) and \(\Phi\in C_{r}\) such that \(\Phi=\zeta\phi\). Then for all \(a_{1}\ldots a_{r}\) in \(N_{r-1}K\), one has:_
\[\tilde{\delta}\Phi_{a_{1}\ldots a_{r}}=\int_{dK^{a_{1}\ldots a_{r}}}\phi=\tilde {\Phi}_{\Omega a_{1}\ldots a_{r}} \tag{45}\]
_where \(\tilde{\Phi}_{\Omega a_{1}\ldots a_{r}}\) denotes a Bethe-Kikuchi approximation of the total inbound flux to \(K^{a_{1}\ldots a_{r}}\), given by_
\[\tilde{\Phi}_{\Omega a_{1}\ldots a_{r}}=\sum_{a_{0}\not\subseteq a_{1}}c_{a_{ 0}}\Phi_{a_{0}(a_{0}\cap a_{1})\ldots(a_{0}\cap a_{r})}. \tag{46}\]
Remark that (46) reduces to \(\tilde{\Phi}_{\Omega a_{1}\ldots a_{r}}=\Phi_{\Omega a_{1}\ldots a_{r}}\) when \(\Omega\in K\), as in this case \(c_{\Omega}=1\) and \(c_{a}=0\) for all \(a\neq\Omega\). The proof of theorem 7 is carried in appendix A.
## V Belief Diffusions
In this section, we describe dynamical equations that solve Bethe-Kikuchi variational principles. Their common structure will shed light on the correspondence theorems A, B and C proven in section VI. Their informal statement is that solutions to free energy principles (problems B and C) can both be found by conservative or _isothermal_ transport equations on \(C_{0}\) which we described in [10]. In contrast, solving the max-entropy principle (problem A) will require to let temperature vary until equilibrium, so as to satisfy the mean energy constraint. This calls for a new kind of _adiabatic_ diffusion equations on \(C_{0}\), satisfying energy conservation up to scalings only.
To make the most of the available linear structures on \((C_{\bullet},\delta,\zeta)\), evolution is first described either at the level of potentials \(v\in C_{0}\) or at the level of local hamiltonians \(V=\zeta v\in C_{0}\). We conclude this section by translating the dynamic on beliefs \(q=\rho(V)\in\Delta_{0}\), for a clearer comparison with GBP. Bethe-Kikuchi diffusion will differ from GBP not only by an arbitrary choice of time step \(\lambda>0\), but also by a degree-1 Mobius inversion on messages.
### Isothermal diffusion
The purpose of isothermal diffusion is to solve the local free energy principles B and C by enforcing two different types of constraints simultaneously on \(v\in C_{0}\):
* (i) _energy conservation_, asking that \(\sum_{a\in K}v_{a}\) is fixed to a given global hamiltonian \(H_{\Omega}\in\mathbb{R}^{E_{\Omega}}\),
* (ii) _belief consistency_, asking that the local Gibbs states \(p=\rho^{\beta}(\zeta v)\) agree and satisfy \(dp=0\), i.e. \(p\in\Gamma_{0}\).
If a potential \(v\in C_{0}\) satisfies (i), theorem 3 implies that there always exists a _heat flux_\(\varphi\in C_{1}\) such that \(v=h+\delta\varphi\); in other words \(v\in[h]\) is homologous to \(h\). This naturally led us to generalize GBP update rules by continuous-time transport equations on \(C_{0}\)[8, 9, 10]:
\[\frac{dv}{dt}=\delta\Phi(v). \tag{47}\]
**Definition 11**.: _Given a flux functional \(\Phi:C_{0}\to C_{1}\) and an inverse temperature \(\beta\in\mathbb{R}\), we call isothermal diffusion the vector field \(X^{\beta}_{\Phi}:C_{0}\to C_{0}\) defined by:_
\[X^{\beta}_{\Phi}(v)=\frac{1}{\beta}\,\delta\Phi(\beta v) \tag{48}\]
The analytic submanifold \(\mathcal{M}^{\beta}\subseteq C_{0}\) defined below should be stationary under (47) for diffusion to solve Bethe-Kikuchi optimisation problems, as the belief consistency constraint (ii) imposes restrictions on the flux functionals \(\Phi:C_{0}\to C_{1}\) suited for diffusion. The submanifold \(\mathcal{N}^{\beta}\subseteq\mathcal{M}^{\beta}\) describes stronger constraints, by assuming Gibbs densities normalized to a common mass.
**Definition 12**.: _Denote by \(\operatorname{e}^{-V}\in C_{0}^{*}\) the unnormalized Gibbs densities of \(V\in C_{0}\). For every inverse temperature \(\beta>0\), we call consistent manifold the subspace \(\mathcal{N}^{\beta}\subseteq C_{0}\) defined by_
\[\mathcal{N}^{\beta}=\{v\in C_{0}\,|\,\operatorname{e}^{-\beta\zeta v}\in \operatorname{Ker}(d)\} \tag{49}\]
_and call projectively consistent manifold the larger space:_
\[\mathcal{M}^{\beta}=\{v\in C_{0}\,|\,\rho^{\beta}(\zeta v)\in\Gamma_{0}\} \tag{50}\]
_We write \(\mathcal{N}=\mathcal{N}^{1}\) and \(\mathcal{M}=\mathcal{M}^{1}\)._
Note that \(\mathcal{M}^{\beta}\) only consists of a thickening of \(\mathcal{N}^{\beta}\) by the action of additive constants, which span a subcomplex \(R_{\bullet}\subseteq C_{\bullet}\). Also remark that \(\mathcal{N}^{\beta}=\beta^{-1}\cdot\mathcal{N}^{1}\) is a scaling of \(\mathcal{N}\) for all \(\beta>0\), and \(\mathcal{M}^{\beta}=\beta^{-1}\cdot\mathcal{M}^{1}\) as well. It is hence sufficient to study isothermal diffusions at temperature \(1\), see appendix B for more details on the consistent manifolds.
**Definition 13**.: _We say that a flux functional \(\Phi:C_{0}\to C_{1}\) is_
* (a) consistent _at_ \(\beta>0\) _if:_ \[v\in\mathcal{N}^{\beta}\Rightarrow\Phi(v)=0\] (51)
* (b) faithful _at_ \(\beta>0\) _if moreover:_ \[\delta\Phi(v)=0\Leftrightarrow v\in\mathcal{N}^{\beta}.\] (52)
_We say that \(\Phi\) is projectively consistent (resp. faithful) if (a) (resp. (b)) holds when \(\mathcal{M}^{\beta}\) replaces \(\mathcal{N}^{\beta}\)._
Let us now construct flux functionals \(\Phi:C_{0}\to C_{1}\) admissible for diffusion. Although consistency may be easily enforced via factorization through the following operator, as the next proposition shows, proving faithfulness remains a more subtle matter.
**Definition 14**.: _(Free energy gradient) Let \(\mathcal{D}:C_{0}\to C_{1}\) denote the smooth functional defined by:_
\[\mathcal{D}V_{\mathrm{a}\to\mathrm{b}}(x_{\mathrm{b}})=V_{\mathrm{b}}(x_{ \mathrm{b}})+\ln\sum_{x_{\mathrm{a}|\mathrm{b}}=x_{\mathrm{b}}}\mathrm{e}^{-V_{ \mathrm{a}}(x_{\mathrm{a}})} \tag{53}\]
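Concretely, \(\mathcal{D}\) is one log-sum-exp per pair \(\mathrm{a}\supset\mathrm{b}\). The sketch below is a hypothetical Python illustration of (53) (array axes ordered by variable index; the helper name is ours) for a single edge.

```python
import numpy as np
from scipy.special import logsumexp

a, b = frozenset((0, 1)), frozenset((1,))                 # a pair a ⊃ b of regions
rng = np.random.default_rng(0)
V = {r: rng.normal(size=(2,) * len(r)) for r in (a, b)}   # local hamiltonians on binary spins

def D(V, a, b):
    """Free energy gradient (53): DV_{a->b}(x_b) = V_b(x_b) + log sum_{x_{a|b}=x_b} exp(-V_a)."""
    summed_axes = tuple(i for i, var in enumerate(sorted(a)) if var not in b)
    return V[b] + logsumexp(-V[a], axis=summed_axes)

flux = D(V, a, b)        # one value per configuration x_b
# D vanishes exactly when e^{-V_b} is the marginal of e^{-V_a}, cf. proposition 15 below
```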
**Proposition 15**.: _For all \(V\in C_{0}\), let \(q=\mathrm{e}^{-V}\in C_{0}^{*}\). Then_
\[dq=0\quad\Leftrightarrow\quad\mathcal{D}V=0. \tag{54}\]
_It follows that the flux functional \(\Phi_{f}=f\circ\mathcal{D}\circ\zeta\) is consistent for any smooth \(f:C_{1}\to C_{1}\)._
Proof.: For all a \(\supset\) b in \(K\), it is clear that \(\mathrm{e}^{-V_{\mathrm{b}}}=\pi_{*}^{\mathrm{a}\to\mathrm{b}}(\mathrm{e}^{- V_{\mathrm{a}}})\) if and only if \(V_{\mathrm{b}}=-\ln\pi_{*}^{\mathrm{a}\to\mathrm{b}}(\mathrm{e}^{-V_{\mathrm{a }}})\).
In section V.C, we recover the discrete dynamic of GBP by using the flux functional \(\Phi_{GBP}=-\mathcal{D}\circ\zeta\). The smooth dynamic on potentials integrates the vector field \(X_{GBP}=\delta\Phi_{GBP}\) on \(C_{0}\), computed by the diagram:
\[X_{GBP}=\delta\circ(-\mathcal{D})\circ\zeta:\quad C_{0}\xrightarrow{\;\zeta\;}C_{0}\xrightarrow{\;-\mathcal{D}\;}C_{1}\xrightarrow{\;\delta\;}C_{0}. \tag{55}\]
The flux functional \(\Phi_{GBP}\) is faithful (at \(\beta=1\)). See proposition 30 in appendix for a proof, involving duality and monotonicity arguments which we already gave in [8] and [9]. Although seemingly optimal when \(K\) is a graph, we argued in [9, chap. 5] that the heat flux \(\Phi_{GBP}\) introduces redundancies on higher-order hypergraphs, which explain the explosion of normalization constants.
The _Bethe-Kikuchi diffusion flux_\(\Phi_{\mathrm{BK}}=-\mu\circ\mathcal{D}\circ\zeta\) adds a degree-1 Mobius inversion (33) of GBP messages. The isothermal vector field \(X_{BK}=\delta\Phi_{BK}\) is then computed by:
\[X_{BK}=\delta\circ(-\mu\circ\mathcal{D})\circ\zeta:\quad C_{0}\xrightarrow{\;\zeta\;}C_{0}\xrightarrow{\;-\mathcal{D}\;}C_{1}\xrightarrow{\;\mu\;}C_{1}\xrightarrow{\;\delta\;}C_{0}. \tag{56}\]
The conjugate codifferential \(\check{\delta}=\zeta\circ\delta\circ\mu\) thus appears in (56), and one may substitute the result of theorem 7 to arrive at a very concise and efficient expression of the conjugate vector field \(\zeta\circ X_{BK}\circ\mu\), governing the evolution of local hamiltonians.
From the perspective of local hamiltonians \(V=\zeta v\), first remark that in both cases the evolution \(\dot{v}=\delta\phi\) under diffusion reads \(\dot{V}=\zeta(\delta\phi)\), conveniently computed by the Gauss formula (39) on a cone \(K^{\mathrm{b}}\):
\[\frac{dV_{\mathrm{b}}}{dt}=\zeta(\delta\phi)_{\mathrm{b}}=\int_{dK^{\mathrm{b }}}\phi \tag{57}\]
We argue that the GBP flux \(\Phi=-\mathcal{D}(\zeta v)=-\mathcal{D}V\) belongs to the "extensive" side, and should not be integrated as is on the coboundary of \(K^{\mathrm{b}}\). In fact, if one were able to compute the global free energy gradient \(\mathcal{D}V_{\Omega\to\mathrm{b}}=-\Phi_{\Omega\to\mathrm{b}}\) for all \(\mathrm{b}\in K\), the effective hamiltonians \(V_{\mathrm{b}}^{\prime}=V_{\mathrm{b}}+\Phi_{\Omega\to\mathrm{b}}\) would yield the sought for Gibbs state marginals exactly.
Using the Bethe-Kikuchi flux \(\phi=\mu\Phi\) thus allows one to sum only "intensive" flux terms entering \(K^{\mathrm{b}}\). Their integral over \(dK^{\mathrm{b}}\) yields the Bethe-Kikuchi approximation \(\check{\Phi}_{\Omega\to\mathrm{b}}\) of the global free energy gradient term \(\Phi_{\Omega\to\mathrm{b}}\) by theorem 7:
\[\frac{dV_{\mathrm{b}}}{dt}=\int_{dK^{\mathrm{b}}}\phi=\check{\Phi}_{\Omega\to \mathrm{b}}=\sum_{\mathrm{a}\not\subseteq\mathrm{b}}c_{\mathrm{a}}\Phi_{\mathrm{a} \to\mathrm{a}\cap\mathrm{b}}. \tag{58}\]
Figure 3: Norm of the free energy gradient \(||\mathcal{D}V||\) over 15 time units for Bethe-Kikuchi diffusions (blue) and GBP diffusions (red), for different values of the time step parameter \(\lambda\). For \(\lambda=1/4\) (solid line), GBP diffusion sometimes converges in spite of oscillations; for \(\lambda=1/2\) (dashed line), GBP diffusion almost never converges; and for \(\lambda=1\) (dotted line) GBP diffusion explodes even faster. On the other hand, \(\lambda\) has very little effect on the fast convergence of Bethe-Kikuchi diffusion. The hypergraph \(K\) is the 2-horn – join of three 2-simplices \(\{(012),(013),(023)\}\) – and coefficients of the initial potential \(h\in C_{0}\) are sampled from normal gaussians.
Substituting \(-\mathcal{D}V\) for \(\Phi\) yields, after small combinatorial rearrangements, the explicit formula [9, thm 5.33]:
\[\frac{dV_{\mathrm{b}}(x_{\mathrm{b}})}{dt}=\sum_{\mathrm{a}\in K}c_{\mathrm{a}} \bigg{[}-\ln\sum_{\begin{subarray}{c}x_{\mathrm{a}|\mathrm{a}\cap\mathrm{b}}\\ =x_{\mathrm{b}|\mathrm{a}\cap\mathrm{b}}\end{subarray}}\mathrm{e}^{-V_{\mathrm{a}}(x_{ \mathrm{a}})}\bigg{]}-V_{\mathrm{b}}(x_{\mathrm{b}}) \tag{59}\]
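A direct transcription of (59) into code gives a very small solver. The Python sketch below is our own illustration on the path graph (not the authors' implementation), performing explicit Euler steps of the Bethe-Kikuchi diffusion on local hamiltonians.

```python
import numpy as np
from itertools import product

vals, lam = (0, 1), 0.5
K = [frozenset(s) for s in [(0, 1), (1, 2), (1,)]]                      # path graph 0 - 1 - 2
c = {frozenset((0, 1)): 1, frozenset((1, 2)): 1, frozenset((1,)): -1}   # from (36)

def restrict(x, a, b):
    a = sorted(a)
    return tuple(x[a.index(i)] for i in sorted(b))

def local_free_energy(V_a, a, b, x_b):
    """-log sum of exp(-V_a(x_a)) over the x_a agreeing with x_b on a ∩ b."""
    cap = a & b
    return -np.log(sum(np.exp(-V_a[x]) for x in V_a
                       if restrict(x, a, cap) == restrict(x_b, b, cap)))

def euler_step(V, lam=lam):
    """One explicit Euler step of the Bethe-Kikuchi diffusion (59)."""
    dV = {b: {x: sum(c[a] * local_free_energy(V[a], a, b, x) for a in K) - V[b][x]
              for x in V[b]} for b in K}
    return {b: {x: V[b][x] + lam * dV[b][x] for x in V[b]} for b in K}

rng = np.random.default_rng(0)
V = {a: {x: rng.normal() for x in product(vals, repeat=len(a))} for a in K}
for _ in range(50):
    V = euler_step(V)
# at a fixed point, the Gibbs states e^{-V_a}/Z_a have consistent marginals on {1}
```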
In [10], we empirically showed that the Bethe-Kikuchi diffusion flux improves convergence and relaxes the need to normalize beliefs at each step. Figure 3 also shows that discrete integrators of \(X_{BK}\) quickly converge for large time steps \(\lambda\), whereas integrators of \(X_{GBP}\) require a value of \(\lambda<1/4\) on the 2-horn \(K\) (the simplest 2-complex for which GBP is not exact). The flux \(\Phi_{BK}\) may be proven faithful at least in a neighbourhood of \(\mathcal{N}\); see proposition 31 in appendix.
### _Adiabatic diffusion_
In order to solve the localised max-entropy principle A, one needs to let temperature vary so as to enforce the mean energy constraint. Theorem A will describe solutions as dimensionless potentials \(\bar{v}\in C_{0}\), satisfying both the consistency constraint (ii) at \(\beta=1\) and the following conservation constraint replacing (i):
* (i\({}^{\prime}\)) _projective energy conservation_, asking that \(\sum_{\mathrm{a}\in K}\bar{v}_{\mathrm{a}}\) is fixed to a given line \(\mathbb{R}H_{\Omega}\subseteq\mathbb{R}^{E_{\Omega}}\).
Fix a reference potential \(h\in C_{0}\) and let \(H_{\Omega}=\sum_{\mathrm{a}\in K}h_{\mathrm{a}}\). By theorem 3, constraint (i\({}^{\prime}\)) is equivalent to the existence of \(\beta\in\mathbb{R}\) and \(\varphi\in C_{1}\) such that \(\bar{v}=\beta h+\delta\varphi\). We therefore equivalently rewrite (i\({}^{\prime}\)) as \(\bar{v}\in\mathbb{R}[h]=\mathbb{R}h+\delta C_{1}\). Adding a radial source term to (47), we may enforce both (i\({}^{\prime}\)) and the mean energy constraint \(\mathcal{U}=\mathbb{E}[H_{\Omega}]\) as follows.
Given an energy value \(\mathcal{U}\in\mathbb{R}\) and a flux functional \(\Phi:C_{0}\to C_{1}\), we call _adiabatic diffusion_ an ordinary differential equation on \(C_{0}\) of the form
\[\begin{split}\frac{d\bar{v}}{dt}=&\ \delta\Phi(\bar{v})+\big{(}\mathcal{U}-\langle p,h\rangle\big{)}\cdot\bar{v} \\ &\ \ \ \ \ \text{where}\ p=\rho(\zeta\bar{v}).\end{split} \tag{60}\]
It might seem paradoxical that the _adiabatic_ diffusion includes what looks like an energy source term, when isothermal diffusion does not. The dynamical variable \(\bar{v}\in C_{0}\) here describes a _dimensionless potential_\(\bar{v}=\beta v\) given \(v\in C_{0}\), measuring energies divided by an unknown temperature scale. The source term \(\nabla^{\mathcal{U}}(\bar{v})\in\mathbb{R}\bar{v}\) below therefore describes a variation in temperature rather than a variation in energy.
**Definition 16**.: _Given a flux functional \(\Phi:C_{0}\to C_{1}\), a potential \(h\in C_{0}\) and a mean energy \(\mathcal{U}\in\mathbb{R}\), we call adiabatic diffusion the vector field \(\gamma^{\mathcal{U}}_{\Phi}:C_{0}\to C_{0}\) defined by:_
\[\gamma^{\mathcal{U}}_{\Phi}(\bar{v})=\delta\Phi(\bar{v})+\nabla^{\mathcal{U}} (\bar{v}) \tag{61}\]
\[\nabla^{\mathcal{U}}(\bar{v})=\big{[}\mathcal{U}-\langle\rho(\zeta\bar{v}),h \rangle\big{]}\cdot\bar{v} \tag{62}\]
_It follows that \(\gamma^{\mathcal{U}}_{\Phi}=X^{1}_{\Phi}+\nabla^{\mathcal{U}}\) relates isothermal and adiabatic diffusions._
The dependency in \(h\in C_{0}\) is implicit in (61) and (62). It cannot be absorbed in the definition of \(\nabla^{\mathcal{U}}\) as \(\bar{v}\in C_{0}\) is unaware of the reference energy scale, and the pair \((\mathcal{U},h)\in\mathbb{R}\times C_{0}\) is in fact necessary to define the mean energy constraint. One may write \(\nabla^{\mathcal{U},h}\) and \(\gamma^{\mathcal{U},h}_{\Phi}\) when the dependency in \(h\) ought to be made explicit, although \(h\) can be assumed fixed for present purposes.
As in the isothermal case, it is necessary to restrict the flux functional \(\Phi\) to enforce the belief consistency constraint (ii) at equilibrium. The same flux functionals may be used for both isothermal and adiabatic diffusions.
**Proposition 17**.: _Assume \(\Phi:C_{0}\to C_{1}\) is faithful. Then for all \(\mathcal{U}\in\mathbb{R}\), the adiabatic vector field \(\gamma^{\mathcal{U}}_{\Phi}\) is stationary on \(\bar{v}\in C_{0}\) if and only if \(p=\rho(\zeta\bar{v})\) is consistent and \(\langle p,h\rangle=\mathcal{U}\)._
Proof.: Note that the r.h.s. of (60) naturally decomposes into the direct sum \(\delta C_{1}\oplus\mathbb{R}\bar{v}(t)\) for all \(t\in\mathbb{R}\), whenever the initial potential \(h=\bar{v}(0)\) is not a boundary of \(C_{\bullet}\) (in that case, total and mean energies vanish). It follows that \(\bar{v}(t)\) is always homologous to a multiple of \(h\), so that \(\sum_{\mathrm{a}\in K}\bar{v}_{\mathrm{a}}(t)\) is equal to a multiple of \(H_{\Omega}=\sum_{\mathrm{a}\in K}h_{\mathrm{a}}\) by theorem 3, i.e. \(\bar{v}(t)\) satisfies the projective energy conservation constraint (i\({}^{\prime}\)). When \(\Phi\) is projectively faithful, the direct sum decomposition furthermore implies that adiabatic diffusion is stationary on \(\bar{v}\in C_{0}\) if and only if the local Gibbs states \(p=\rho(\zeta\bar{v})\) satisfy both the mean energy constraint \(\langle p,h\rangle=\mathcal{U}\) and the belief consistency constraint (ii), \(dp=0\).
The inverse temperature \(\beta=T^{-1}\) of the system will be defined at equilibrium as Lagrange multiplier of the mean energy constraint, by criticality of entropy (see e.g. [1, 2] and section VI below). As we shall see, the Bethe-Kikuchi entropy may have multiple critical points, and therefore multiple values of temperature may coexist at a given value of internal energy.
### _Dynamic on beliefs_
Let us now describe the dynamic on beliefs \(q\in\Delta_{0}\) induced by isothermal and adiabatic diffusions on \(C_{0}\). They both regularize and generalize the GBP algorithm of [7]. In the following, we denote by \(R_{\bullet}\subseteq C_{\bullet}\) the subcomplex spanned by local constants.
Starting from a potential \(h\in C_{0}\), beliefs are recovered through a Gibbs state map \(\rho^{\beta}\circ\zeta\) for some \(\beta\in\mathbb{R}_{+}\). The affine subspace \([h]=h+\delta C_{1}\) satisfying (i) is therefore mapped to a smooth submanifold of \(\Delta_{0}\),
\[\mathcal{B}_{\beta h}:=\rho^{\beta}\big{(}\zeta[h]\big{)}\subset\Delta_{0}. \tag{63}\]
On the other hand, the set of consistent beliefs \(\Gamma_{0}\subseteq\Delta_{0}\) satisfying (ii) is a convex polytope of \(C_{0}^{*}\) (whose preimage
under \(\rho^{\beta}\circ\zeta\) is \(\mathcal{M}^{\beta}\)). Conservation (i) and consistency (ii) constraints thus describe the intersection \(\mathcal{B}_{\beta h}\cap\Gamma_{0}\) when viewed in \(\Delta_{0}\). The non-linearity has shifted from (ii) to (i) in this picture.
In order to recover well-defined dynamical systems on \(\Delta_{0}\), trajectories of potentials \(v\in C_{0}\) under diffusion should be assumed to consistently project on the quotient \(C_{0}/R_{0}\), i.e. to not depend on the addition of local energy constants \(\lambda\in R_{0}\simeq\mathbb{R}^{K}\). Classes of potentials \(v+R_{0}\) are indeed in one-to-one correspondence with classes of local hamiltonians \(\zeta(v+R_{0})=V+R_{0}\), themselves in one-to-one correspondence with the local Gibbs states \(q=\rho^{\beta}(V)=\frac{1}{Z}\operatorname{e}^{-\beta V}\). Both GBP and Bethe-Kikuchi diffusions naturally define a dynamic on \(C_{0}/R_{0}\simeq\Delta_{0}\).
Let us first relate the GBP equations (6) and (7) to a discrete integrator of the GBP diffusion flux \(X_{GBP}\), defined by (55). The isothermal vector field \(X_{GBP}=\delta\Phi_{GBP}\) can be approximately integrated by the simple Euler scheme \(v^{(t+\lambda)}=(1+\lambda X_{GBP})\cdot v^{(t)}\), which yields for \(q=\rho(\zeta v)\)
\[q_{\text{b}}^{(t+\lambda)}(x_{\text{b}})\propto q_{\text{b}}^{(t)}(x_{\text{b} })\cdot\prod_{\begin{subarray}{c}\text{c}\subseteq\text{b}\\ \text{a}\supset\text{c},\ \text{a}\not\subseteq\text{b}\end{subarray}}\big{[}m_{\text{a}\to\text{c}}^{(t )}(x_{\text{b}|\text{c}})\big{]}^{\lambda}, \tag{64}\]
\[m_{\text{a}\to\text{c}}^{(t)}(x_{\text{c}})=\frac{\sum_{x_{\text{a}|\text{c}}=x _{\text{c}}}q_{\text{a}}^{(t)}(x_{\text{a}})}{q_{\text{c}}^{(t)}(x_{\text{c}})} \tag{65}\]
Exponential versions of the Gauss formula (39) should be recognized in (6) as in (64). Remark that the message term \(m^{(t)}=e^{-\Phi_{GBP}(v)}=\operatorname{e}^{-\mathcal{DV}}\) of (65) is equal to the geometric increment of messages \(M^{(t+1)}/M^{(t)}\) computed by the GBP update rule (7), so that the two algorithms coincide for \(\lambda=1\). The parameter \(\lambda\) otherwise appears as exponent of messages in (64), leaving (65) unchanged.
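For completeness, here is a minimal Python sketch of the damped update (64)-(65) on the path graph; it is our own illustration, and the belief and message containers are hypothetical rather than the paper's code.

```python
import numpy as np
from itertools import product

vals, lam = (0, 1), 0.5
K = [frozenset(s) for s in [(0, 1), (1, 2), (1,)]]       # path graph 0 - 1 - 2
N1 = [(a, c) for a in K for c in K if c < a]             # message supports a -> c

def restrict(x, a, b):
    a = sorted(a)
    return tuple(x[a.index(i)] for i in sorted(b))

def messages(q):
    """Eq. (65): m_{a->c}(x_c) = (marginal of q_a onto c) / q_c(x_c)."""
    m = {}
    for (a, c) in N1:
        marg = {y: sum(q[a][x] for x in q[a] if restrict(x, a, c) == y) for y in q[c]}
        m[(a, c)] = {y: marg[y] / q[c][y] for y in q[c]}
    return m

def gbp_step(q):
    """Eq. (64): multiply q_b by messages m_{a->c}^lam with c ⊆ b and a ⊄ b, then renormalize."""
    m, new = messages(q), {}
    for b in K:
        new[b] = {x: q[b][x] * np.prod([m[(a, c)][restrict(x, b, c)] ** lam
                                        for (a, c) in N1 if c <= b and not a <= b])
                  for x in q[b]}
        Z = sum(new[b].values())
        new[b] = {x: val / Z for x, val in new[b].items()}
    return new

rng = np.random.default_rng(0)
q = {a: dict(zip(product(vals, repeat=len(a)),
                 rng.dirichlet(np.ones(2 ** len(a))))) for a in K}
for _ in range(30):
    q = gbp_step(q)
# after convergence the marginals of q_{01} and q_{12} on the separator {1} agree with q_1
```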
Because one is usually only interested in the long-term behaviour of diffusion and its convergence to stationary states, the parameter \(\lambda\) does not have to be infinitesimal. Figure 4 shows the effect of varying \(\lambda\) on a 50x50 Ising lattice. Choosing \(\lambda\simeq 1/2\) instead of 1 already greatly improves convergence [10], making belief diffusions suitable for a wider class of initial conditions than plain GBP. For low temperatures (high \(\beta\)), numerical instabilities occur that are not escaped by values of \(\lambda\simeq 0.1\), as \(q\) reaches distances of order \(10^{-5}\) from the boundary of \(\Delta_{0}\). Perhaps higher-order integration schemes would be beneficial in this regime.
On hypergraphs, the Bethe-Kikuchi diffusion flux \(\Phi_{BK}=\mu\circ\Phi_{GBP}\) also behaves better than \(\Phi_{GBP}\) (the evolution of beliefs is otherwise identical when \(K\) is a graph). Integrating (59) with a time step \(\lambda\geq 0\) yields the following dynamic on \(\Delta_{0}\):
\[q_{\text{b}}^{(t+\lambda)}\propto q_{\text{b}}^{(t)}\cdot\prod_{\text{a}\in K }\left[m_{\text{a}\to\text{a}\cap\text{b}}^{(t)}\right]^{\lambda c_{\text{a}}} \tag{66}\]
\[m_{\text{a}\to\text{c}}^{(t)}=\frac{\pi_{*}^{\text{a}\to\text{c}}(q_{\text{a}}^{(t)})}{q_{\text{c}}^{(t)}} \tag{67}\]
Note that Mobius inversion of messages allows for convergent unnormalised algorithms. Replacing the proportionality sign \(\propto\) of (66) by an equality assignment, the algorithm will converge to unnormalised densities \(q_{\text{a}}\) whose masses \(\pi_{*}^{\text{a}\to\varnothing}(q_{\text{a}})\) are all equal to \(q_{\varnothing}\), the empty region taking care of harmonizing normalization constants.
Adiabatic diffusions, on the other hand, only enforce the projective energy constraint (i\({}^{\prime}\)), so as to fix an internal energy value \(\mathcal{U}\in\mathbb{R}\). Energy scales act as exponents in the multiplicative Lie group \(\Delta_{0}\), and the simple Euler scheme of (60) with the Bethe-Kikuchi diffusion flux \(\Phi_{BK}\) yields:
\[\begin{split} q_{\text{b}}^{(t+\lambda)}=&\frac{1}{ Z_{\text{b}}}\left[q_{\text{b}}^{(t)}\right]^{1+\lambda\psi}\cdot\prod_{\text{a}\in K }\left[m_{\text{a}\to\text{a}\cap\text{b}}^{(t)}\right]^{\lambda c_{\text {a}}}\\ &\text{where }\psi=\mathcal{U}-\langle q^{(t)},h\rangle.\end{split} \tag{68}\]
The messages remain given by (67); they still compute the exponential of the free energy gradient \(\operatorname{e}^{-\mathcal{D}\bar{V}}\) for \(\bar{V}=\zeta\bar{v}\).
From a technical perspective, working with beliefs (bounded between 0 and 1) may help avoid the numerical instabilities one may encounter with logsumexp functions. However, optimizing the product of messages computed by the exponential Gauss formula (64) is not straightforward. Working with potentials and hamiltonians instead makes it possible to use sparse matrix-vector products to effectively parallelize diffusion on GPU, as implemented in the topos library.
Figure 4: Effect of the time step parameter \(\lambda\) on convergence of diffusions on a 50x50 Ising lattice. For \(\lambda=1\) (blue), we recover the GBP algorithm. Results are very similar for \(\lambda=1/2\) (red) and \(\lambda=1/10\) (green). Coefficients of the 100 initial batched hamiltonians \(H\in C_{0}\) are sampled from \(\mathcal{N}(0,1)\). Note that \(1-\tanh(\beta)\) estimates the typical distance to the boundary of \(\Delta_{0}\) and \(1-\tanh(6)\simeq 10^{-5}\) brings numerical instabilities.
## VI Message-Passing Equilibria
Let us now show that problems A, B and C are all equivalent to solving both _local consistency constraints_ and _energy conservation constraints_. In other words, given a reference potential \(h\in C_{0}\), all three problems are equivalent to finding \(v\in[h]\cap\mathcal{M}^{\beta}\).
It will be more informative to study \([h]\cap\mathcal{N}^{\beta}\), resolving the degeneracy of additive constants acting on \(\mathcal{M}^{\beta}\). It may happen that this intersection is everywhere transverse, as is the case on acyclic or _retractable_ hypergraphs, where true marginals may be computed in a finite number of steps [9, thm. 6.26]. In general, theorem 8 shows that the singular subspace \(\mathcal{S}_{1}\subseteq\mathcal{N}\) where both constraint surfaces are tangent can be described by polynomial equations on the convex polytope \(\Gamma_{0}\subseteq\Delta_{0}\). This important step towards a more systematic study of GBP equilibria suggests the existence of phase transitions, both between different regimes of diffusion and between different free energy landscapes in Bethe-Kikuchi approximations.
### _Correspondence theorems_
The difference between problems A, B and C mostly consists in which sets of variables are viewed as constraints and which are viewed as degrees of freedom. The max-entropy principle A treats \(\beta\) as a free variable and \(\mathcal{U}\) as a constraint, while free energy principles B and C treat \(\beta\) as a constraint and \(\mathcal{U}\) as a free variable. Solutions to problems A, B and C still share a common geometric description, involving intersections of the form \([h]\cap\mathcal{M}^{\beta}\) for a given potential \(h\in C_{0}\).
Another distinction emerges from the local structure of Bethe-Kikuchi approximations. It will be reflected in the duality between beliefs \(p\in\Delta_{0}\) and potentials \(v\in C_{0}\), and between the differential \(d:C_{0}^{*}\to C_{1}^{*}\) and the codifferential \(\delta:C_{1}\to C_{0}\). Bethe-Kikuchi principles thus become reminiscent of harmonic problems, yet the nonlinear mapping \(\rho\circ\zeta:C_{0}\to\Delta_{0}\) will allow for singular intersections of the two constraint surfaces, and a varying number of stationary states (see figure 5).
In the statements of theorems A, B and C below, we assume a _projectively faithful_ flux functional \(\Phi:C_{0}\to C_{1}\) (see definition 13) is chosen. The faithful flux \(\Phi_{GBP}\) will fix \(\mathcal{N}^{\beta}\) instead of \(\mathcal{M}^{\beta}\), but enforcing belief normalization at each step would turn it into a projectively faithful flux. We spare the reader these technical details, covered in more detail in [9, chap. 5].
Proofs are found in appendix D. Recall that notations \([h]=h+\delta C_{1}\) and \([\mathbb{R}h]=\mathbb{R}h+\delta C_{1}\) stand for (lines of) homology classes.
**Theorem A**.: _Let \(\mathcal{U}\in\mathbb{R}\) and \(h\in C_{0}\). Denoting by \(A^{\mathcal{U}}\subseteq C_{0}^{*}\) the affine hyperplane where \(\langle p,h\rangle=\mathcal{U}\), problem \(A\) is equivalent to:_
\[\begin{split}&\frac{\partial\tilde{S}(p)}{\partial p}\Big{|}_{ \mathbb{T}_{p}(\Gamma_{0}\cap A^{\mathcal{U}})}=0\\ \Leftrightarrow& p=\rho^{1}(\zeta\bar{v})\in A^{ \mathcal{U}}\text{ with }\ \bar{v}\in[\mathbb{R}h]\cap\mathcal{M}^{1}\end{split} \tag{69}\]
_In particular, critical potentials \(\bar{v}\in C_{0}\) coincide with fixed points of the adiabatic diffusion \(\gamma^{\mathcal{U}}_{\Phi}\) restricted to \([\mathbb{R}h]\)._
Note that any \(v\in[h]\cap\mathcal{M}^{\beta}\) yields a reduced potential \(\bar{v}=\beta v\) lying in \([\beta h]\cap\mathcal{M}^{1}\) by lemma 28, and therefore a solution to problem A of energy \(\mathcal{U}=\langle\rho(\zeta\bar{v}),h\rangle\).
The form of theorem A is preferred because it involves a simpler intersection problem in \(C_{0}\): between the linear subspace \([\mathbb{R}h]=\mathbb{R}h+\delta C_{1}\), the consistent manifold \(\mathcal{M}^{1}\) that does not depend on \(\beta\), and the non-linear mean energy constraint \(\langle\rho^{1}(\zeta\bar{v}),h\rangle=\mathcal{U}\). The possible multiplicity of solutions for a given energy constraint \(\mathcal{U}\) may occur for different values of the Lagrange multiplier \(\beta\).
**Theorem B**.: _Let \(\beta>0,h\in C_{0}\). Problem B is equivalent to:_
\[\begin{split}&\frac{\partial\tilde{\mathcal{F}}^{\beta}(p,H)}{ \partial p}\Big{|}_{\mathbb{T}_{p}\Gamma_{0}}=0\\ \Leftrightarrow& p=\rho^{\beta}(\zeta v)\text{ with }v\in[h]\cap \mathcal{M}^{\beta}\end{split} \tag{70}\]
_In particular, critical potentials \(v\in C_{0}\) coincide with fixed points of the isothermal diffusion \(X^{\beta}_{\Phi}\) restricted to \([h]\)._
Theorem B rigorously states the correspondence of Yedidia, Freeman and Weiss [7] between stationary states of GBP and critical points of the CVM, which generalized the well-known correspondence on graphs [11]. The statement above makes the notion of stationary state more precise and our rigorous proof avoids any division by Bethe-Kikuchi coefficients thanks to lemma 19.
Figure 5: Cuspidal singularity of the Bethe-Kikuchi variational free energy (red dot). The stationary manifold \(\mathcal{N}\) is tangent to the space of gauge transformations \(\delta C_{1}\) on the singular subspace \(\mathcal{S}_{1}\) (red line). At the cusp, \(\mathcal{S}_{1}\) is tangent to both \(\mathcal{N}\) and \(\delta C_{1}\).
Before formulating the statement of theorem C, let us denote by \(c:C_{0}\to C_{0}\) the multiplication by Bethe-Kikuchi coefficients. One may show that \(c_{\mathsf{b}}=0\) whenever \(\mathsf{b}\) is not an intersection of maximal regions \(\mathsf{a}_{1},\ldots,\mathsf{a}_{n}\in K\). However, assuming that \(K\) is the \(\cap\)-closure of a set of maximal regions does not always imply the invertibility of \(c\). When \(c\) is not invertible, problem C will exhibit an affine degeneracy along \(\operatorname{Ker}(c)\).
**Definition 18**.: _Let us call \(\mathcal{M}_{+}^{\beta}=\{v+b\mid v\in\mathcal{M}^{\beta},b\in\operatorname{ Ker}(c\zeta)\}\subseteq C_{0}\) the weakly consistent manifold. In particular, \(\mathcal{M}_{+}^{\beta}=\mathcal{M}^{\beta}\) when \(c\) is invertible._
A linear retraction \(r^{\beta}:\mathcal{M}_{+}^{\beta}\to\mathcal{M}^{\beta}\) will be defined by equation (126) in the proof of theorem C below. It maps solutions of problem C onto those of B. From the perspective of beliefs, this retraction simply consists of filling the blanks with the partial integration functor.
**Theorem C**.: _Let \(\beta>0,h\in C_{0}\), problem C is equivalent to:_
\[\begin{split}&\frac{\partial\hat{F}^{\beta}(V)}{\partial V}\Big{|}_{\mathbb{T}_{V}\,\zeta[h]}=0\\ \Leftrightarrow& V=\zeta w\text{ with }w\in[h]\cap \mathcal{M}_{+}^{\beta}\end{split} \tag{71}\]
_The weakly consistent potentials \(w\in[h]\cap\mathcal{M}_{+}^{\beta}\) can be univocally mapped onto \([h]\cap\mathcal{M}^{\beta}\) by a retraction \(r^{\beta}\). They coincide with fixed points of the isothermal diffusion \(X_{\Phi}^{\beta}\) restricted to \([h]\) when \(c\) is invertible, and with its preimage under \(r^{\beta}\) otherwise._
To prove theorems A, B and C we will need the following lemma, which is rather subtle in spite of its apparent simplicity (see [9, Section 4.3.3] for detailed formulas). The lemma states that multiplication by \(c\) is equivalent to Möbius inversion up to a boundary term in \(C_{0}\).
**Lemma 19**.: _There exists a linear flux map \(\Psi:C_{0}\to C_{1}\) such that \(c-\mu=\delta\Psi\)._
Proof of lemma 19.: Theorem 3 faithfully characterizes the homology classes \([h]\) of \(C_{0}/\delta C_{1}\) by their global hamiltonian \(H_{\Omega}=\sum_{\mathsf{a}}h_{\mathsf{a}}\) when \(K\) is \(\cap\)-closed (see [9, cor. 2.14]). Therefore exactness of the Bethe-Kikuchi energy (38) implies that \(h=\mu H\) and \(cH\) are always homologous, so that the image of \(c-\mu\) is contained in \(\delta C_{1}\). One may therefore construct an arbitrary \(\Psi:C_{0}\to C_{1}\) such that \(c-\mu=\delta\Psi\) by linearity. The flux values taken by \(\Psi\) are only constrained in \(C_{1}/\delta C_{2}\), as \(\operatorname{Ker}(\delta)\) and \(\operatorname{Im}(\delta)\) coincide on positive degrees, i.e., \(C_{\bullet}\) is acyclic [9, thm. 2.17].
The proofs of theorems A and B (detailed in appendix) may then be summarized as follows. Under consistency constraints on a critical belief \(p\in\Gamma_{0}\subseteq C_{0}^{*}\), the adjunction \(d=\delta^{*}\) first implies that the variations cancel on \(\operatorname{Ker}(d)=\operatorname{Im}(\delta)^{\perp}\). As linear forms on \(C_{0}^{*}\), \(\partial_{p}\hat{S}\) and \(\partial_{p}\hat{\mathcal{F}}\) can therefore lie in \(\delta C_{1}\) through Lagrange multipliers \(\delta\psi\), and we write \(\delta\psi+\lambda\in\delta C_{1}+\Lambda\) for the general expression of differentials at a critical point. The space \(\Lambda\) only depends on the other constraints at hand: \(\Lambda=R_{0}\) for problem A, and \(\Lambda=\mathbb{R}h+R_{0}\) for problem B (additive constants in \(R_{0}\) are dual to normalization constraints, and \(h\in C_{0}\) is dual to the internal energy constraint \(\langle p,h\rangle=\mathcal{U}\)).
The form of Bethe-Kikuchi functionals naturally leads to expressing \(\partial_{p}\hat{S}\) and \(\partial_{p}\hat{\mathcal{F}}\) as \(cV\) for some \(V\in C_{0}\), through standard computations of partial derivatives. Writing \(V=\zeta v\), the remaining difficulty consists in showing that \(cV=\delta\psi+\lambda\) is equivalent to \(v=\delta\varphi+\lambda\) for some \(\varphi\in C_{1}\). This difficult step is greatly eased by lemma 19, which states that \([v]=[\mu V]=[cV]\) coincide as homology classes.
In contrast, theorem C works under energy constraints on the potential \(w\in[h]\subseteq C_{0}\), yielding local hamiltonians \(V=\zeta w\), but the consistency of \(p=\rho(V)\) is not enforced as a constraint. The linear form \(\partial_{V}F\) will be written \(cp\), which lies in \(\operatorname{Ker}(d\,\zeta^{*})=\operatorname{Im}(\zeta\delta)^{\perp}\) when critical, because of the energy conservation constraint on \(V\in\zeta(h+\delta C_{1})\). This only implies \(q=\zeta^{*}(cp)\in\operatorname{Ker}(d)\) in general, yet we will conclude that \(p\) and \(q\) must agree on all the regions \(\mathsf{b}\) where \(c_{\mathsf{b}}\neq 0\), so that the affine degeneracy of solutions (absent in A and B) is completely supported by the non-maximal regions that cancel \(c\). The consistent beliefs \(q\in\Gamma_{0}\) then solve B.
### Singularities
Let us say that \(v\in\mathcal{N}\) is _singular_ if \(T_{v}\mathcal{N}\cap\delta C_{1}\neq 0\), and call _singular degree_ of \(v\) the number
\[\operatorname{\mathbf{cork}}_{v}=\dim(T_{v}\mathcal{N}\cap\delta C_{1}). \tag{72}\]
When \(p=\rho(\zeta v)\), according to (63) \(\operatorname{\mathbf{cork}}_{v}\) coincides with
\[\operatorname{\mathbf{cork}}_{p}=\dim(\operatorname{Ker}(d)\cap T_{p}\mathcal{B} _{v}). \tag{73}\]
Both numbers measure the singularity of the canonical projection \(\mathcal{N}\to C_{0}/\delta C_{1}\) onto homology classes, a submersion if and only if \(\operatorname{\mathbf{cork}}_{v}=0\) everywhere on \(\mathcal{N}\).
**Definition 20** (Singular sets).: _For all \(k\in\mathbb{N}\), let_
1. \(\mathcal{S}_{k}:=\{\operatorname{\mathbf{cork}}_{v}=k\}\subseteq\mathcal{N}\)_,_
2. \(\Sigma_{k}:=\{\operatorname{\mathbf{cork}}_{p}=k\}\subseteq\Gamma_{0}\)_,_
_denote the singular stratifications of \(\mathcal{N}\) and \(\Gamma_{0}\) respectively._
Figure 5 depicts a situation where \(\mathcal{S}_{1}\) is non-empty. We refer the reader to Thom [54] and Boardman [55] for more details on singularity theory.
We show that the singular sets \(\Sigma^{k}\) are defined by polynomial equations in \(\Gamma_{0}\). Singularities will therefore be located on the completion \(\tilde{\Sigma}^{1}\) of a smooth hypersurface \(\Sigma^{1}\subseteq\Gamma_{0}\), which may possibly be empty. In particular the intersections \(\mathcal{N}\cap[v]\) and \(\Gamma_{0}\cap\mathcal{B}_{v}\) are almost everywhere transverse.
In what follows, we denote by \(\mathbb{R}[\Delta_{0}]\) (resp. \(\mathbb{R}(\Delta_{0})\)) the algebra of polynomials (resp. rational functions) on the span of \(\Delta_{0}\subseteq C_{0}^{*}\), by \(\mathbb{R}[\lambda]\) the algebra of polynomials in the real variable \(\lambda\). For a vector space \(\mathcal{E}\), the Lie algebra of its endomorphisms is denoted \(\mathfrak{gl}(\mathcal{E})\).
**Theorem 8**.: _There exists a polynomial \(\chi\in\mathbb{R}[\Delta_{0}]\otimes\mathbb{R}[\lambda]\), of degree \(\dim(\delta C_{1})\) in \(\lambda\in\mathbb{R}\), such that for all \(p\in\Gamma_{0}\):_
\[p\in\Sigma^{k}\quad\Leftrightarrow\quad\chi_{p}(\lambda)\text{ has root $1$ of multiplicity $k$} \tag{74}\]
We shall prove theorem 8 by computing the corank of linearized diffusion restricted to \(\delta C_{1}\).
Assume given a faithful flux functional \(\Phi:C_{0}\to C_{1}\), satisfying the axioms of definition 13, for instance \(\Phi_{GBP}\). The isothermal diffusion \(X_{\Phi}=\delta\Phi\) then only fixes \(\mathcal{N}\), defined by \(\Phi=0\), while evolution remains parallel to \(\delta C_{1}\). Linearizing \(X_{\Phi}\) in the neighbourhood of \(v\in\mathcal{N}\) thus yields an endomorphism \(\mathrm{T}_{v}X_{\Phi}\) on \(C_{0}\), of kernel \(\mathrm{T}_{v}\mathcal{N}\), which stabilizes \(\delta C_{1}\) by construction. The singular degree of \(v\) therefore computes the corank of \(\mathrm{T}_{v}X_{\Phi}\) restricted to boundaries,
\[\mathbf{cork}_{v}=\mathbf{cork}(\mathrm{T}_{v}X_{\Phi}|_{\delta C_{1}}). \tag{75}\]
By faithfulness of \(\Phi_{GBP}=-\mathcal{D}\circ\zeta\), one may explicitly compute \(\mathbf{cork}_{v}\) via minors of the sparse matrix
\[\mathrm{T}_{v}X_{GBP}=\delta\circ(\mathrm{T}_{\zeta v}\mathcal{D})\circ\zeta. \tag{76}\]
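In practice, (75) reduces to a rank computation once matrices for \(\mathrm{T}_{v}X_{\Phi}\) and \(\delta\) have been assembled, e.g. from (76). The minimal sketch below is generic numerical linear algebra under that assumption; the function name and the tolerance are ours, not part of the text.

```python
import numpy as np

# Generic numerical recipe for (75): the singular degree cork_v is the corank of
# T_v X_Phi restricted to the boundary space delta C_1. `X` is a matrix for
# T_v X_Phi acting on C_0 and `B` is any matrix whose columns span delta C_1;
# both are placeholders to be assembled from (76) in a concrete implementation.
def singular_degree(X: np.ndarray, B: np.ndarray, tol: float = 1e-10) -> int:
    dim_boundary = np.linalg.matrix_rank(B, tol=tol)     # dim(delta C_1)
    dim_image = np.linalg.matrix_rank(X @ B, tol=tol)    # dim of its image under T_v X_Phi
    return dim_boundary - dim_image                      # cork_v
```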
Letting \(p=\rho(\zeta v)\), theorem 8 will easily follow from lemma 21, stating that \(\mathbf{cork}_{p}=\mathbf{cork}_{v}\), and from proposition 22, which implies that \(\mathrm{T}_{v}X_{GBP}\) has rational function coefficients in \(p\).
**Lemma 21**.: _If \(p=\rho(\zeta v)\in\Gamma_{0}\) then \(\mathbf{cork}_{p}=\mathbf{cork}_{v}\)._
The proofs of lemma 21 and theorem 8 are delayed to the end of this subsection. Taking a closer look at the linearized structure of \(\mathrm{T}_{v}\mathcal{N}\) beforehand will yield an interesting description of singularities by conservation equations on 1-fields \(\phi\in C_{1}\) (proposition 23), which we shall use to give an explicit expression for \(\chi\) on binary graphs in the next subsection.
**Proposition 22**.: _For all \(V\in C_{0}\), the map \(\mathrm{T}_{V}\mathcal{D}:C_{0}\to C_{1}\) is expressed in terms of \(p=\rho(V)\) by_
\[[\mathrm{T}_{V}\mathcal{D}\cdot V^{\prime}]_{\mathrm{a}\to\mathrm{b}}(x_{ \mathrm{b}})=V^{\prime}_{\mathrm{b}}(x_{\mathrm{b}})-\mathbb{E}_{p_{\mathrm{a} }}[V^{\prime}_{\mathrm{a}}\mid x_{\mathrm{b}}] \tag{77}\]
_for all \(V^{\prime}\in C_{0}\), all \(\mathrm{a}\supset\mathrm{b}\) in \(K\) and all \(x_{\mathrm{b}}\in E_{\mathrm{b}}\)._
Proof.: This computation may be found in [9, prop. 4.14]. It consists in differentiating the conditional free energy term \(-\ln\sum_{y_{\mathrm{a}\parallel\mathrm{b}}=x_{\mathrm{b}}}e^{-V_{\mathrm{a}}(y_{\mathrm{a}})}\) with respect to \(V_{\mathrm{a}}\in\mathbb{R}^{E_{\mathrm{a}}}\).
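A quick numerical sanity check of (77) can be run on the smallest non-trivial example, a pair \(\mathrm{a}=\{0,1\}\supset\mathrm{b}=\{0\}\) of binary variables. The sketch below assumes, consistently with the proof just given, that the \(\mathrm{a}\to\mathrm{b}\) component of \(\mathcal{D}\) equals \(V_{\mathrm{b}}\) minus the conditional free energy of \(V_{\mathrm{a}}\); only this sign convention, chosen so that its derivative matches (77), is an assumption here.

```python
import numpy as np

# Finite-difference check of (77) for a = {0, 1}, b = {0}: the assumed component is
#   D(V)_{a->b}(x_0) = V_b(x_0) + log sum_{x_1} exp(-V_a(x_0, x_1)).
rng = np.random.default_rng(0)
V_a = rng.normal(size=(2, 2))    # V_a(x_0, x_1)
V_b = rng.normal(size=2)         # V_b(x_0)

def D_ab(V_a, V_b):
    return V_b + np.log(np.exp(-V_a).sum(axis=1))

p_a = np.exp(-V_a)               # Gibbs weights rho(V_a), normalization cancels below
dV_a = rng.normal(size=(2, 2))   # direction V'_a
dV_b = rng.normal(size=2)        # direction V'_b

cond = (p_a * dV_a).sum(axis=1) / p_a.sum(axis=1)   # E_{p_a}[V'_a | x_0]
predicted = dV_b - cond                              # right-hand side of (77)

eps = 1e-6
numeric = (D_ab(V_a + eps * dV_a, V_b + eps * dV_b)
           - D_ab(V_a - eps * dV_a, V_b - eps * dV_b)) / (2 * eps)
print(np.allclose(numeric, predicted, atol=1e-6))    # True
```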
Note that any \(p\in\Gamma_{0}\) defines a family of local metrics \((\eta_{p_{\mathrm{a}}})_{a\in K}\) such that \(\eta_{p_{\mathrm{a}}}(U_{\mathrm{a}},V_{\mathrm{a}}):=\mathbb{E}_{p_{\mathrm{a} }}[U_{\mathrm{a}}V_{\mathrm{a}}]\), consistent in the sense that the restriction of \(\eta_{p_{\mathrm{a}}}\) to \(\mathbb{R}^{E_{\mathrm{b}}}\subseteq\mathbb{R}^{E_{\mathrm{a}}}\) coincides with \(\eta_{p_{\mathrm{b}}}\) for all \(\mathrm{a}\supseteq\mathrm{b}\) by consistency of \(p\). The direct sum of the \(\eta_{p_{\mathrm{a}}}\) defines a scalar product \(\eta_{p}\) on \(C_{\bullet}\), which we denote by \(\langle-,-\rangle_{p}\).
Denote by \(\mathbb{E}_{p}^{\mathrm{a}\to\mathrm{b}}\) the orthogonal projection \(\mathbb{R}^{E_{\mathrm{a}}}\to\mathbb{R}^{E_{\mathrm{b}}}\) with respect to \(\eta_{p_{\mathrm{a}}}\), for all \(\mathrm{a}\supseteq\mathrm{b}\). This is the conditional expectation operator on observables, adjoint of the embeddings \(\mathbb{R}^{E_{\mathrm{b}}}\subseteq\mathbb{R}^{E_{\mathrm{a}}}\). Defining \(\nabla_{p}:C_{0}\to C_{1}\) by (77):
\[\nabla_{p}(V^{\prime})_{\mathrm{a}\to\mathrm{b}}=V^{\prime}_{\mathrm{b}}- \mathbb{E}_{p}^{\mathrm{a}\to\mathrm{b}}[V^{\prime}_{\mathrm{a}}], \tag{78}\]
propositions 15 and 22 imply that for all \(p=\rho(\zeta v)\in\Gamma_{0}\),
\[\mathrm{T}_{v}\mathcal{N}=\mathrm{Ker}(\nabla_{p}\circ\zeta)=\mathrm{Ker}( \mathrm{T}_{v}(\mathcal{D}\circ\zeta)). \tag{79}\]
The restriction of \(\eta_{p}\) to tangent fibers \(\mathrm{T}_{v}\mathcal{N}\) for \(p=\rho(\zeta v)\) moreover makes \(\mathcal{N}\) a Riemannian manifold.
It is worth mentioning that \(\nabla_{p}\) is the adjoint of \(\delta\) for the metric \(\eta_{p}\) on \(C_{\bullet}\), and therefore extends to a degree 1 differential on \(C_{\bullet}\)[9, prop 5.8]. This follows from adjunction of the projections \(\mathbb{E}_{p}^{\mathrm{a}\to\mathrm{b}}\) with the inclusions \(j_{\mathrm{a}\to\mathrm{b}}\), as identifying \(C_{\bullet}\) with its dual \(C_{\bullet}^{*}\) through \(\eta_{p}\), the operator \(\nabla_{p}\) then represents \(d\). However, note that \(\delta C_{1}\) is the orthogonal of \(\mathrm{Ker}(\nabla_{p})=\zeta(\mathrm{T}_{v}\mathcal{N})\) but not of \(\mathrm{Ker}(\nabla_{p}\circ\zeta)=\mathrm{T}_{v}\mathcal{N}\), which may intersect \(\delta C_{1}\).
**Proposition 23**.: _For all \(\phi\in C_{1}\) and all \(p\in\Gamma_{0}\), one has:_
\[\nabla_{p}\,\zeta(\delta\phi)_{\mathrm{a}\to\mathrm{b}} =\int_{dK^{\mathrm{b}}}\phi\,-\,\mathbb{E}_{p}^{\mathrm{a}\to\mathrm{b}} \bigg{[}\int_{dK^{\mathrm{a}}}\phi\bigg{]} \tag{80}\] \[=\int_{K^{\mathrm{a}}}\phi\,-\,\mathbb{E}_{p}^{\mathrm{a}\to\mathrm{b}} \bigg{[}\int_{K^{\mathrm{a}}_{\mathrm{a}}\to K^{\mathrm{a}}_{\mathrm{b}}}\phi \bigg{]}\]
Proof.: Substituting the Gauss formula (39) into (78) yields the first line. We may then partition the coboundary \(dK^{\mathrm{b}}\) by source as \(K^{\mathrm{O}}_{\mathrm{a}}\to K^{\mathrm{b}}=(K^{\mathrm{O}}_{\mathrm{a}}\sqcup K ^{\mathrm{a}}_{\mathrm{b}})\to K^{\mathrm{b}}\), and \(dK^{\mathrm{a}}\) by target as \(K^{\mathrm{O}}_{\mathrm{a}}\to K^{\mathrm{a}}=K^{\mathrm{O}}_{\mathrm{a}}\to(K^{ \mathrm{a}}_{\mathrm{b}}\sqcup K^{\mathrm{b}})\). Also note that \(\int_{dK^{\mathrm{b}}}\phi\in\mathbb{R}^{E_{\mathrm{b}}}\) is fixed by \(\mathbb{E}_{p}^{\mathrm{a}\to\mathrm{b}}\) to remove the redundant terms and obtain the second line (see definition 10 for notations : \(K^{\mathrm{O}}_{\mathrm{a}}\) here denotes the complement of \(K^{\mathrm{a}}\), and \(K^{\mathrm{a}}=K^{\mathrm{O}}_{\mathrm{a}}\to K^{\mathrm{b}}\)).
Proof of theorem 8.: For all \(V\in C_{0}\), coefficients of the linear map \(\mathrm{T}_{V}\mathcal{D}=\nabla_{p}\) are rational functions of \(p=\rho(V)\) in (77). The coefficients of \(\mathbb{E}_{p}^{\mathrm{a}\to\mathrm{b}}\) are indeed given according to the Bayes rule, for all \(\mathrm{a}\supseteq\mathrm{b}\) in \(K\), as
\[p_{\mathrm{a}}(x_{\mathrm{a}}|x_{\mathrm{b}})=\frac{p_{\mathrm{a}}(x_{\mathrm{a}})}{\sum_{y_{\mathrm{a}\parallel\mathrm{b}}=x_{\mathrm{b}}}p_{\mathrm{a}}(y_{\mathrm{a}})}\]
It is clear from (81) that the poles of \(Q(p)\) lie on the boundary of \(\Gamma_{0}\) as \(p_{\mathrm{a}}>0\) for all \(\mathrm{a}\in K\) inside \(\Gamma_{0}\); furthermore \(Q(p)\) does not depend on \(\lambda\).
The multiplicity of the root \(\lambda=1\) in \(\chi_{p}(\lambda)=\chi(p,\lambda)\) therefore computes the dimension of \(\mathrm{Ker}(L_{p}^{\prime})\), which is precisely \(\mathbf{cork}_{p}\) by (75). Lemma 21 finally implies that \(\chi_{p}(1)=0\) is a polynomial equation in \(p\) defining \(\tilde{\Sigma}^{1}=\bigcup_{k\geq 1}\Sigma^{k}\), the locus where \(\mathbf{cork}_{p}\geq 1\). One may compute \(\mathbf{cork}_{p}\) by evaluating derivatives \(\partial^{j}\chi/\partial\lambda^{j}\) at \(\lambda=1\) to recover the singular stratification.
Proof of lemma 21.: Let us stress that both points of view (\(v\in\mathcal{N}\) and \(p\in\Gamma_{0}\)) are meant to be identical, were it not for the action of additive constants. The Gibbs state map \(\rho\circ\zeta\) induces a quotient diffeomorphism \(\Delta_{0}\simeq C_{0}/R_{0}\), sends \(\mathcal{N}\) to \(\Gamma_{0}\) and \(v+\delta C_{1}\) to \(\mathcal{B}_{v}\) by definition.
Note that \(\mathcal{N}\cap R_{0}=\mu(\mathbb{R})\) is a supplement of \(\delta R_{1}\) (see appendix B). The existence of a terminal element (by \(\cap\)-closure assumption) indeed implies that \(\mu(1)\) sums to \(\sum_{\mathrm{b}}c_{\mathrm{b}}=1\) (corollary 5, see also appendix B) and that \(\mu(\mathbb{R})\) is not a coboundary of \(\delta C_{1}\) by theorem 3.
This implies that \(\delta R_{1}\subseteq R_{0}\) (acting on \([v]\) but trivially on \(\mathcal{B}_{v}\)) does not intersect \(\mathcal{N}\), and that the intersection of \(\delta C_{1}\) with \(\mathcal{N}\cap R_{0}\) reduces to zero. The intersections \(\mathrm{T}_{v}\mathcal{N}\cap\delta C_{1}\) and \(\mathrm{T}_{p}(\Gamma_{0}\cap\mathcal{B}_{v})\) must therefore have the same dimension.
### _Loopy Graphs_
In the case of graphs, we may give polynomial equations for the singular strata \(\Sigma^{k}\) explicitly. They are obtained as a loop series expansion by focusing on the action of diffusion on fluxes \(\varphi\in C_{1}\), via the remarkable Kirchhoff formula (83) below. The reader may find in [29, 30, 31, 32] very similar loop expansions for the Bethe-Kikuchi approximation error and the analysis of BP stability.
When \(K\subseteq\mathcal{P}(\Omega)\) is a graph, we simply write \(ij\in K\) for edges and \(i\in K\) for vertices (instead of \(\{i,j\}\) and \(\{i\}\)). Our \(\cap\)-closure assumption usually implies that \(\varnothing\in K\) and the nerve of \(K\) is then a simplicial set of dimension 2. However, as \(N_{2}K\) only consists of chains \(ij\to i\to\varnothing\) and \(E_{\varnothing}\) is a point (unit for \(\times\)), \(C_{2}\) is only spanned by additive constants and coincides with \(R_{2}\).
We denote by \(K^{\prime}=K\smallsetminus\{\varnothing\}\) the associated graph in a more usual sense, whose nerve \(N_{\bullet}K^{\prime}\) is of dimension 1. The notation \(i\frown j\) will indicate that \(i\) is a neighbour of \(j\) in \(K\) and \(K^{\prime}\), whenever \(ij\in K\).
**Proposition 24** (Kirchhoff formula).: _Given \(p\in\Gamma_{0}\), denote by \(\mathcal{Z}_{1}^{(p)}\subseteq C_{1}\) the subspace defined by \(\mathbb{E}_{p_{\mathrm{b}}}[\phi_{\mathrm{a}\to\mathrm{b}}]=0\) for all \(\mathrm{a}\to\mathrm{b}\in N_{1}K\) and_
\[\phi_{jk\to k}=\mathbb{E}_{p}^{jk\to k}\bigg{[}\sum_{i\frown j}\phi_{ij\to j} \bigg{]} \tag{83}\]
_for all \(jk\to k\in N_{1}K\). Then \(\mathbf{cork}_{p}=\dim\mathcal{Z}_{1}^{(p)}\)._
Proof.: First assume that \(\phi\in C_{1}\) is orthogonal to \(R_{1}\) for \(\eta_{p}\). Letting \((1_{\mathrm{a}\to\mathrm{b}})\) denote the canonical generators of \(R_{1}\), we then have \(\mathbb{E}_{p_{\mathrm{b}}}[\phi_{\mathrm{a}\to\mathrm{b}}]=0=\eta_{p}(\phi,1_{\mathrm{a}\to\mathrm{b}})\) for all \(\mathrm{a}\to\mathrm{b}\) in \(N_{1}K\), and in particular \(\phi_{\mathrm{a}\to\varnothing}=0\) for all \(\mathrm{a}\in K\). Let \(C_{1}^{\prime}\) denote the space of such fields orthogonal to \(R_{1}\).
Assume now in addition that \(\delta\phi\in\mathrm{Ker}(\nabla_{p}\circ\zeta)\). It then follows from proposition 23 that for all \(jk\to k\in N_{1}K\),
\[\int_{K_{(k)}^{(\mathrm{a})}\to K^{(k)}}\phi=\mathbb{E}_{p}^{jk\to k}\bigg{[} \int_{K_{(k)}^{(\mathrm{a})}\to K_{(k)}^{(\mathrm{a})}}\phi\bigg{]} \tag{84}\]
Figure 6: Level curves of the Bethe free energy \(\tilde{F}\) across a cuspidal singularity (red dot), for increasing values of the external magnetic field (left to right). On each plot, the horizontal axis \(U\in C_{0}\) represents variations in inverse temperature (i.e. a parameter), while the vertical axis \(V\in\zeta(\delta C_{1})\) represents a 1D-fiber of equivalent energies (i.e. an optimization variable). Convexity in the \(V\)-axis is lost when temperature drops, as two additional critical points stem from the singularity (right of the dashed line). Also notice that \(\tilde{F}\) acquires a sharp step when the magnetic field increases.
Equation (83) is equivalent to cancelling the r.h.s. of (80) in its second form when \(\phi\in C_{1}^{\prime}\), as all the \(\phi_{\mathbf{a}\to\varnothing}\) vanish. Its l.h.s. reduces to \(\phi_{jk\to k}\) while the r.h.s. sums inbound fluxes \(\phi_{ij\to j}\) over source edges \(ij\not\subseteq jk\), containing the target \(j\subseteq jk,j\not\subseteq k\) (brackets in (84) are used to avoid ambiguous interpretation of notations in definition 10).
We showed that (83) describes \(\operatorname{Ker}(\nabla_{p}\circ\zeta\circ\delta)\) under the assumption that \(\phi\perp R_{1}\). Recalling that \(\operatorname{\mathbf{cork}}_{p}\) is the dimension of \(\operatorname{Ker}(\nabla_{p}\circ\zeta)\cap\delta C_{1}\), we may compute \(\operatorname{\mathbf{cork}}_{p}\) as the corank of the restriction of \(\nabla_{p}\circ\zeta\circ\delta\) to a supplement of \(\operatorname{Ker}(\delta)\). Now as \(\delta=\nabla_{p}^{*}\) for the metric \(\eta_{p}\), the subspace \(\nabla_{p}\operatorname{C}_{0}\subseteq C_{1}\) is such a supplement.
Let us show that \(\nabla_{p}\operatorname{C}_{0}\) contains \(C_{1}^{\prime}\). Given \(\phi\in C_{1}^{\prime}\), the orthogonal projections of \(\phi_{ij\to j}\in\mathbb{R}^{E_{j}}\) and \(\phi_{ij\to i}\in\mathbb{R}^{E_{i}}\) onto \(\mathbb{R}^{E_{j}}\cap\mathbb{R}^{E_{i}}=\mathbb{R}^{E_{\varnothing}}=\mathbb{R}\) vanish for every edge \(ij\in K\) by the assumption \(\phi\perp R_{1}\). We may thus choose \(V_{ij}\) in \(\mathbb{R}^{E_{ij}}\simeq\mathbb{R}^{E_{i}}\otimes\mathbb{R}^{E_{j}}\supset \mathbb{R}^{E_{i}}+\mathbb{R}^{E_{j}}\) that projects onto \(-\phi_{ij\to i}\in\mathbb{R}^{E_{i}}\) and \(-\phi_{ij\to j}\in\mathbb{R}^{E_{j}}\). Letting \(V_{i}=0\) for every vertex \(i\in K\) and \(V_{\varnothing}=0\), we obtain \(V\in\operatorname{C}_{0}\) such that \(\nabla_{p}V=\phi\), as
\[\nabla_{p}(V)_{ij\to j}=0-\mathbb{E}_{p}^{ij\to j}[V_{ij}]=\phi_{ij\to j} \tag{85}\]
for all \(ij\to j\in N_{1}K\), and \(\nabla_{p}V_{\mathbf{a}\to\varnothing}=0\) for all \(\mathbf{a}\in K\). As \(C_{1}^{\prime}\subseteq\nabla_{p}\operatorname{C}_{0}\) consists of cocycles, \(\operatorname{\mathbf{cork}}_{p}\) is greater than or equal to the corank of \(\nabla_{p}\circ\zeta\circ\delta\) restricted to \(C_{1}^{\prime}\).
The subspace \(R_{1}^{\prime}=R_{1}\cap\nabla_{p}\operatorname{C}_{0}\) contains the remaining cocyclic degrees of freedom as
\[\nabla_{p}\operatorname{C}_{0}=\operatorname{C}_{1}^{\prime}\operatorname{ \stackrel{{\perp}}{{\oplus}}}R_{1}^{\prime}. \tag{86}\]
However, as we showed in the proof of lemma 21, the subspace \(\delta R_{1}^{\prime}\subseteq\delta R_{1}\) does not intersect \(\operatorname{Ker}(\nabla_{p}\circ\zeta)\), while it is stable under diffusion. Therefore \(R_{1}^{\prime}\) does not contribute to \(\operatorname{\mathbf{cork}}_{p}\) and
\[\operatorname{\mathbf{cork}}_{p}=\operatorname{\mathbf{cork}}(\nabla_{p}\circ \zeta\circ\delta_{|C_{1}^{\prime}}). \tag{87}\]
The Kirchhoff formula (83) will allow us to relate the emergence of singularities to the topology of \(K\). Just like steady electric currents cannot flow across open circuits, it is clear that (83) will not admit any non-trivial solutions when \(K\) is a tree. However, unlike electric currents, the zero-mean constraint excludes scalar fluxes, fixed by conditional expectation operators. Multiple loops will thus need to collaborate for non-trivial solutions to appear.
Fixing \(p\in\Gamma_{0}\), denote by \(C_{1}^{\prime}\subseteq C_{1}\) the orthogonal of local constants for \(\eta_{p}\) as above. Choosing a configuration \(o_{i}\in E_{i}\) and letting \(E_{i}^{*}=E_{i}\smallsetminus\{o_{i}\}\) for each vertex \(i\), one has an isomorphism:
\[C_{1}^{\prime}\simeq\prod_{ij\to j\in N_{1}K^{\prime}}\mathbb{R}^{E_{j}^{*}} \tag{88}\]
Let us also denote by \(\mathbf{E}_{p}:C_{1}^{\prime}\to C_{1}^{\prime}\) the _edge propagator_
\[\mathbf{E}_{p}(\phi)_{jk\to k}=\mathbb{E}_{p}^{jk\to k}\bigg{[}\sum_{i\sim j }\phi_{ij\to j}\bigg{]}, \tag{89}\]
so that (83) is the eigenvalue equation \(\phi=\mathbf{E}_{p}(\phi)\). One may recover \(\operatorname{\mathbf{cork}}_{p}\) in the characteristic polynomial of \(\mathbf{E}_{p}\), which we compute explicitly for binary variables.
**Definition 25**.: _Define a directed graph structure \(\mathcal{G}\) on \(N_{1}K^{\prime}\), by including all edges of the form \((ij\to j)\triangleright(jk\to k)\) for \(i\neq k\)._
The edges of \(\mathcal{G}\) describe all non-vanishing coefficients of the matrix \(\mathbf{E}_{p}\). However note that coefficients of \(\mathbf{E}_{p}\) are indexed by lifts of an edge \((ij\to j)\triangleright(jk\to k)\) to a pair of configurations \((x_{j},x_{k})\in E_{j}^{*}\times E_{k}^{*}\) in general.
Let us now restrict to binary variables for simplicity, so that the edges of \(\mathcal{G}\) are in bijection with the non-vanishing coefficients of \(\mathbf{E}_{p}\in\mathbb{R}(\Delta_{0})\otimes\mathfrak{gl}(C_{1}^{\prime})\). The coefficient of \(\mathbf{E}_{p}\) attached to an edge \((ij\to j)\triangleright(jk\to k)\) actually does not depend on \(i\), as it consists in projecting observables on \(j\) to observables on \(k\) with the metric induced by \(p_{jk}\) by (89). It may thus be denoted \(\eta_{jk}(p)\) for now. An explicit form will be given by (96) below, which is symmetric in \(j\) and \(k\).
**Definition 26**.: _Denote by \(\mathfrak{S}^{k}\mathcal{G}\subset\mathfrak{S}(N_{1}K^{\prime})\) the set of permutations of \(N_{1}K^{\prime}\) with exactly \(m-k\) fixed points, where \(m=|N_{1}K^{\prime}|\), that are compatible with \(\mathcal{G}\). Any \(\gamma\in\mathfrak{S}^{k}\mathcal{G}\) decomposes as a product of \(l(\gamma)\) disjoint cycles._
**Theorem 9**.: _Assume \(x_{i}\in E_{i}\) is a binary variable for all \(i\in K\). Then \(p\in\tilde{\Sigma}^{s}\) if and only if \((\lambda-1)^{s}\) divides the polynomial_
\[\chi_{p}(\lambda)=\sum_{k=0}^{m}\lambda^{m-k}\sum_{\gamma\in\mathfrak{S}^{k}\mathcal{G}}(-1)^{l(\gamma)}\operatorname{\boldsymbol{\Lambda}}_{p}[\gamma], \tag{90}\]
_where \(\operatorname{\boldsymbol{\Lambda}}_{p}[\gamma]\) is the product of coefficients of \(\mathbf{E}_{p}\) across \(\gamma\)_
\[\operatorname{\boldsymbol{\Lambda}}_{p}[\gamma]=\prod_{(ij\to j)\triangleright(jk\to k)}^{\gamma}\eta_{jk}(p), \tag{91}\]
_and where \(\eta_{jk}(p)\) can be chosen as (96) below, in an orthonormal system of coordinates for \(\eta_{p}\)._
Note that factorizing \(\gamma\) as \(\gamma_{1}\ldots\gamma_{l(\gamma)}\), one has:
\[\operatorname{\boldsymbol{\Lambda}}_{p}[\gamma]=\prod_{s=1}^{l(\gamma)} \operatorname{\boldsymbol{\Lambda}}_{p}[\gamma_{s}]. \tag{92}\]
We may call \(\operatorname{\boldsymbol{\Lambda}}_{p}[\gamma_{s}]\) the _loop eigenvalue_ of \(\gamma_{s}\).
Proof of theorem 9.: Proposition 24 implies that \(\operatorname{\mathbf{cork}}_{p}\) is the multiplicity of \(1\) as eigenvalue of \(\mathbf{E}_{p}\), in other words the dimension of \(\operatorname{Ker}(1-\mathbf{E}_{p})\subseteq C_{1}^{\prime}\). Let us show that \(\chi_{p}(\lambda)\) does compute the characteristic polynomial of \(\mathbf{E}_{p}\).
Consider a matrix \(M:\mathbb{R}^{m}\to\mathbb{R}^{m}\), whose diagonal coefficients all vanish, and write \(\mathfrak{S}_{m}^{k}\subseteq\mathfrak{S}_{m}\) for the set of permutations having exactly \(m-k\) fixed points. Using the Leibniz formula, one gets for \(\chi_{M}(\lambda)=\det{(\lambda-M)}\):
\[\chi_{M}(\lambda)=\sum_{k=0}^{m}\lambda^{m-k}\sum_{\sigma\in\mathfrak{S}_{m}^{k}}\varepsilon(\sigma)(-1)^{k}\prod_{e\notin\operatorname{Fix}(\sigma)}M_{e,\sigma(e)} \tag{93}\]
A length-\(k\) cycle \(\sigma\in\mathfrak{S}_{m}^{k}\) has signature \(\varepsilon(\sigma)=(-1)^{k-1}\). By multiplicativity of \(\varepsilon(\sigma)\), it follows that for every product of disjoint cycles \(\sigma=\sigma_{1}\ldots\sigma_{l}\in\mathfrak{S}_{m}^{k}\),
\[(-1)^{k}\varepsilon(\sigma)=(-1)^{l} \tag{94}\]
Therefore (90) computes the characteristic polynomial of \(\mathbf{E}_{p}\), whose diagonal coefficients do vanish.
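This identity is easy to confirm numerically on a small random matrix with vanishing diagonal. The brute-force sketch below, which enumerates all permutations, is only meant as a sanity check of (93) and of the sign bookkeeping (94); it is not part of the algorithm.

```python
import numpy as np
from itertools import permutations

# Numerical check of (93) on a small random matrix with zero diagonal.
rng = np.random.default_rng(1)
m = 5
M = rng.normal(size=(m, m))
np.fill_diagonal(M, 0.0)

lam = 0.7
total = 0.0
for sigma in permutations(range(m)):
    moved = [e for e in range(m) if sigma[e] != e]
    k = len(moved)
    sign = np.linalg.det(np.eye(m)[list(sigma)])     # signature of the permutation
    total += lam ** (m - k) * sign * (-1) ** k * np.prod([M[e, sigma[e]] for e in moved])

print(np.isclose(total, np.linalg.det(lam * np.eye(m) - M)))   # True
```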
Let us express the \(\eta_{jk}(p)\) in an orthonormal system of coordinates, convenient for the symmetry in \(j\) and \(k\) it induces. The price to pay is that \(\eta_{jk}(p)\) becomes a rational function in \(\sqrt{p}\) and not in \(p\), but this choice greatly simplifies computations. We write \(x_{i}\in\{\pm\}\) with \(p_{i}^{\pm}=p_{i}(\pm)\) and \(p_{ij}^{\pm\pm}=p_{ij}(\pm\pm)\), etc.
For \(\phi\in C_{1}^{\prime}\), each flux term \(\phi_{ij\to j}\) may be constrained to a 1-dimensional subspace \(\mathbb{R}u_{j}\) of \(\mathbb{R}^{E_{j}}\simeq\mathbb{R}^{2}\), chosen so that \(\mathbb{E}_{p_{j}}[u_{j}]=0\) and \(\mathbb{E}_{p_{j}}[u_{j}^{2}]=1\). In the \((+,-)\) coordinates, we thus define \(u_{j}\) as:
\[u_{j}=\frac{1}{\sqrt{p_{j}^{+}p_{j}^{-}}}\cdot\begin{pmatrix}p_{j}^{-}\\ -p_{j}^{+}\end{pmatrix} \tag{95}\]
Because \(\mathbb{E}_{p}^{jk\to k}[u_{j}]\) has zero mean too, it must lie in \(\mathbb{R}u_{k}\). Hence, \(\mathbb{E}_{p}^{jk\to k}[u_{j}]=\eta_{jk}(p)\cdot u_{k}\) where \(\eta_{jk}(p)\) is found as:
\[\eta_{jk}(p)=\frac{p_{jk}^{++}p_{jk}^{--}-p_{jk}^{+-}p_{jk}^{-+}}{\sqrt{p_{j}^{+}p_{j}^{-}\,p_{k}^{+}p_{k}^{-}}} \tag{96}\]
The scaling factor \(\eta_{jk}(p)\) is symmetric in \(j\) and \(k\) as \(\mathbb{E}_{p}^{jk\to k}\) is a (self-adjoint) projector in \(\mathbb{R}^{E_{jk}}\simeq\mathbb{R}^{4}\). The reader may check (96) by performing the \(2\times 2\) matrix-vector product \(p_{kj}\cdot u_{j}\) and dividing the resulting vector by \(p_{k}\), according to the Bayes rule, then substituting for \(p_{j}\) and \(p_{k}\) the marginals of \(p_{jk}\) when necessary. Again, we stress that the image of \(u_{j}\) must lie in \(\mathbb{R}u_{k}\) by the zero-mean constraint \(u_{j},u_{k}\perp\mathbb{R}\).
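The same check can also be scripted in a few lines. The sketch below draws a random joint law on two binary variables and verifies that \(\mathbb{E}_{p}^{jk\to k}[u_{j}]=\eta_{jk}(p)\,u_{k}\), with \(u_{j}\), \(u_{k}\) as in (95) and \(\eta_{jk}\) as in (96); index 0 stands for \(+\) and index 1 for \(-\).

```python
import numpy as np

rng = np.random.default_rng(2)
p_jk = rng.random((2, 2)); p_jk /= p_jk.sum()     # rows: x_j in (+,-), cols: x_k in (+,-)
p_j, p_k = p_jk.sum(axis=1), p_jk.sum(axis=0)

u_j = np.array([p_j[1], -p_j[0]]) / np.sqrt(p_j[0] * p_j[1])   # definition (95)
u_k = np.array([p_k[1], -p_k[0]]) / np.sqrt(p_k[0] * p_k[1])

# Conditional expectation E[u_j | x_k] = sum_{x_j} p(x_j | x_k) u_j(x_j)
cond = (p_jk * u_j[:, None]).sum(axis=0) / p_k

eta = ((p_jk[0, 0] * p_jk[1, 1] - p_jk[0, 1] * p_jk[1, 0])
       / np.sqrt(p_j[0] * p_j[1] * p_k[0] * p_k[1]))           # formula (96)
print(np.allclose(cond, eta * u_k))                            # True
```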
Figure 7: Computation of \(\chi_{p}(1)\) on the dumbbell and theta graphs. The listed monomials sum to 0 when \(p\in\Sigma^{1}\) is singular. Every edge \(jk\) in the base graph \(K^{\prime}\) gives rise to a pair \(jk\to k\) and \(jk\to j\) of vertices in \(\mathcal{G}\). This explains why every edge of \(K^{\prime}\) may be walked through in both directions, although U-turns are not permitted as \(jk\to j\) and \(jk\to k\) are not adjacent in \(\mathcal{G}\). The precise topology of \(K^{\prime}\) (including its vertices) does not really matter here, as loop eigenvalues \(\mathbf{\Lambda}_{p}[\gamma]\) only compute products of coefficients across cycles.
Letting \(g_{j}=\sqrt{p_{j}^{+}p_{j}^{-}}\) denote the geometric mean of \(p_{j}\) and letting \(\tilde{u}_{j}=g_{j}u_{j}\) for all \(j\):
\[\mathbb{E}_{p}^{jk\to k}[\tilde{u}_{j}]=\frac{g_{j}}{g_{k}}\eta_{jk}(p)\tilde{u} _{k}=\tilde{\eta}_{jk}(p)\tilde{u}_{k} \tag{97}\]
leads to coefficients \(\tilde{\eta}_{jk}(p)\) that are no longer symmetric, but are rational functions of \(p\). Note that the products of the \(\eta_{jk}(p)\) and \(\tilde{\eta}_{jk}(p)\) across a cycle \(\gamma\in\mathfrak{S}^{k}\mathcal{G}\) do coincide.
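To make the loop expansion concrete, the following sketch assembles the edge propagator (89) for binary variables on a triangle, using the coefficients (96) and the adjacency rule of definition 25 (no U-turns). The pairwise marginals are taken from a global Ising-type law, so that they are automatically consistent; the couplings are illustrative. The quantity \(\chi_{p}(1)=\det(1-\mathbf{E}_{p})\) then vanishes exactly when \(p\) is singular, and is generically non-zero.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
edges = [(0, 1), (1, 2), (2, 0)]
J = {e: rng.normal() for e in edges}            # illustrative pairwise couplings

# Global distribution on x in {+1, -1}^3 and its exact marginals (8 states).
states = list(product([1, -1], repeat=3))
w = np.array([np.exp(sum(J[e] * x[e[0]] * x[e[1]] for e in edges)) for x in states])
p = w / w.sum()

def marg(vertices):
    out = {}
    for x, pr in zip(states, p):
        key = tuple(x[v] for v in vertices)
        out[key] = out.get(key, 0.0) + pr
    return out

def eta(j, k):
    # Symmetric coefficient (96) attached to the edge {j, k}.
    pjk, pj, pk = marg((j, k)), marg((j,)), marg((k,))
    det = pjk[(1, 1)] * pjk[(-1, -1)] - pjk[(1, -1)] * pjk[(-1, 1)]
    return det / np.sqrt(pj[(1,)] * pj[(-1,)] * pk[(1,)] * pk[(-1,)])

# Arrows ij -> j of N_1 K', encoded as (i, j): edge {i, j}, pointing towards j.
arrows = [(i, j) for (i, j) in edges] + [(j, i) for (i, j) in edges]
E = np.zeros((len(arrows), len(arrows)))
for col, (i, j) in enumerate(arrows):           # incoming flux phi_{ij -> j}
    for row, (jj, k) in enumerate(arrows):      # outgoing component (jk -> k)
        if jj == j and i != k:                  # (ij -> j) |> (jk -> k), no U-turn
            E[row, col] = eta(j, k)

print(np.linalg.det(np.eye(len(arrows)) - E))   # chi_p(1): zero only at singular p
```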
## VII Conclusion
Belief propagation algorithms and their relationship to Bethe-Kikuchi optimization problems were the cornerstone motivation for this work. We produced the most comprehensive picture of this vast subject that was in our power, although loop series expansions [30, 29, 31, 32] surely would have deserved more attention.
The rich structure carried by the complex \((C_{\bullet},\delta,\zeta)\) was a surprise to uncover, in a field usually dominated by statistics. We hope the length of this article will be seen as an effort to motivate the unfriendly ways of Möbius inversion formulas and homological algebra, and to demonstrate their reach and expressivity in a localized theory of statistical systems. This intimate relationship between combinatorics and algebraic topology (culminating in theorems 3 and 6) has not been described to our knowledge, although both subjects are classical and well covered individually.
From a practical perspective, we showed that belief propagation algorithms are time-step 1 Euler integrators of continuous-time diffusion equations on \((C_{\bullet},\delta,\zeta)\). We propose to call these ODEs _belief diffusions_, as "diffusion" suggests (i) a conservation equation of the form \(\dot{v}=\delta\Phi(v)\), while "belief" recalls (ii) that their purpose is to converge on consistent pseudo-marginals, critical for Bethe-Kikuchi information functionals. The GBP algorithm offered a spatial compromise between precision and complexity in the choice of the hypergraph \(K\subseteq\mathcal{P}(\Omega)\); our diffusion equations offer a temporal compromise between runtime and stability. They thus allow for a wider range of initial conditions and applications.
Belief diffusions, in their isothermal and adiabatic form, solve localized versions of the max-entropy (A) and free energy principles (B and C). The associated Bethe-Kikuchi functionals have a piecewise constant number of critical points, whose discontinuities are located on the projection of \(\mathcal{S}_{1}\subseteq\mathcal{N}\) onto the quotient space of parameters \(C_{0}/\delta C_{1}\). A stationary point in \(\mathcal{N}\) crossing \(\mathcal{S}_{1}\) will become unstable and be forced onto a different sheet of the intersection with homology classes. This would appear as a discontinuous jump in the convex polytope \(\Gamma_{0}\), happening anytime a consistent belief \(p\in\Gamma_{0}\) crosses the singular space \(\tilde{\Sigma}_{1}\).
## Acknowledgements
This work has benefited from the support of the AI Chair EXPEKCTATION (ANR-19-CHIA-0005-01) of the French National Research Agency (ANR).
I am very grateful to Yael Fregier, Pierre Marquis and Frederic Koriche for their support in Lens, and to Daniel Bennequin, Gregoire Sergeant-Perthuis and Juan-Pablo Vigneaux for fostering my interest in graphical models.
|
2306.01217 | Generative AI for Product Design: Getting the Right Design and the
Design Right | Generative AI (GenAI) models excel in their ability to recognize patterns in
existing data and generate new and unexpected content. Recent advances have
motivated applications of GenAI tools (e.g., Stable Diffusion, ChatGPT) to
professional practice across industries, including product design. While these
generative capabilities may seem enticing on the surface, certain barriers
limit their practical application for real-world use in industry settings. In
this position paper, we articulate and situate these barriers within two phases
of the product design process, namely "getting the right design" and "getting
the design right," and propose a research agenda to stimulate discussions
around opportunities for realizing the full potential of GenAI tools in product
design. | Matthew K. Hong, Shabnam Hakimi, Yan-Ying Chen, Heishiro Toyoda, Charlene Wu, Matt Klenk | 2023-06-02T00:48:50Z | http://arxiv.org/abs/2306.01217v1 | # Generative AI for Product Design: Getting the Right Design and the Design Right
###### Abstract.
Generative AI (GenAI) models excel in their ability to recognize patterns in existing data and generate new and unexpected content. Recent advances have motivated applications of GenAI tools (e.g., Stable Diffusion, ChatGPT) to professional practice across industries, including product design. While these generative capabilities may seem enticing on the surface, certain barriers limit their practical application for real-world use in industry settings. In this position paper, we articulate and situate these barriers within two phases of the product design process, namely _getting the right design_ and _getting the design right_, and propose a research agenda to stimulate discussions around opportunities for realizing the full potential of GenAI tools in product design.
generative ai, large language models, product design
## 2. Getting the right design
### High-fidelity design or inspiration?
The ease of which GenAI tools generate photorealistic images, while efficient, also creates new problems for design. Moving to high-fidelity so quickly could actually have a negative impact on designer creativity. For example, showing a high-fidelity solution so early in the process may induce _design fixation_(Garon et al., 2016), the process in which designers continue to offer variations on an existing design without considering alternative solutions. The HCI and design disciplines have emphasized the gradual evolution of designs from low to high fidelity with evaluation interspersed throughout iterative steps in the design cycle. The additional time spent moving from low-fidelity to high-fidelity may result in important learnings that may lead to creative breakthroughs.
This makes a compelling case for treating GenAI as a tool for inspiration, rather than outright design, in this early stage of the design process. For instance, many product designers draw inspiration from community-driven visual content curation websites such as Pinterest and Behance, in order to generate mood boards that help steer their creative process. However, because these websites are optimized to attract user engagement, the displayed content may prematurely constrain the boundaries from which designers draw their inspiration. GenAI can fill this gap by providing computational means of generating alternative sources of inspiration (Garon et al., 2016).
### Increasing idea diversity
While many generative AI tools and services already focus on providing augmentations and variations of human-created visual content, they often create variations on visual style rather than on the ideas presented in the image. Navigating the design space requires out-of-the-box thinking, which comes from the process of elaborating on 'meaningfully distinct' ideas. These ideas, however, are susceptible to, and often influenced by, the designers' own intuition, experience and biases about the topic.
To this end, future development of GenAI tools should consider computational means for increasing idea diversity by offering ideas that are visually and semantically distinct from each other, and create appropriate mechanisms for users to control the desired level of diversity to prevent significant deviations from each idea. Idea diversity can also come from systematic explorations of existing designs across product categories(Dong et al., 2019) and domains(Dong et al., 2019). To make GenAI tools useful for product design, we should explore opportunities to integrate our knowledge of conceptual mappings in natural language-that are sensitive to different social and cultural contexts-as well as human-made products that embody some knowledge of the problem and design space.
### Prompt engineering challenge
Text-to-image translation systems offer designers the ability to translate envisioned concepts into photo-realistic design artifacts in just a matter of seconds. However, adopting prompt engineering-based design in business practice is challenging because of the cognitively difficult task of translating designers' visual concepts into text expressions. This translation involves clearly articulating intended meanings and remembering specific design ontology (e.g., surrealism, extreme close-up shot) that is recognized by the generative model. This problem leads to text prompts that result in image depictions that are inconsistent with the designer's intended visual concept, or vice versa, thus adding significant time to iteratively refine the prompt until the desired outcome is achieved(Dong et al., 2019). In one example, using DALL-E 2's GenAI system, it took Nestle more than 1,000 text prompts, followed by human evaluation of each, to arrive at its final product placement advertisement for its yogurt product by re-rendering Johannes Vermeer's painting "The Milkmaid"(Dong et al., 2019).
By interactively prompting users to specify and correct these details (either through language or imagery), GenAI systems can aid the user in iteratively improving the scene in fewer input-output loops. Only then can designers begin to consider adding variations while retaining control over specific design elements.
## 3. Getting the design right
While the responsibility of _getting the right design_ falls mostly on the designer, _getting the design right_ requires a concerted effort among design, engineering, marketing, and other invested stakeholders. Once a design problem is defined, the product team must make continuous iterative refinements to an idea to narrow down the design space. Doing so requires making decisions on concepts to align with the goals set forth by the product team. These goals are informed by the team's analysis of engineering requirements, rigorous usability testing, and measurements of consumer reactions to the product, among other inputs. Here, we focus on capturing consumer preferences, a challenging prospect that adds significant cost burden and uncertainty in _getting the design right_.
### Maintaining design goals
As a company brings a new product to market, it will learn that there may be trade-offs between different functional requirements. For example, consumers may want a sturdy artifact that is also light and inexpensive. Throughout the product development cycle, engineers will make decisions that navigate these trade-offs, typically without strong links to the information used in the requirements engineering process. Providing this information to decision-makers in their workflow has the potential to significantly improve the resulting design while reducing lead time to bring new products to market. An important challenge here is how to represent the consumer's desires, which begin as under-specified requirements, for later refinement and communication.
Consumers do not care solely about the functional capabilities of the product. Aesthetic considerations drive many purchase decisions. As the design moves from concept to production, many engineering decisions will impact the aesthetic of the resulting artifact. By maintaining the designer's views of the consumer's preferences, these engineering changes can be evaluated for their aesthetic impacts without consulting expensive design panels. The larger challenge lies in representing the tension between functional and aesthetic requirements and teaching GenAI models to generate results that consider the combination of these constraints.
2305.18216 | Towards minimizing efforts for Morphing Attacks -- Deep embeddings for
morphing pair selection and improved Morphing Attack Detection | Face Morphing Attacks pose a threat to the security of identity documents,
especially with respect to a subsequent access control process, because it
enables both individuals involved to exploit the same document. In this study,
face embeddings serve two purposes: pre-selecting images for large-scale
Morphing Attack generation and detecting potential Morphing Attacks. We build
upon previous embedding studies in both use cases using the MagFace model. For
the first objective, we employ an pre-selection algorithm that pairs
individuals based on face embedding similarity. We quantify the attack
potential of differently morphed face images to compare the usability of
pre-selection in automatically generating numerous successful Morphing Attacks.
Regarding the second objective, we compare embeddings from two state-of-the-art
face recognition systems in terms of their ability to detect Morphing Attacks.
Our findings demonstrate that ArcFace and MagFace provide valuable face
embeddings for image pre-selection. Both open-source and COTS face recognition
systems are susceptible to generated attacks, particularly when pre-selection
is based on embeddings rather than random pairing which was only constrained by
soft biometrics. More accurate face recognition systems exhibit greater
vulnerability to attacks, with COTS systems being the most susceptible.
Additionally, MagFace embeddings serve as a robust alternative for detecting
morphed face images compared to the previously used ArcFace embeddings. The
results endorse the advantages of face embeddings in more effective image
pre-selection for face morphing and accurate detection of morphed face images.
This is supported by extensive analysis of various designed attacks. The
MagFace model proves to be a powerful alternative to the commonly used ArcFace
model for both objectives, pre-selection and attack detection. | Roman Kessler, Kiran Raja, Juan Tapia, Christoph Busch | 2023-05-29T17:00:40Z | http://arxiv.org/abs/2305.18216v2 | Towards minimizing efforts for Morphing Attacks - Deep embeddings for morphing pair selection and improved Morphing Attack Detection
## Abstract
Face Morphing Attacks pose a threat to the security of identity documents, especially with respect to a subsequent access control process, because it enables both individuals involved to exploit the same document. In this study, face embeddings serve two purposes: pre-selecting images for large-scale Morphing Attack generation and detecting potential Morphing Attacks. We build upon previous embedding studies in both use cases using the MagFace model. For the first objective, we employ an pre-selection algorithm that pairs individuals based on face embedding similarity. We quantify the attack potential of differently morphed face images to compare the usability of pre-selection in automatically generating numerous successful Morphing Attacks. Regarding the second objective, we compare embeddings from two state-of-the-art face recognition systems in terms of their ability to detect Morphing Attacks. Our findings demonstrate that ArcFace and MagFace provide valuable face embeddings for image pre-selection. Both open-source and COTS face recognition systems are susceptible to generated attacks, particularly when pre-selection is based on embeddings rather than random pairing which was only constrained by soft biometrics. More accurate face recognition systems exhibit greater vulnerability to attacks, with COTS systems being the most susceptible. Additionally, MagFace embeddings serve as a robust alternative for detecting morphed face images compared to the previously used ArcFace embeddings. The results endorse the advantages of face embeddings in more effective image pre-selection for face morphing and accurate detection of morphed face images. This is supported by extensive analysis of various designed attacks. The MagFace model proves to be a powerful alternative to the commonly used ArcFace model for both objectives, pre-selection and attack detection.
## Introduction
Automated face recognition plays an integral role in access control, criminal investigation, and surveillance settings [1]. In particular, for automated border control, the observation and analysis of facial characteristics is becoming increasingly important for identity verification [2, 3]. For example, to assist immigration officers at borders or airports, automated Facial Recognition Systems (FRS) can increase traveler throughput and reduce costs.
In a typical identity verification process, a biometric reference image, i.e., a passport photograph of a subject, is compared to one or multiple probe images, i.e., trusted live
photographs captured at the border. A similarity score is then calculated between the reference and probe images, and the subject is allowed to cross the border if the similarity score exceeds a predetermined threshold.
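The decision rule just described can be summarized in a few lines. The sketch below assumes embeddings have already been extracted by some FRS and uses cosine similarity with an illustrative threshold; the actual comparator, the threshold value, and whether several probes are fused or tested individually all depend on the deployed system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(reference: np.ndarray, probes: list[np.ndarray], tau: float = 0.5) -> bool:
    # Accept the subject if a trusted live probe is similar enough to the
    # (possibly manipulated) reference read from the passport.
    return any(cosine_similarity(reference, probe) >= tau for probe in probes)
```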
The operation of such an automated FRS must be secure and robust. However, so-called Morphing Attacks can compromise the security of FRSs [4, 5]. In a Morphing Attack, an attacker combines the face images of two or more subjects in order to form a morphed face image (see e.g., Fig 1). This morphed face image is presented as a (manipulated) reference to the FRS as it is stored and read from the passport on request. Since the calculated similarity score between the morphed face reference image and one or more bona fide probe images should be high enough to exceed some predetermined decision threshold \(\tau\) of the FRS, each attacker's identity is falsely verified. As a result, two or more individuals may use the same passport to cross the border, and the unique link between a passport and an individual is broken.
Real-world Morphing Attack cases have already been reported (e.g., [7]). High-ranking governmental bodies, such as EU DG HOME and the Ministries of Interior of the G7 states, have now taken action to address the Morphing Attack Detection topic.
Therefore, Morphing Attack Detection (MAD) algorithms have been proposed in recent years to detect such attacks [8]. Several MAD algorithms are based on machine learning and therefore require a large amount of data for training (e.g., [9, 10]). However, generating such a large data source with high quality morphs is often challenged by the need for manual post-processing to reduce image artifacts [4, 11]. It is therefore important to develop criteria that allow an informed but automated selection of two (or more) individuals suitable for producing a high quality morph image [12] without relying heavily on manual intervention. These criteria can then be used to find a large number of possible pairs of suitable source images from which morphs can be automatically generated, and a database of morphed images can be created for future research on MAD.
Previous research has shown that an adequate pre-selection of possible morph pairs has two effects: (i) the choice of the applied morphing algorithm becomes less relevant [12], and (ii) the amount of artifacts produced by an automated morphing algorithm is reduced, making an FRS more vulnerable to the Morphing Attack [12]. A
Fig 1: **Illustration of morphed face images created using different morphing approaches.** The images on the left and on the right represent the corresponding two bona fide face images.
_Face images are republished from [6] under a CC BY license, with permission from Prof. Karl Ricanek Jr, University of North Carolina at Wilmington, original copyright 2006._
large database of morphed images not only allows for better training and testing of MAD algorithms. It also allows for statistical analysis of the performance of FRSs, and may ultimately lead to a better understanding of the image properties that predict the success of a Morphing Attack. We claim that our analysis will contribute to the creation of large-scale training data sets to make MAD approaches more robust.
We further note that manual image pre-selection relies on some heuristic criteria as employed in previous works [11, 13, 14]. For instance, soft biometric characteristics have been used to morph only subjects of similar age, same gender, or same ethnicity [11, 13, 14]. In a complementary direction, other characteristics such as the shape of the hair, skin tone, differences in landmark position, and Euclidean distance between _face embeddings_ extracted from the OpenFace model [15] have shown positive effects on the attack potential of a morph [12].
Deep learning-based FRSs provide feature embeddings, which are low-dimensional representations of high-dimensional face images [16]. In the context of face recognition, these feature embeddings, termed _face embeddings_, are point representations in latent space learned during the training of a face recognition neural network [17]. Computing a simple distance in latent space between two face embeddings, such as the cosine distance, can be effective in quantifying the similarity of two faces [18]. Motivated by the superior performance of models that use such embeddings for face recognition (e.g., [15, 19, 20]), we hypothesize that feature embeddings from deeply learnt face models can provide rich enough data to automate image pre-selection for morphing simply by analyzing the embeddings.
We take advantage of the power of embeddings in determining similarity by presenting them as auxiliary data for image pre-selection in morphing. The general assumption in our work is that a small distance between the face embeddings of two subjects corresponds to a high similarity (structural and perceptual) of the facial features of the two subjects. Thus, selecting pairs of face images based on high similarity scores between them can help generate more realistic morphs compared to selecting two face images that do not look particularly similar. Automating the pair selection process (i.e., pre-selection) makes it tractable, reproducible, scalable, and less subjective than manual approaches.
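A minimal sketch of this automated pre-selection is given below: it ranks all candidate pairs by the cosine similarity of their face embeddings and keeps the top-scoring ones as morphing pairs. The embeddings are assumed to be precomputed, and any soft-biometric filter (same gender, similar age) would simply restrict the candidate pairs before ranking; the function and parameter names are illustrative.

```python
import numpy as np
from itertools import combinations

def preselect_pairs(embeddings: dict[str, np.ndarray], n_pairs: int) -> list[tuple[str, str]]:
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    # Score every candidate pair by embedding similarity and keep the most similar ones.
    scored = [((s1, s2), cos(e1, e2))
              for (s1, e1), (s2, e2) in combinations(embeddings.items(), 2)]
    scored.sort(key=lambda item: item[1], reverse=True)
    return [pair for pair, _ in scored[:n_pairs]]
```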
An attacker could also use embeddings by comparing a number of candidate image embeddings to find a suitable morphing partner, for example in a database of possible accomplices. This could improve the success of an attack by allowing more quantifiable parameters to be used in deciding which morphing partner to choose, rather than just soft biometrics and subjective facial similarity.
From a theoretical standpoint, it is obvious that pre-selection based on face similarity can increase the attack potential of resulting morphs [11, 21], and previous research has demonstrated the increased attack potential of pre-selected morphs [12] using OpenFace [15] embeddings. However, a detailed analysis of morphing pairs pre-selected on embeddings of different state-of-the-art FRSs such as ArcFace and MagFace is still lacking. Insights on the suitability of these contemporary models for image pre-selection are however crucial to guide future attempts to create large-scale databases of morphed face images, which are especially important for the research context of MAD.
We evaluate the pre-selections by quantifying the attack potential of the created morphs on different FRSs. In addition to standard metrics, we also use a recently introduced method, the Morphing Attack Potential (MAP) [22]. MAP compensates for some drawbacks of previous metrics. For example, the MMPMR [4] tends to represent the upper bound of attack potential, since an attack is considered successful with only one (of several) bona fide images positively verified against the morphed reference. On the other hand, the FMMPMR [23] represents the lower limit of the attack potential, since an attack here is considered a success only in case of exclusively positive verification of
all bona fide face images. However, the bona fide face images in a real-world attack are often very similar among themselves, as they are all captured in short time intervals just before the verification process. In contrast, the face images used in the present study were taken with large time intervals between them, and are therefore much more heterogeneous, so FMMPMR is not a pertinent measure of attack potential. MAP, on the other hand, tests attack potential across several different FRSs and thus provides a more generalizable picture of the actual attack potential. After all, a real attacker often does not know which system is being used for verification. Moreover, different attacks are launched on each system, which, if the bona fide face images are as heterogeneous as in the present study, also gives this measure greater robustness.
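For reference, the two bounds discussed above can be computed from raw comparison scores as sketched below, assuming that for every morph a score array is available with one row per contributing subject and one column per probe attempt. The MMPMR follows the MinMax definition of [4] (every subject needs at least one accepted probe), while the FMMPMR requires every attempt of every subject to be accepted; the function names are ours.

```python
import numpy as np

# scores: one array per morph, shaped (n_subjects, n_probe_attempts)
def mmpmr(scores: list[np.ndarray], tau: float) -> float:
    return float(np.mean([np.all(np.max(S, axis=1) > tau) for S in scores]))

def fmmpmr(scores: list[np.ndarray], tau: float) -> float:
    return float(np.mean([np.all(S > tau) for S in scores]))
```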
We further make use of the learnt embeddings by using a particular category of losses, under which the magnitude of the learnt feature embedding can measure the quality of the given face image. We use these embeddings to create a robust MAD algorithm in a Differential-MAD (D-MAD) setting. Our D-MAD algorithm is further based on the idea that the magnitude of the feature embedding is highly correlated with face quality and monotonically increasing when the embeddings are obtained from FRSs with adaptively learnt intra-class and inter-class feature distributions [20].
For this D-MAD algorithm, we deliberately build upon the concept of a previously published D-MAD algorithm, which used ArcFace embeddings [24]. However, we take advantage of the better face recognition performance of MagFace [20] and therefore train a D-MAD classifier on the differential face embeddings of this FRS, while building an identical D-MAD algorithm using ArcFace embeddings for comparison.
The present study makes the following contributions:
* We first examine face embeddings produced by several well-known FRSs for automated image pre-selection to produce morphed face images. We demonstrate that a large data set of morphed face images can be easily constructed by analyzing the distances between the embeddings. We empirically validate the effectiveness of our developed selection criteria by systematically studying the susceptibility of deep learning-based FRSs and two commercial-off-the-shelf (COTS) FRSs on our generated database.
* We generalize our proposed pre-selection approach for morph generation across different morphing algorithms and validate it on several FRSs using multiple attack attempts. In particular, we compute the recently proposed Morphing Attack Potential (MAP) metric on the resulting data set of morphed images, which provides a more generalizable and robust measure of attack potential. Our experiments show that pre-selection can produce better morphs and can compromise FRSs to a high degree, regardless of which particular embeddings were used in the pre-selection. To support our arguments, we validate our pre-selection approach against a control data set consisting of randomly paired face images.
* Furthermore, motivated by the limited performance of MAD algorithms in detecting Morphing Attacks generated by our pipeline, we present a newly designed MAD algorithm. Our new MAD algorithm is again based on face embeddings and improves the detection capability over state-of-the-art algorithms.
In the rest of the paper, we first present our proposed approach for morph pair pre-selection and provide details about the data sets and models used to provide embeddings, morph face images, and validate the resulting morphs. We then illustrate the results of the Morphing Attacks by showing how the attacks generated by our pipeline are able to fool FRSs. Finally, we construct and benchmark a new MAD algorithm on this data set and discuss our results.
## Methods
### Proposed approach for morph pair pre-selection
Our proposed approach consists of generic deep learning-based FRSs to extract embeddings, followed by a similarity-based pair-selection module. The selected pairs were then provided to different face morphing algorithms. The generated data set is thereupon used to study the vulnerability of FRSs and to develop a MAD algorithm. Fig 2 presents an illustration of our proposed approach for the convenience of the reader.
#### Embeddings from Face Recognition Systems
In our proposed architecture, different state-of-the-art implementations of FRSs were used to extract face embeddings for image pre-selection. Based on the results reported in recent work, we selected four different architectures to obtain the embeddings in our pre-selection pipeline. We chose ArcFace [25], VGG-Face [26], DeepFace [27], and MagFace [20]. For ArcFace, VGG-Face, and DeepFace (Facebook), Tensorflow implementations of the respective models were used, which were included in the software distribution of the LightFace repository [28]. For MagFace, the official repository was used [20]. Each of these FRSs provides an embedding vector representing a face image. The vector differs in length depending on which FRS was used, as shown in Table 1. For the sake of completeness of the experiments in this paper, we also used the same set of FRSs to verify the resulting morphed faces, in addition to two COTS FRSs.
#### Pre-selection algorithm
Our proposed pre-selection criterion is based on a measure of similarity of embeddings which typically consist of rich identity preserving information [29]. Given two equally sized embedding vectors, we employed Cosine distance (Eq 1) to determine the similarity between the underlying faces. For a pair of embeddings, corresponding to two face images, the Cosine distance [29] can be defined as:
Fig 2: **General workflow of our proposed pipeline for image pre-selection.** Embeddings were extracted from one sample of each subject. Distances between embeddings were calculated. Faces were paired based on a low distance between embeddings. Pairs were then morphed, and morphed images were verified against bona fide probe images. Furthermore, Morphing Attack Detection has been conducted. The image pre-selection steps are further illustrated in Algorithm 1. The processing steps were performed using different FRSs and different morphing algorithms.
_Face images are republished from [6] under a CC BY license, with permission from Prof. Karl Ricanek Jr, University of North Carolina at Wilmington, original copyright 2006._
\[d_{cos}(E_{1},E_{2})=1-\frac{E_{1}^{T}E_{2}}{||E_{1}||\cdot||E_{2}||} \tag{1}\]
with \(E_{1}\) and \(E_{2}\) the \(D\)-dimensional embedding vectors of the images.
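For illustration, the Cosine distance of Eq 1 can be computed directly from two embedding vectors; the following Python sketch is purely illustrative and not the exact code used in the study.

```python
import numpy as np

def cosine_distance(e1: np.ndarray, e2: np.ndarray) -> float:
    """Cosine distance between two face embeddings (Eq 1)."""
    return 1.0 - float(np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2)))

# Example with two random 512-dimensional vectors (the ArcFace/MagFace embedding size);
# real embeddings would be extracted from face images by the respective FRS.
rng = np.random.default_rng(0)
e1, e2 = rng.normal(size=512), rng.normal(size=512)
print(cosine_distance(e1, e2))
```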
For a given data subject and a chosen FRS, we computed the Cosine distances between the subject and the remaining \(N\) subjects in the chosen database. The procedure was repeated \(N\) times for each subject in the database. We further enforced a demographic consistency check: gender and ethnicity had to correspond between the individuals of a potential pair, and we allowed a maximum age difference of 5 years between them. The labels provided with the face image data set were used for these checks (see below). Based on the computed similarity score matrix across all subjects, we retained the upper diagonal of the score matrix owing to the symmetric nature of the Cosine distance (i.e., \(d_{cos}(E_{1},E_{2})=d_{cos}(E_{2},E_{1})\)). The face images of unique subjects fulfilling the criteria were then chosen for morphing. The details of our pre-selection criteria are summarized in Algorithm 1. It should be noted that, under this procedure, each data subject was used for the creation of at most one morph pair. However, each data subject could be used in all runs of the algorithm, i.e., whenever the algorithm was applied to the embeddings of another FRS.
Pair selection according to this algorithm was carried out using four different FRSs. The algorithm was reapplied to create a baseline comparison data set without taking into account the similarity between the embeddings. For the baseline data set, only the face images of subjects with matching demographics were randomly morphed.
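To make the described selection criteria concrete, the following Python sketch shows one possible greedy realization of the pre-selection (distance matrix, demographic consistency check, and unique pairing). It is an illustrative assumption and may differ in detail from Algorithm 1; all function and variable names are hypothetical.

```python
import numpy as np

def preselect_pairs(embeddings, genders, ethnicities, ages, max_age_diff=5):
    """Greedy morph-pair pre-selection: demographically consistent subjects are paired
    in order of increasing Cosine distance, and each subject is used at most once."""
    E = np.asarray(embeddings, dtype=float)
    E = E / np.linalg.norm(E, axis=1, keepdims=True)
    dist = 1.0 - E @ E.T                               # pairwise Cosine distances
    n = len(E)
    candidates = []
    for i in range(n):
        for j in range(i + 1, n):                      # upper triangle only (symmetry)
            if (genders[i] == genders[j]
                    and ethnicities[i] == ethnicities[j]
                    and abs(ages[i] - ages[j]) <= max_age_diff):
                candidates.append((dist[i, j], i, j))
    candidates.sort()                                  # most similar pairs first
    used, pairs = set(), []
    for d, i, j in candidates:
        if i not in used and j not in used:
            pairs.append((i, j, d))
            used.update((i, j))
    return pairs
```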
### Creation of the morphed face data set
We validate our proposed approach for image pre-selection using the academic version of the UNCW-MORPH face data set distributed by the face ageing group of the University of North Carolina Wilmington (UNCW) [6, 30]. Contrary to what its name suggests, UNCW-MORPH contains bona fide face images only. It comprises over \(55,000\) face images of more than \(13,000\) data subjects, captured between 2003 and 2007. The facial images were captured in frontal poses with largely neutral expressions, making the data set suitable for face morphing. The face images had resolutions between 200 px \(\times\) 240 px and 400 px \(\times\) 480 px, with each image labeled with exact age, gender, and ethnicity.
In order to employ the data for our experiments, we conducted a curation process with a number of pre-processing steps. First, all samples were checked for neutral facial expressions, and any images not conforming to neutral expressions were eliminated. We specifically used an emotion detection model from the LightFace package [28] to verify neutral expressions. All samples for which _neutral_ was not the emotion with the highest probability, or for which the emotion model failed, were discarded from morphing.
Since we needed multiple samples to study the susceptibility of FRSs to the resulting morphs, any subject with fewer than five samples was discarded from further analysis. For the remaining subjects, the first sample (in chronological order) was used for
\begin{table}
\begin{tabular}{l r} \hline FRS & Embedding dimensionality \\ \hline ArcFace & 512 \\ DeepFace & 4096 \\ VGG-Face & 2622 \\ MagFace & 512 \\ \hline \end{tabular}
\end{table}
Table 1: **The dimensionality of the embedding vector provided by each FRS.**
morphing, while the remaining samples were used for validation. Of the \(55,134\) samples from \(13,618\) subjects in the raw data set, \(22,992\) samples from \(3,337\) subjects remained in the data set.
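The curation steps described above can be summarized by the following illustrative Python sketch. The helper `estimate_emotions` and the sample layout are hypothetical; the helper stands in for the LightFace emotion model, and the sketch is not the exact pipeline used in the study.

```python
from collections import defaultdict

def curate_dataset(samples, estimate_emotions):
    """samples: list of dicts with keys 'subject', 'path', and 'date' (hypothetical layout).
    estimate_emotions(path) -> dict of emotion probabilities (stand-in for the emotion model)."""
    neutral = []
    for s in samples:
        try:
            probs = estimate_emotions(s["path"])
        except Exception:
            continue                                    # emotion model failed -> discard
        if max(probs, key=probs.get) == "neutral":      # keep neutral expressions only
            neutral.append(s)
    per_subject = defaultdict(list)
    for s in neutral:
        per_subject[s["subject"]].append(s)
    morph_refs, probes = {}, {}
    for subj, items in per_subject.items():
        if len(items) < 5:                              # at least five samples required
            continue
        items.sort(key=lambda s: s["date"])
        morph_refs[subj] = items[0]                     # chronologically first -> morphing
        probes[subj] = items[1:]                        # remaining samples -> validation
    return morph_refs, probes
```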
### Morphing algorithms
To perform the morphing, we chose four different morphing algorithms. Three of them were landmark-based (Alyssaq morpher [31], NTNU morpher [13, 14], & UBO morpher [32, 33, 34, 5]). In these landmark-based algorithms, morphing was based on averaging the landmark coordinates of the two morph candidate images. The \(68\) face landmarks were extracted using the dlib library [35], with an ensemble of regression trees used to estimate the coordinates. As a fourth morphing algorithm, a deep learning-based approach, Identity Prior driven Generative Adversarial Networks (MIPGAN) [36, 37], was used. Unlike the landmark-based morphers, MIPGAN generates morphed images from the latent space representations of the two samples. A morphing factor (alpha) of \(0.5\) was used for all morphing algorithms. No image pre-processing or post-processing was done other than the steps included in the respective morphing packages. However, rescaling and cropping steps were performed for the face recognition steps in the vulnerability analysis (see below). Fig 1 illustrates exemplary morphed images created by all four approaches.
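As an illustration of the landmark-based principle only, the following Python sketch extracts and averages the 68 dlib landmarks of two faces with a morphing factor of 0.5 and then naively cross-dissolves the images. A real landmark morpher additionally warps both images to the averaged landmarks (e.g., via Delaunay triangulation) before blending; the landmark model path is an assumption, and the sketch is not one of the morphers used in the study.

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# The landmark model file is an assumption; any 68-point dlib predictor can be used.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def landmarks68(img_bgr: np.ndarray) -> np.ndarray:
    """Return the 68 dlib face landmarks of the first detected face."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    rect = detector(gray, 1)[0]
    shape = predictor(gray, rect)
    return np.array([[shape.part(i).x, shape.part(i).y] for i in range(68)], dtype=float)

def naive_morph(img_a: np.ndarray, img_b: np.ndarray, alpha: float = 0.5):
    """Average the landmarks of two equally sized faces and cross-dissolve the images.
    Real landmark morphers warp both images to the averaged landmarks before blending;
    that warping step is omitted here for brevity."""
    avg_landmarks = alpha * landmarks68(img_a) + (1.0 - alpha) * landmarks68(img_b)
    blended = cv2.addWeighted(img_a, alpha, img_b, 1.0 - alpha, 0.0)
    return blended, avg_landmarks
```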
### Properties of the morphed images
Morphing was then performed based on several criteria, namely the similarity of the facial embedding vectors and demographic consistency. Since several different FRSs were used for the pre-selection algorithm, the pairings naturally differed between the approaches used. Furthermore, there were differences in the absolute number of pairs found. This is because the demographic check eliminates many candidate pairs whose demographic attributes differ too much. Thus, we obtained 452 pairs when pre-selecting with ArcFace, 511 for DeepFace, 632 for VGG-Face, and 639 for MagFace. When random pairing was performed, 819 pairs were found.
### Vulnerability analysis
We investigated the vulnerability of various FRSs to morphs generated by our proposed architecture. Using a subset of bona fide images from the UNCW database and the morphed images generated by our architecture, we investigated the vulnerability of four different open-source FRSs and two COTS systems. While the open-source FRSs illustrate the applicability of the proposed approach, the evaluation of COTS systems indicates the relevance of such Morphing Attacks for security considerations in operational scenarios.
#### Calibration of the decision thresholds for verification
Since each of the open-source FRSs operates with a unique decision threshold, we first determined their respective thresholds specifically on the UNCW data set. We calibrated these thresholds for the four open-source FRSs used for face verification (the same FRSs as those used for pre-selection). To determine the respective thresholds, a subset of 500 data subjects was sampled and all possible one-to-one mated comparison scores were obtained using each FRS. Similarly, all possible non-mated comparison scores were computed. As the total number of possible non-mated comparisons far exceeded the number of possible mated comparison scores, a uniform sample of the non-mated comparisons was drawn to obtain an equal number.
Detection Error Trade-off (DET) curves were calculated for each FRS using the respective mated and non-mated distributions. The decision thresholds \(\tau\) for False Match Rates (FMRs) of 0.1% were empirically determined for each FRS [38] based on the FRONTEX recommendation [39]. The thresholds, along with the corresponding False Non-Match Rates (FNMRs), are shown in Table 2.
For face verification, we further deployed two COTS FRSs. For these, a default threshold was used to achieve an FMR of 0.1% as recommended by the respective COTS vendors.
\begin{table}
\begin{tabular}{c|c c} \hline \hline FRS & threshold @ FMR= 0.1\% & FNMR @ FMR= 0.1\% \\ \hline ArcFace & 0.498 & 0.051 \\ DeepFace & 0.125 & 0.784 \\ VGG-Face & 0.146 & 0.318 \\ MagFace & 0.666 & 0.004 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **The verification thresholds on the unnormalized Cosine distances for each open-source FRS.** Thresholds were calculated on the UNCW data set. The corresponding FNMRs are illustrated next to the thresholds as decimal fractions.
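The calibration of the decision thresholds can be illustrated as follows: given sampled mated and non-mated distance scores, the threshold is placed such that the desired fraction of non-mated comparisons is (falsely) accepted. This Python sketch is a simplified stand-in for the DET-based procedure described above.

```python
import numpy as np

def calibrate_threshold(mated_dist, nonmated_dist, target_fmr=0.001):
    """Distance-based verification: a comparison is accepted if its distance is below tau.
    tau is chosen such that the fraction of non-mated comparisons falling below it
    (i.e., false matches) equals the target FMR (here 0.1%)."""
    tau = float(np.quantile(np.asarray(nonmated_dist, dtype=float), target_fmr))
    fnmr = float(np.mean(np.asarray(mated_dist, dtype=float) >= tau))  # mated pairs rejected at tau
    return tau, fnmr
```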
#### Vulnerability analysis metrics
After determining the thresholds, we analyzed the newly created morphed images for their attack potential using three different metrics, namely the Product Average Mated Morph Presentation Match Rate (prodAvgMMPMR) (Eq 3), the Relative Morph Match Rate (RMMR) (Eq 4), and the Morphing Attack Potential (MAP). While prodAvgMMPMR and RMMR give the attack potential with respect to a single FRS, MAP gives the attack potential of the newly created data set across multiple FRSs. In this study, all rates are reported as decimal fractions and are therefore distributed in the interval \([0;1]\).
**MMPMR.** The Mated Morph Presentation Match Rate (MMPMR) [4] is defined for distance scores (Eq 2),
\[MMPMR=\frac{1}{M}\sum_{m=1}^{M}\{(\min_{n=1,\ldots,N_{m}}D_{m}^{n})<\tau\} \tag{2}\]
with \(M\) being the total number of morphed images, \(D_{m}^{n}\) the mated morph comparison score (here: distance score) of subject \(n\) at morph \(m\), \(N_{m}\) the total number of subjects contributing to morph \(m\), and \(\tau\) the decision threshold.
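For illustration, Eq 2 can be implemented as stated, counting a morph as successful when the minimum mated-morph distance over the contributing subjects falls below the threshold; the data structure and example values below are hypothetical.

```python
import numpy as np

def mmpmr(distances_per_morph, tau):
    """Eq 2: distances_per_morph[m] is the list of mated-morph distance scores D_m^n,
    one per subject n contributing to morph m. A morph counts as successful when
    the minimum of these distances falls below the decision threshold tau."""
    return float(np.mean([min(d) < tau for d in distances_per_morph]))

# Hypothetical example: two morphs with two contributing subjects each.
print(mmpmr([[0.3, 0.6], [0.7, 0.8]], tau=0.5))  # 0.5 (only the first morph succeeds)
```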
**prodAvgMMPMR.** The Product Average Mated Morph Presentation Match Rate (prodAvgMMPMR) [11] is a variant of MMPMR that allows a more probabilistic interpretation of the success of Morphing Attacks (Eq 3),
\[prodAvgMMPMR=\frac{1}{M}\sum_{m=1}^{M}[\prod_{n=1}^{N_{m}}(\frac{1}{I_{m}^{n} }\cdot\sum_{i=1}^{I_{m}^{n}}\{D_{m}^{n,i}<\tau\})] \tag{3}\]
in which, in addition to the above, \(I_{m}^{n}\) is the number of samples of subject \(n\) within morph \(m\), and \(D_{m}^{n,i}\) the mated morph comparison score of sample \(i\) of subject \(n\) at morph \(m\).
An example: one morphed image was evaluated, and two data subjects contributed to the morph with one image each. Three bona fide samples per subject were tested against the morph. For one data subject, \(\frac{2}{3}\) of the distance scores fell below the threshold \(\tau\) (i.e., were successful verifications). For the other data subject, \(\frac{3}{3}\) of the distance scores fell below the threshold \(\tau\). The prodAvgMMPMR was then simply the product of \(\frac{2}{3}\) and \(\frac{3}{3}\), therefore \(\frac{2}{3}\).
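The following illustrative Python sketch implements Eq 3 and reproduces the worked example above (acceptance fractions of 2/3 and 3/3 yield a prodAvgMMPMR of 2/3); the score values are made up for demonstration.

```python
import numpy as np

def prod_avg_mmpmr(scores, tau):
    """Eq 3: scores[m][n] is the list of mated-morph distance scores of all I_m^n
    probe samples of subject n against morph m."""
    per_morph = []
    for morph in scores:
        prod = 1.0
        for subject_scores in morph:
            prod *= float(np.mean([d < tau for d in subject_scores]))  # fraction of accepted probes
        per_morph.append(prod)
    return float(np.mean(per_morph))

# Reproduces the worked example: one morph, two subjects, three probes each.
tau = 0.5
example = [[[0.3, 0.4, 0.7], [0.1, 0.2, 0.3]]]
print(prod_avg_mmpmr(example, tau))  # 0.666...
```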
**RMMR.** The Relative Morph Match Rate (RMMR) metric [11], on the other hand, takes the FNMR of a biometric system into account. Different biometric systems, calibrated at a particular FMR, can have different FNMRs. For instance, the FNMRs of the open-source FRSs varied greatly after calibration of the decision thresholds (see Table 2). If the FNMR is high, the system is less suited for operation in a particular scenario, e.g., access control. Consequently, it might produce a low MMPMR or prodAvgMMPMR, and therefore appear less vulnerable to Morphing Attacks, but at the same time reject a large proportion of mated verification attempts. Therefore, the RMMR relates the MMPMR to the FNMR (Eq 4).
\[RMMR=MMPMR+FNMR \tag{4}\]
MMPMR and FNMR (and therefore RMMR) are specific to the chosen decision threshold \(\tau\). Thus, if the MMPMR is high (i.e., the morphs fool the FRS at \(\tau\)) and the FRS at the same time performs well by having a low FNMR, the RMMR levels off around 1. On the other hand, if both the potential of the attack is low (low MMPMR) and the FRS performs poorly by having a high FNMR, the RMMR would still level off at around 1. Most interestingly, if the potential of the attack is poor (i.e., low MMPMR) and the FRS performs well by having a low FNMR, the RMMR would be around 0. For the sake of completeness: if the attack is of high quality (high MMPMR) and the FRS performs poorly (high FNMR), the RMMR could theoretically level off at 2. However, that would require the morphed comparison distances to be smaller than the mated comparison distances.
**MAP.** Recently, the Morphing Attack Potential (MAP) has been proposed to report the attack potential of a data set \(D\) of morphed images in a combined manner across different FRSs [22]. All FRSs (in our case 6 different systems) verified the same number of different bona fide images (e.g., 4) of each subject against the respective morph. \(MAP_{4,6}^{D}\) then represents the \(4\times 6\) matrix, where the element \((i,j)\) indicates the decimal fraction of morphed images for which at least \(i\) verification attempts were successful with respect to both contributing subjects and at least \(j\) FRSs (Fig 3). As outlined earlier, MAP values are characterized by higher generalizability and robustness compared to many other metrics (cf. Introduction).
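For illustration, a MAP matrix can be populated as follows, assuming that for every morph, FRS, and contributing subject the number of successful probe verifications has already been counted; the data structure is an assumption of this sketch and not part of the reference implementation of [22].

```python
import numpy as np

def map_matrix(successes, n_attempts, n_frs):
    """successes[m][f][n] is the number of successful probe verifications of subject n
    against morph m on FRS f (between 0 and n_attempts). A morph 'reaches' level i on
    an FRS when *both* contributing subjects achieve at least i successful attempts;
    MAP[i-1, j-1] is the fraction of morphs reaching level i on at least j FRSs."""
    n_morphs = len(successes)
    map_mat = np.zeros((n_attempts, n_frs))
    for i in range(1, n_attempts + 1):
        for j in range(1, n_frs + 1):
            hits = 0
            for m in range(n_morphs):
                frs_reached = sum(1 for f in range(n_frs) if min(successes[m][f]) >= i)
                if frs_reached >= j:
                    hits += 1
            map_mat[i - 1, j - 1] = hits / n_morphs
    return map_mat
```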
### Morphing Attack Detection
In the previous parts, we described the methodology for morphing and for evaluating the vulnerability of FRSs. In addition, the study at hand uses embeddings to detect Morphing Attacks. In particular, we draw inspiration from Differential Image Morphing Attack Detection (D-MAD) approaches, which compare a presented image with a trusted bona fide image to evaluate the nature of the presented image. We used a D-MAD approach proposed by Scherhag et al. [24] as our baseline. The chosen D-MAD approach performs a differential analysis of ArcFace embedding vectors to train a binary Support Vector Machine (SVM) classifier with a radial basis function kernel and otherwise default parameters as implemented in sklearn (v. 0.24.2). More specifically, the ArcFace embeddings were extracted from the suspicious images (to be analyzed). ArcFace embeddings were further extracted from bona fide probe images of one of the participating morph candidates. These bona fide images are comparable to trusted live
Fig 3: **Morphing Attack Potential (MAP).** The MAP is a matrix describing the success of a data set of morphed images to fool a set of FRSs using multiple attack attempts. Several FRSs (x-axis) are attacked with several mated Morphing Attack attempts (y-axis). The element of a MAP matrix describes the proportion of successful verifications of both attackers (i.e., both contributing subjects of each morph) at a given number of attempts (i.e., number of different bona fide images for both subjects) and with a particular number of fooled FRSs. Note that MAP was calculated as a decimal fraction within the range \([0;1]\).
captures of an attacker. The procedure is outlined in Fig 4. The two embedding vectors are then subtracted from each other. The resulting difference vectors of length 512 represent the samples of morphed (differential) images. As samples of bona fide (differential) images, the same procedure was applied by subtracting the embeddings of two different bona fide captures of the same data subject. The resulting difference vectors were scaled to follow a standard Normal distribution with \(\mu=0\) and \(\sigma=1\) and were then handed over as features to the SVM.
While we found decent performance of the previously proposed D-MAD approach, we note that using the embeddings of MagFace instead of ArcFace could raise the detection performance to a new level. The loss function of MagFace is designed in such a way that it not only arranges the samples of a class (a subject) close together in the multidimensional space. It is further designed so that samples with higher quality, or samples for which the certainty of class membership is high, lie closer to the center of the class [20]. Thus, the distances in the embedding space between two samples of the same class, which are of high quality, or, conversely, are certain to belong together, are very small. On the other hand, the distance in the multidimensional space between two samples is quite large when the membership estimate of one of the samples is less accurate due to low image quality. The magnitude of MagFace's embedding vector increases monotonically with image quality. This results in a larger difference between the two embedding vectors when the quality of a face image is low. Using MagFace instead of ArcFace embeddings could thus not only benefit from the reported superior performance of MagFace in face recognition. It could also combine the strengths of an embedding-based D-MAD approach such as that of Scherhag et al. [24] with approaches based on image quality analysis, such as the approach of Venkatesh et al. [40].
In the present study, the procedure closely followed the approach described by Scherhag et al. [24] using ArcFace embeddings. The same approach was then repeated in an analogous fashion using MagFace embeddings (Fig 4).
Fig 4: **D-MAD pipeline.** ArcFace or MagFace embeddings were extracted from bona fide images and morphed images. Differential embeddings have been created by subtraction of either the embeddings of a bona fide image from a morphed image or by the subtraction of a bona fide image from a different bona fide image of the same data subject. The differential vectors have been re-scaled to \(N(0,1)\). A classifier was trained (on ArcFace and MagFace differential embeddings, separately) to differentiate between bona fide images and morphed images.
_Face images are republished from [6] under a CC BY license, with permission from Prof. Karl Ricanek Jr, University of North Carolina at Wilmington, original copyright 2006._
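The core of this D-MAD pipeline can be sketched with standard scikit-learn components as follows. This is an illustrative reconstruction rather than the authors' exact code; in the study, the same construction was instantiated once with ArcFace and once with MagFace embeddings.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def differential_features(suspect_emb, trusted_emb):
    """D-MAD features: embedding of the suspicious (possibly morphed) image minus the
    embedding of a trusted bona fide probe of the claimed identity."""
    return np.asarray(suspect_emb) - np.asarray(trusted_emb)

def train_dmad(diff_morph, diff_bonafide):
    """diff_morph / diff_bonafide: arrays of differential embeddings (e.g., 512-dim)
    built from (morph - bona fide) and (bona fide - bona fide) pairs, respectively."""
    X = np.vstack([diff_morph, diff_bonafide])
    y = np.concatenate([np.ones(len(diff_morph)), np.zeros(len(diff_bonafide))])
    # Standard scaling (zero mean, unit variance) followed by an RBF-kernel SVM.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    return clf.fit(X, y)
```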
#### Training protocol
Only a subset of 80% of the generated control data set was used for training. This control data set consisted of morphed images without pre-selection based on embeddings, but face images were randomly morphed after demographic consistency checks (see above). For training, morphed images from all morphing algorithms were used together. Thus, from a pair of faces selected (randomly) for morphing, the morphs of all four morphing algorithms used were placed in either the training or the test set. A subset of 80% of all non-morphed subjects (with at least 2 face samples, which was about \(10,000\) subjects) was used for training on the bona fide differential embeddings, and correspondingly 20% for testing. More importantly, testing was also performed on all morphs that were generated based on pre-selection using our proposed architecture, namely using distances between embeddings from different FRSs, such as ArcFace, DeepFace, VGG-Face, & MagFace.
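One way to realize such a pair-consistent 80/20 split, so that all morphs derived from the same face pair fall into the same partition, is a group-wise split. The sketch below uses scikit-learn's `GroupShuffleSplit` and is an illustrative assumption rather than the exact procedure used in the study.

```python
from sklearn.model_selection import GroupShuffleSplit

def split_by_morph_pair(X, y, pair_ids, test_size=0.2, seed=0):
    """pair_ids assigns one group ID per morphed face pair, so that all morphs created
    from the same pair (by any of the four morphing algorithms) end up in either the
    training or the test partition, never in both."""
    gss = GroupShuffleSplit(n_splits=1, test_size=test_size, random_state=seed)
    train_idx, test_idx = next(gss.split(X, y, groups=pair_ids))
    return train_idx, test_idx
```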
#### Testing metrics
To evaluate the MAD algorithms, ISO/IEC 30107-3 [41] proposes to calculate the Attack Presentation Classification Error Rate (APCER) and the Bona fide Presentation Classification Error Rate (BPCER). Similar to the metrics used in the previous analyses, all rates will be reported as decimal fractions in the range \([0;1]\).

APCER serves as a security measure, i.e., the proportion of attack presentations incorrectly classified as bona fide presentations must be small for a secure biometric system. BPCER, on the contrary, serves as a convenience measure, i.e., the number of bona fide presentations falsely classified as attacks should be low in an operational biometric system. Oftentimes, the BPCER10 is also reported [24]. BPCER10 is the BPCER at the threshold of the system at which the APCER is 10%, i.e., 0.1 [24]. BPCER10 thus serves as a convenience metric at a given security level.
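For illustration, APCER, BPCER, and BPCER10 can be computed from classifier scores as follows; the assumption that higher scores indicate a morph is specific to this sketch.

```python
import numpy as np

def apcer_bpcer(attack_scores, bonafide_scores, threshold):
    """Scores are assumed to express 'morph likelihood'; a presentation is classified
    as an attack when its score is greater than or equal to the threshold."""
    apcer = float(np.mean(np.asarray(attack_scores) < threshold))     # attacks missed
    bpcer = float(np.mean(np.asarray(bonafide_scores) >= threshold))  # bona fide rejected
    return apcer, bpcer

def bpcer10(attack_scores, bonafide_scores):
    """BPCER at the operating point where APCER equals 0.1 (decimal fraction)."""
    thr = np.quantile(np.asarray(attack_scores), 0.1)  # ~10% of attack scores fall below thr
    return apcer_bpcer(attack_scores, bonafide_scores, thr)[1]
```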
## Results
### Vulnerability analysis of FRSs
#### Mated morph comparison performance
We studied the vulnerability of different FRSs when attacked by the data set we created. Specifically, we selected two different FRSs, namely ArcFace [25] and MagFace [20], to illustrate their vulnerability to face images generated by our proposed architecture. The corresponding success rates were measured using prodAvgMMPMR and are shown in Fig 5. Moreover, the proposed approach is investigated using four different morphing algorithms.
As shown in Fig 5, image pre-selection increased the attack potential compared to random pairing when the resulting morphs were verified using ArcFace or MagFace. The attack potential was highest when MagFace was used as the verification system, followed by ArcFace (Fig 5). While we also evaluated two further FRSs based on VGG-Face and DeepFace, the attack potential did not increase for these, as the FRSs themselves are relatively low performing (S2 Fig). See below for a detailed analysis of this behavior.
We further note a link between the FRS used for pre-selection and the FRS used for the assessment of vulnerability. If the pre-selection was based on the same FRS which is also used to assess vulnerability, the attack potential of the database was higher. This does not come as a surprise, since the embeddings are treated the same in both cases. However, when COTS FRSs were used, the attack potential was still increased compared to random pairing, although it was not biased by using the same FRSs twice within the analysis pipeline. The vulnerability to the morph attacks for the two COTS FRSs tested is illustrated in S1 Fig. For the COTS FRSs, the prodAvgMMPMR mostly accumulated around 1, indicating an extremely high vulnerability even for morphs based on random pre-selection (S1 Fig). In addition, the morphs created with MIPGAN and verified with COTS FRSs again illustrate the benefit of image pre-selection (S1 Fig).
Importantly, the increased attack potential, regardless of the morphing algorithm used, can be clearly observed throughout the proposed pre-selection approach. However, there was a noticeable difference in the success of Morphing Attacks. NTNU morpher and UBO morpher produced the best Morphing Attacks, followed by Alyssaq morpher and lastly MIPGAN (Fig 5 & S1 Fig).
The MAP has recently been introduced as a general measure of the success of Morphing Attacks across different verifying FRSs. Briefly, the elements of a MAP matrix contain the proportions of successful Morphing Attacks (with both data subjects involved) that fool a given number of FRSs for a given number of attack attempts (Fig 3). The higher the values, and the further the high values spread to the lower right of the matrix, the more effective the attacks were on the tested data set.
Fig 6 shows MAPs for morphs created by the UBO morpher. Again, using pre-selection generally increased the MAPs. All non-random pre-selection methods successfully outwitted at least four (out of six) FRSs in about 70 to 90% of cases with at least one attack attempt. In contrast, the corresponding proportion for random morphs was only about 47%. In about 17% to 47% of cases, all four attack attempts were able to fool four different FRSs when pre-selection was performed. However, only single-digit percentages of morphs were able to fool four FRSs with all four attack attempts.
MAPs were comparable when morphs were created by the NTNU morpher instead of the UBO morpher (S3 Fig). However, MAPs were markedly lower when morphs were created by the Alyssaq morpher (S4 Fig), and even lower for morphs created by MIPGAN (S5 Fig). A definite distinction between morphs pre-selected using the embeddings of different FRSs is, however, less clear-cut.
#### Relative mated morph comparison performance
To further examine the performance of the data sets created with the different pre-selection methods, as well as the behavior of the verification algorithms, the distributions of the raw distance scores of mated comparisons, non-mated comparisons, and mated morph comparisons were visualized as Empirical Cumulative Distribution Functions (ECDFs) in Fig 7, using morphs created with the UBO morpher as an example. Across all four open-source verification FRSs, the mated morph comparison scores were distributed between the mated scores and the non-mated scores. However, they were more closely aligned with the mated scores than with the non-mated scores, even for morph pairs without pre-selection (i.e., random assignment). Importantly, morphs pre-selected with our proposed architecture performed better than morphs from random pre-selection. As before, each verification FRS was biased toward morphs that had been pre-selected using its own embeddings prior to morphing.
The comparison decisions varied considerably between the verification FRSs: DeepFace incorrectly accepted only a very small number of morphs, followed by VGG-Face and ArcFace, whereas MagFace incorrectly accepted nearly all morphs as mated comparisons. On the other hand, at the calibrated threshold of \(FMR=0.1\%\), DeepFace and, to a lesser extent, VGG-Face both revealed high FNMRs (Table 2), therefore incorrectly rejecting a large proportion of mated verification attempts. In contrast, ArcFace and, more importantly, MagFace had very low FNMRs at the given FMR (Table 2). This has led to a higher susceptibility of _better_ FRSs - in the sense of a low FNMR at a given FMR - to Morphing Attacks.
We call this phenomenon _morphing attack paradox_. The better the FRS and therefore the lower the FNMR of the FRS on a preset threshold, the more tolerant the FRS is to mated presentations. The more tolerance the FRS shows to mated
Fig 5: **Mated morphs comparison success rates for different image pre-selection embeddings.** prodAvgMMPMRs (y-axes) are plotted for different pre-selection methods (x-axis & color-coded). Density is plotted in horizontal direction. Median values are illustrated by horizontal black bars. The same pairs were morphed by different morphing methods (rows). Random assignments of the morphing pairs are displayed in the left-most column. All morphs were verified using ArcFace and MagFace (columns). See S2 Fig for verifications using DeepFace and VGG-Face. Note that prodAvgMMPMR was calculated as a decimal fraction within the range \([0;1]\).
presentations, the more susceptible it is to Morphing Attacks. As a result, more accurate FRSs are more susceptible to Morphing Attacks.
S6 Fig, S7 Fig, & S8 Fig illustrate the respective distributions of distance values for morphs created with the other morphing algorithms. The general patterns were the same as in Fig 7. However, while the distance distributions of mated morph comparisons with NTNU morphs closely resembled those of morphs created with the UBO morpher,
Fig 6: **Morphing Attack Potential (MAP) of morphs generated by the UBO morpher.** Different FRSs were used for image pre-selection, i.e. ArcFace, DeepFace, VGG-Face, or MagFace (different heatmaps). Alternatively, pairs were randomly assigned (bottom heatmap). For each FRS used for pre-selection, the resulting morphs were verified against four bona fide images of each subject. The ratio of successful attempts for both subjects is shown on each y-axis of each plot. In addition, different FRSs were used to verify the paired morphs, four open-source FRSs and two COTS FRSs. The percentage of successful attacks across multiple FRSs is plotted on each x-axis. The MAP is shown and color-coded in each cell and describes the proportion of successful verifications for a given number of attempts (y-axes) and FRSs (x-axes). Note that the MAP was calculated as a decimal fraction in the range \([0;1]\).
Fig 7: **ECDFs for distance scores of the open-source FRSs.** Mated, non-mated, and mated morph comparisons were performed. Morphs were created using the UBO morpher. The distance values for the comparisons are shown on the x-axis. The (cumulative) proportion of positive verifications at a certain distance score is plotted on the y-axes. Different FRSs were used for verification (rows). The different types of comparisons are color-coded, i.e., mated, non-mated, or mated morph comparisons, including morphs pre-selected with the help of face embeddings of the different FRSs. The dotted vertical lines indicate the 0.1% FMR threshold for each FRS used for verification.
morphs created with Alyssaq and MIPGAN showed higher relative distances, resulting in a higher number of rejections of mated morphs at the given decision thresholds.
Since mated morph distances of more accurate FRSs - such as ArcFace and MagFace - were distributed between mated distances and non-mated distances, Fig 7 indicates that there is a chance of separating mated morph comparisons from mated comparisons by adjusting the decision threshold of the FRS. Such an adjustment could dramatically reduce the vulnerability for MagFace, for which the distance distributions of mated comparisons and mated morph comparisons showed only a slight overlap. Using ArcFace for verification, the overlap between distributions was already stronger. Therefore, threshold adjustment for verification would lead to significantly higher FNMRs in ArcFace. Contrarily, the distributions of mated morph distances of less accurate FRSs such as DeepFace and VGG-Face closely aligned to the distribution of the mated distances (Fig 7). In the case of DeepFace, especially when both image pre-selection and verification were performed with the same FRS, the mated morph distances were even smaller than the mated distances.
Fig 8 further illustrates the ECDFs of the similarity scores using COTS FRSs and the UBO morpher (see S9 Fig, S10 Fig, & S11 Fig for the morphs created by the other morphers). Since the COTS FRSs were _not_ used for pre-selection, the results are less biased with respect to the pre-selection algorithm. First, even with random pre-selection, all types of morphs were likely to be successfully verified by the COTS FRSs. However, similar to the open-source FRSs, the distributions of the mated morph comparisons shifted toward the distributions of the mated comparisons, when
Fig 8: **ECDFs for similarity scores of the COTS FRSs.** Mated, non-mated, and mated morph comparisons were performed. Morphs were generated using the UBO morpher. The different similarity scores for the comparisons are displayed on the x-axis. The (cumulative) proportion of successful verifications at a particular similarity score is plotted at the y-axes. Note that because similarities instead of distances were used, the interpretation of the x-axes must be flipped compared to Fig 7. Different COTS FRSs were used for verification (rows). The different types of comparisons are color-coded, i.e., mated, non-mated, or mated morph comparisons, with morphs pre-selected with the help of face embeddings of certain FRSs. The dotted vertical lines indicate the 0.1% FMR threshold for each FRS used for verification.
pre-selection according to our architecture was applied. A hierarchy between the different pre-selection methods can be seen. Morphs derived from a pre-selection approach using MagFace embeddings produced the highest similarity scores, followed by ArcFace, VGG-Face, and finally DeepFace.
To further account for the performance of the individual FRSs, the RMMR was calculated using the open-source FRSs for verification. The RMMR corrects the MMPMR for the FNMR (Eq 4). Thus, the distortion of the mated morph comparison rates reported in the previous section by the recognition performance of the respective FRS can be corrected, especially for FRSs with high FNMRs. Table 3 shows the RMMR values for differently pre-selected, morphed, and verified images. A similar pattern as before can be seen. When the same FRS is used for pre-selection and verification, the RMMR is highest in most cases. However, the second highest RMMR is often obtained from a pre-selection with MagFace, followed by ArcFace and VGG-Face. Higher RMMRs can also be observed for morphs created by the UBO morpher and the NTNU morpher compared to the other morphers.
Table 3 can be summarized in the following fashion. To assess how well the individual pre-selection FRSs performed across morphing algorithms and open-source verification FRSs, using the RMMR as a metric, each row of Table 3 was converted to ranks (1 to 5). A rank of 5 indicated the pre-selection FRS (columns) with the highest RMMR value in that row, and a rank of 1 the FRS with the lowest RMMR value. If equal values coincided in a row, fractional ranks were used. The ranks were then averaged across rows, i.e., across morphing algorithms and attacked FRSs. Table 4 illustrates the average ranks for the different pre-selection methods. Pairs based on MagFace embeddings generated the highest RMMR values, followed by ArcFace, VGG-Face, and finally DeepFace. Randomly pre-selected pairs performed the worst across different morphing
\begin{table}
\begin{tabular}{l|c|c c c c c} \hline verification & morpher & \multicolumn{5}{c}{pre-selection} \\ & random & ArcFace & DeepFace & VGG-Face & MagFace \\ \hline ArcFace & Alyssaq & 0.24 & 0.64 & 0.39 & 0.55 & **0.63** \\ DeepFace & & 0.78 & 0.78 & 0.78 & 0.78 & 0.78 \\ VGG-Face & & 0.35 & 0.44 & 0.37 & 0.60 & **0.45** \\ MagFace & & 0.44 & 0.64 & 0.52 & **0.65** & 0.76 \\ \hline ArcFace & UBO & 0.32 & 0.79 & 0.51 & 0.65 & **0.72** \\ DeepFace & & 0.79 & **0.81** & 0.84 & 0.80 & **0.81** \\ VGG-Face & & 0.42 & 0.57 & 0.45 & 0.71 & **0.58** \\ MagFace & & 0.72 & **0.95** & 0.87 & **0.95** & 0.97 \\ \hline ArcFace & NTNU & 0.31 & 0.78 & 0.49 & 0.64 & **0.72** \\ DeepFace & & 0.79 & 0.80 & 0.84 & **0.81** & **0.81** \\ VGG-Face & & 0.40 & 0.53 & 0.45 & 0.71 & **0.55** \\ MagFace & & 0.65 & **0.91** & 0.83 & **0.91** & 0.97 \\ \hline ArcFace & MIPGAN & 0.13 & 0.44 & 0.22 & 0.32 & **0.37** \\ DeepFace & & 0.79 & 0.79 & 0.80 & 0.79 & **0.80** \\ VGG-Face & & 0.33 & **0.37** & 0.35 & 0.44 & **0.37** \\ MagFace & & 0.28 & **0.60** & 0.45 & 0.54 & 0.68 \\ \hline \end{tabular}
\end{table}
Table 3: **Relative Morph Match Rates (RMMRs).** Images were morphed using different morphing algorithms, pre-selected using embeddings of different FRSs or alternatively, randomly pre-selected, and verified using different FRSs. The RMMR corrects the MMPMR by the FNMR of the verification FRS (see Eq 4). The highest values row-wise are highlighted in bold, leaving out the quasi-diagonal elements, i.e., if pre-selection and verification FRSs coincided. Note that RMMR was calculated as a decimal fraction within the range \([0;1]\).
algorithms and verification systems.
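The rank-averaging procedure used to condense Table 3 into Table 4 can be illustrated as follows; the input layout is a hypothetical two-dimensional array with one row per morpher and verification FRS combination and one column per pre-selection method.

```python
import numpy as np
from scipy.stats import rankdata

def average_preselection_ranks(rmmr_table):
    """Each row is converted to ranks (1 = lowest RMMR, 5 = highest; ties receive
    fractional ranks) and the ranks are then averaged column-wise, i.e., across
    morphing algorithms and attacked FRSs."""
    rows = np.asarray(rmmr_table, dtype=float)
    ranks = np.vstack([rankdata(row) for row in rows])
    return ranks.mean(axis=0)
```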
### Morphing Attack Detection performance
Fig 9 shows the corresponding BPCER10 values of the MAD classifiers, tested on morphs with different pre-selection applied, and corresponding bona fide images. The operational point values BPCER10 were lower for the D-MAD classifier trained with MagFace differential embeddings than for the one trained with ArcFace differential embeddings. Furthermore, the BPCER10 values on the test data sets were the lowest for randomly pre-selected pairs for morphing. The BPCER10 values were higher when the test set contained morphs of pairs that were pre-selected according to our proposed architecture. This trend was more pronounced for morphs generated by the NTNU morpher and even more in morphs generated by the UBO morpher. On the other hand, morphs generated by the Alyssaq morpher or MIPGAN did not lead to a pronounced
\begin{table}
\begin{tabular}{l c} \hline pre-selection & average rank \\ \hline random & 1.13 \\ ArcFace & 3.63 \\ DeepFace & 2.63 \\ VGG-Face & 3.56 \\ MagFace & 4.06 \\ \hline \end{tabular}
\end{table}
Table 4: **Average ranks for RMMR values for the different pre-selection methods.** Pre-selection was either performed using random assignment of pairs or different FRSs.
Fig 9: **D-MAD algorithm performances.** BPCER10 values of the classifiers tested on differently morphed and differently pre-selected data sets are shown. Left: metrics from a D-MAD algorithm trained on ArcFace embeddings. Right: Metrics from a D-MAD algorithm trained on MagFace embeddings. The images morphed by different morphing algorithms are shown in different colors. The pre-selection methods to generate the pairs for morphing are distributed along the x-axes. Note that BPCER10 was calculated as a decimal fraction within the range \([0;1]\).
increase in BPCER10 values.
A high value of BPCER10 renders the MAD system inconvenient for practical purposes. The BPCER10 was increased by the pre-selection method (especially with MagFace and ArcFace embeddings) and by the morphing algorithm used (especially the UBO morpher and the NTNU morpher). The trend is illustrated in more detail in Fig 10. Higher BPCER and APCER values were obtained if pre-selection was performed according to our proposed architecture, and especially if it was performed using ArcFace or MagFace embeddings. This was consistent across different morphing algorithms.
While we have already shown the superiority of pre-selection over random pairing, we also observe large differences in MAD performance depending on which FRS is used to extract the embeddings for training and testing the D-MAD classifiers. BPCER10 values were about half in magnitude when MagFace was used for D-MAD, regardless of which FRS was used to extract the embeddings for image pre-selection (Fig 9). On the other hand, the advantage of attacks created by the UBO morpher over those created by the NTNU morpher disappeared when MagFace was used for D-MAD instead of ArcFace (Fig 9). The same can be seen in more detail in the DET curves (Fig 10). The APCER and BPCER values were generally smaller, indicating a better performance of the D-MAD algorithm.
Interestingly, in some cases in Fig 10, it can be observed that there was not a consistent bias of the D-MAD algorithms towards being fooled by morphs pre-selected by embeddings of the same FRS as the one used for D-MAD. MagFace embeddings for pre-selection performed best in fooling the D-MAD classifier in most cases, even if D-MAD was trained with ArcFace embeddings.
Fig 10: **DET curves of the D-MAD approaches.** Left column: D-MAD approach using ArcFace embeddings (original version). Right column: D-MAD approach using MagFace embeddings. Morphs of the different morphing algorithms are separated by rows. Data subsets of differently pre-selected morph pairs are color-coded. The BPCER is plotted against the APCER. Dotted lines indicate the positions where BPCER or APCER are 0.1 (i.e., 10%) and 0.05 (i.e., 5%). Note that both rates were calculated as decimal fractions within the range \([0;1]\).
## Discussion
### Comparison of face recognition models for pre-selection
Regarding the FRSs for extracting embeddings for image pre-selection, several models were evaluated. The results showed that the recently published MagFace algorithm performed best, closely followed by ArcFace. VGG-Face and in particular DeepFace showed relatively weak performance for morphing pre-selection. However, all pre-selection methods improved the success of the Morphing Attacks compared to random pairing (Figs 5, 6, 7 & 8, S1 Fig, Tables 3 & 4). In addition, a bias was observed such that if the same FRS was used for pre-selection and verification, the FRS was more susceptible to the resulting morphs (Figs 5 & 7). However, when two COTS FRSs were used, this bias was mitigated and the hierarchical ranking between the pre-selection methods remained the same (Fig 8 & S1 Fig).
An observation that is counterintuitive at first glance can be made when comparing Fig 5 and S2 Fig: while more accurate FRSs such as MagFace and ArcFace were quite vulnerable to Morphing Attacks, less accurate FRSs such as VGG-Face or DeepFace showed little vulnerability, since the prodAvgMMPMRs obtained when verifying with these FRSs mostly accumulated around 0. This trend suggests that as FRSs generally improve, such that after calibration to an FMR of, say, 0.1% the FNMR becomes lower, these more accurate (in terms of recognition) FRSs become more vulnerable to Morphing Attacks. Earlier we called this phenomenon the _morphing attack paradox_, and the effect is also nicely illustrated in [42].
The key element is the decision threshold, located somewhere between the distributions of the mated comparison distances and the non-mated comparison distances (Fig 7). As long as a considerable proportion of the mated-morph comparison distances lies below the threshold, towards the mated comparison distances, the FRS will be quite vulnerable. Adjusting the decision threshold toward the distribution of the mated comparison distances would reduce this vulnerability. Such an adjustment would be most feasible for MagFace as a verification model, as the mated and morphed distributions show only a small overlap (Fig 7). However, for a model as good as ArcFace, as well as for the two COTS FRSs, the distributions had significant overlap, impeding a simple solution via adjustment. Furthermore, by adjusting the decision threshold in the direction of the mated comparison distances, the FMR would decrease, which generally makes the system more secure - even against zero-effort impostor attacks. This in turn would inevitably increase the FNMR of the system, making it less convenient for particular practical purposes. At this point, it should be recalled that the morphs used in this study were generated in an automated fashion. A real-world attacker would be able to invest time and resources into creating one single high-quality morph through manual intervention and various image post-processing steps. Comparison scores of such manually created morphs would be even more challenging to distinguish from mated comparisons, even when using MagFace for verification.
Furthermore, from the distribution of the prodAvgMMPMRs in Fig 5 and S2 Fig - the high vulnerability of MagFace and ArcFace and the low vulnerability of VGG-Face and DeepFace - some conclusions can be drawn on the results of the MAPs (Fig 6). In particular, the high values in the four leftmost columns of each MAP matrix are likely to derive from the more vulnerable MagFace and ArcFace FRSs, and the two highly vulnerable COTS FRSs. Analogously, the rather low values in the two rightmost columns are likely to be driven by the less vulnerable DeepFace and VGG-Face FRSs.
When correcting the mated morph rates for the FNMR of a verification FRS, as was done using the RMMR metric (Eq 4, Table 3), the general pattern persisted that a verification FRS was most vulnerable to morphs from image pairs pre-selected with the
embeddings of the identical FRS. However, by ranking the RMMR values row-wise and averaging across verification FRSs and morphing algorithms (Table 4), the pattern emerged that MagFace was best suited for pre-selection among the FRSs tested. ArcFace followed MagFace, then VGG-Face, and lastly DeepFace. The poorest performance was consistently seen with randomly pre-selected morphs.
Further, we want to emphasize that we considered all subjects in the database and morphed all pairs found by the pre-selection procedure, rather than only the most similar ones. By selecting only a fraction of pairs (e.g., the 20% with the smallest distances) for morphing, the vulnerability rates would be even higher. Therefore, the reported vulnerabilities are expected to represent a lower bound.
In a preliminary analysis on a different face data set, we also investigated the potential of different distance (or similarity) metrics to be applied in the proposed pre-selection architecture. However, morphed face images pre-selected based on the Cosine distance yielded superior results (S14 Fig). This is not surprising, as the algorithms are typically designed with this metric in mind [20, 25]. Therefore, we used the Cosine distance for pre-selection in the present study.
### Evaluation of morphing algorithms for pre-selection
A clear performance gap between the morphing algorithms is a common thread that runs through all of the analyses. Morphed images created by the UBO morpher, closely followed by those created with the NTNU morpher, performed best in fooling both FRSs (Fig 5 and Table 3) and D-MAD classifiers (Figs 9 & 10). Morphs created by the Alyssaq morpher and by MIPGAN, however, performed worse in the current analyses.
What can be seen from Figs 9 & 10 is that the morphing algorithm deployed had a higher impact on the success of the D-MAD algorithm than the pre-selection. Similarly, the success in terms of fooling the verification FRSs can be seen in Fig 5, and by comparing the MAPs between morphers (Fig 6, S3 Fig, S4 Fig, & S5 Fig). Alyssaq and MIPGAN morphers performed rather poorly at fooling the D-MAD algorithm, even with pre-selection applied. The reason for the Alyssaq morpher might for instance be the shape of the resulting morph (Fig 1). The Alyssaq morpher returned morphs that were cropped at the face edges in a non-rectangular fashion (Fig 1), and not projected back onto one of the original images' backgrounds. This has probably helped the D-MAD algorithm in its decision during both training and testing, although the classification was not performed on the raw images, but on the extracted face embeddings. Real-world attackers would not use such a morph, e.g., in a passport fraud scenario. Furthermore, MIPGAN produced rather blurry images (Fig 1). In the original implementation of MIPGAN [36, 37], the morphs were of higher quality, but also the original images used for morphing were of higher image quality than those of the database used in the study at hand. Thus, the morphing in latent space in the present case may have dropped many facial characteristics that could have been helpful in facilitating a Morphing Attack.
### MagFace improved Morphing Attack Detection
Instead of adjusting decision thresholds to counter Morphing Attacks, MAD algorithms could be inserted into a face verification process. The concept of the D-MAD algorithm used in the study at hand was introduced by [24] and learned to distinguish between the distribution of the differences between two bona fide images and the distribution of differences between morphs and bona fide images (Fig 4), all in the embedding space.
Testing on morph images derived from random pairing produced the lowest BPCER10 values, indicating the highest accuracy and therefore lowest vulnerability of the D-MAD algorithm towards these morphs (Fig 9). Testing on the other morphs - with pre-selection applied according to our proposed architecture - increased the
BPCER10 values. Thus, the greatest vulnerability of the D-MAD classifier was seen for morphs pre-selected by MagFace, then ArcFace, VGG-Face, and finally DeepFace. This was true regardless of whether the D-MAD classifier was trained and tested with ArcFace embeddings or with MagFace embeddings.
In fact, the D-MAD algorithm trained with MagFace embeddings showed considerably lower BPCER10 values, regardless of the type of pre-selection. Therefore, using MagFace instead of ArcFace could be a significant improvement to the D-MAD classifier proposed by Scherhag et al. [24]. Note that only the embeddings of the MagFace algorithm were used, not any additional quality metrics returned by the MagFace model. However, the quality of an image was still incorporated into the embeddings by the way the loss function was constructed. In MagFace's loss function, high-quality samples of an individual are pulled toward the center of the multidimensional distribution, while low-quality samples are pushed toward its boundaries [20]. In other words, during the training of MagFace, the magnitude of the face embeddings was made proportional to the Cosine distance to the respective class (i.e., individuals) centers [43]. Therefore, having different image qualities for the bona fide images and the morphed images results in an easier separation of the two groups by the classifier since their positions in the 512-dimensional embedding space are farther apart than the positions of two high-quality bona fide images.
The proposed D-MAD classifier based on MagFace embeddings was also submitted to the Face Recognition Vendor Test (FRVT) [44] and achieved good results in detecting high-quality morphed images (S12 Fig). The FRVT MORPH report was created shortly before the initial submission of our manuscript. The D-MAD algorithms in S12 Fig were evaluated on high-quality morphed images created with commercial tools. The illustrated DET curve shows particularly low BPCER values of the MagFace D-MAD algorithm (_hdamag_) in the area of low APCER values. Moreover, it outperformed the similar algorithm based on ArcFace instead of MagFace embeddings (_hdaarface_) for APCER values from 0 to 0.1 (decimal fraction), an area of relevant security settings regarding Morphing Attacks. However, on low-quality images (i.e., Fig 4 in [44]), our classifier only outperformed the ArcFace classifier below an APCER of 0.02 (decimal fraction). Whether our classifier outperformed or underperformed the ArcFace classifier depended on the face data set used for evaluation. In general, the ArcFace classifier performed better above APCER values of 0.1 (decimal fraction). However, at lower APCER values, the MagFace classifier achieved lower BPCER values on several face data sets throughout the different processing tiers, i.e., morphed face image data sets of different quality, such as the Visa Border or TWENTE data sets [44]. A detailed analysis of why the classifier performed better on some data sets and at specific APCER/BPCER operating points is beyond the scope of the current study.
We further submitted the MagFace D-MAD algorithm to FVC-onGoing: on-line evaluation of fingerprint recognition algorithms [45], in the section for Differential Morph Attack Detection [14]. Among all algorithms tested on the
DMAD-SOTAMD_P&S-1.0 benchmark, the D-MAD algorithm based on MagFace achieved the lowest BPCER10 values (0.84%), and the second lowest BPCER20 values (4.39%, S13 Fig). However, it achieved high BPCER100 values (i.e., 100%). The benchmark contained high-quality images of faces that were printed and scanned and captured with a frontal pose, natural expressions, and good lighting. See [46] for more details on how the algorithm performed on morphs from data subjects of different age groups or ethnicities and on morphs produced by different morphing algorithms, post-processing pipelines, and so forth.
One aim of large-scale image pre-selection based on embeddings was to evaluate a method for providing sufficiently large data sets of morphed face images for training
MAD algorithms. Interestingly, a recent study showed that image pre-selection for training MAD algorithms could be done in the opposite way to the present study [47]. It was shown that training on morphing pairs with low similarity can improve the performance of the MAD algorithm [47].
In the present study, separate D-MAD algorithms were trained on either ArcFace or MagFace embeddings. However, a fusion of the two may have constructive effects. In particular, the combination of both D-MAD algorithms might perform better than the D-MAD algorithm based on MagFace embeddings alone.
## Conclusion
This study analyzed the use of face embeddings in image pre-selection and Morphing Attack Detection. MagFace and ArcFace embeddings were found to be effective for image pre-selection, as the resulting attacks posed a significant threat to modern FRSs, especially COTS systems. Therefore, face embeddings of these models are suitable for automatically generating large databases of morphed faces. Furthermore, MagFace embeddings were found to be particularly useful for MAD, as they can improve the performance of a D-MAD algorithm. Taken together, the results reinforce the dual benefit of embeddings for both pre-selection and MAD.
## Acknowledgments
This work is supported by the European Union's Horizon 2020 research and innovation program under grant agreement No 883356 (iMARS). Roman Kessler has received a scholarship by the National Research Center for Applied Cybersecurity (ATHENE). The authors would like to thank Haoyu Zhang for generating morphed images using the MIPGAN approach. They would also like to thank Daniel Fischer for converting the _hdamag_ prototype to the NIST FRVT MORPH api.
|
2305.08005 | Beyond the Safeguards: Exploring the Security Risks of ChatGPT | The increasing popularity of large language models (LLMs) such as ChatGPT has
led to growing concerns about their safety, security risks, and ethical
implications. This paper aims to provide an overview of the different types of
security risks associated with ChatGPT, including malicious text and code
generation, private data disclosure, fraudulent services, information
gathering, and producing unethical content. We present an empirical study
examining the effectiveness of ChatGPT's content filters and explore potential
ways to bypass these safeguards, demonstrating the ethical implications and
security risks that persist in LLMs even when protections are in place. Based
on a qualitative analysis of the security implications, we discuss potential
strategies to mitigate these risks and inform researchers, policymakers, and
industry professionals about the complex security challenges posed by LLMs like
ChatGPT. This study contributes to the ongoing discussion on the ethical and
security implications of LLMs, underscoring the need for continued research in
this area. | Erik Derner, Kristina Batistič | 2023-05-13T21:01:14Z | http://arxiv.org/abs/2305.08005v1 | # Beyond the Safeguards: Exploring the Security Risks of ChatGPT
###### Abstract
The increasing popularity of large language models (LLMs) such as ChatGPT has led to growing concerns about their safety, security risks, and ethical implications. This paper aims to provide an overview of the different types of security risks associated with ChatGPT, including malicious text and code generation, private data disclosure, fraudulent services, information gathering, and producing unethical content. We present an empirical study examining the effectiveness of ChatGPT's content filters and explore potential ways to bypass these safeguards, demonstrating the ethical implications and security risks that persist in LLMs even when protections are in place. Based on a qualitative analysis of the security implications, we discuss potential strategies to mitigate these risks and inform researchers, policymakers, and industry professionals about the complex security challenges posed by LLMs like ChatGPT. This study contributes to the ongoing discussion on the ethical and security implications of LLMs, underscoring the need for continued research in this area.
Large language models, security, ethics, natural language processing.
## I Introduction
The development of artificial intelligence (AI) has led to many breakthroughs in natural language processing (NLP). In particular, the development of sophisticated conversational AI, such as ChatGPT [1], recently significantly increased the popularity of the entire field. However, with the rise of these technologies, there is a growing concern about the safety and security risks and ethical implications associated with their use. While a constantly growing body of literature becomes available in this emerging field, it merely focuses on specific subsets of societal and ethical implications of using these systems, such as biases and discrimination [2, 3, 4], societal and economic harm [5], or the impact on academia [6]. However, there remains a gap in research that would address specific security risks associated with large language models.
Large language models (LLMs) are AI models trained on vast amounts of text data, capable of generating coherent and meaningful textual outputs. LLMs are typically based on deep learning techniques, such as the transformer architecture [7], which has proven highly effective for NLP tasks. One of the most well-known examples of an LLM is the Generative Pre-trained Transformer (GPT) series, developed by OpenAI1. The models are pre-trained on massive amounts of text data using unsupervised learning and they learn to identify relationships and patterns in the language data. Once pre-trained, the models can be fine-tuned for tasks such as question answering, sentiment analysis, or machine translation.
Footnote 1: [https://openai.com/](https://openai.com/)
ChatGPT is a state-of-the-art language model based on the GPT-3.5 series2 that utilizes deep learning to generate human-like responses to natural language queries [1]. It is a powerful, versatile tool for various applications, including content creation, text summarization, and software code generation. However, ChatGPT also poses various types of risks and implications, as illustrated in Figure 1. ChatGPT's ability to generate convincing responses can be exploited by malicious actors to spread disinformation, launch phishing attacks, or even impersonate individuals [5]. Therefore, it is crucial to continuously monitor and assess ChatGPT's security vulnerabilities and develop appropriate measures to mitigate them.
Footnote 2: as of November 2021
The potential consequences of these risks are far-reaching. They include financial losses, data breaches, privacy violations, damaged social connections, emotional harm, and reputational damage to individuals and organizations. The ability of ChatGPT to quickly and cost-effectively generate highly convincing phishing attacks, as well as the potential for attackers to manipulate conversations to their advantage, makes it a significant security risk that needs to be addressed. This paper aims to provide an overview of the different types of security risks associated with ChatGPT and discuss the possible consequences of these risks.

Fig. 1: Illustrative overview of ChatGPT's security risks.
One of the main concerns related to ChatGPT is the ethical dimension of generating malicious, offensive, and generally biased output. In this paper, we show that despite the continuous efforts to build a conversational AI system that is ethical and safe to use, there remain ways to make ChatGPT generate inappropriate content. We contribute to the ongoing discussion about the ethical and security implications of LLMs and underscore the need for continued research in this area. Specifically, the paper makes the following contributions:
* We provide a summary of the security risks associated with ChatGPT as reported in the literature.
* An empirical study that examines the effectiveness of ChatGPT's content filters and possible ways to bypass them is presented. It demonstrates the ethical implications and security risks that still exist in LLMs, even when safeguards are in place.
* Our paper provides a qualitative analysis of the security implications and discusses possible strategies to mitigate them. This analysis highlights the potential consequences of these risks and aims to inform policymakers, industry professionals, and researchers about the complex security challenges posed by LLMs like ChatGPT.
The rest of the paper is organized as follows. Section II reviews the existing literature on the ethical implications and particularly on the security risks of LLMs. Section III explores the security risks present in ChatGPT and demonstrates the danger of malicious use by circumventing its safeguards. In Section IV, we discuss the ethical implications and potential consequences of the identified security risks and suggest possible ways to mitigate them. Section V concludes the paper and outlines possible future research directions.
## II Related Work
Multiple surveys and analyses discuss the challenges and risks associated with LLMs [5, 8, 9]. These risks include discrimination, misinformation, malicious use, user interaction-based harm, and broader societal impact. There is a growing concern for developing safe and responsible dialogue systems that address abusive and toxic content, unfairness, ethics, and privacy issues [10, 11]. Many studies address biases, stereotypes, discrimination, and exclusion in LLMs [3, 4, 12, 13, 14], and new benchmarks and metrics are proposed to mitigate these issues [2, 15]. LLMs also have the potential to generate false outputs, which may be harmful especially in sensitive domains such as health and law [5, 16]. Several approaches have been suggested to address various drawbacks associated with LLMs, such as statistical frameworks for creating equitable training datasets [17] and conditional-likelihood filtration to mitigate biases and harmful views in LLM training data [18]. Regulation of large generative models is also proposed to ensure transparency, risk management, non-discrimination, and content moderation obligations [19].
Focusing specifically on ChatGPT, [20] outlines five priorities for its role in research: focusing on human verification, developing rules for accountability, investing in truly open LLMs, embracing the benefits of AI, and widening the debate on LLMs. The authors list open questions for debate, including the role of ChatGPT in writing scientific publications, independent open-source LLMs development, and setting quality standards. The ethical concerns related to the use of ChatGPT are addressed in [21]. The paper highlights the need for accountable LLMs due to the potential social prejudice and toxicity exhibited by these models. The specific impact of ChatGPT on academia and libraries is discussed in [6], and the implications on education are explored in [22].
While there is a relatively large body of literature on the risks and drawbacks of large language models, there are fewer resources on LLM security. The following paragraphs explore risks related to LLM security reported in the literature, including sensitive data leakage, malicious code generation, and aiding phishing attacks.
One security issue is the potential exposure of private and sensitive data through membership inference attacks, where an adversary can extract the training data [23, 24]. One of the most prominent examples of extracting the training data from LLMs is the work [25], which demonstrates that memorized content, including personal information, could be extracted from GPT-2. The paper concludes that larger models are more vulnerable to such attacks than smaller ones and outlines possible ways to mitigate the risk. The systematic study [26] demonstrates practical threats to sensitive data and proposes four different defenses to mitigate the risks. The paper [27] discusses privacy concerns with LLMs' tendency to memorize phrases. The authors conclude that existing protection methods cannot ensure privacy and suggest addressing the risk by using exclusively public text data to train language models.
Code generation models such as GitHub Copilot are widely used in programming, but their unsanitized training data can lead to security vulnerabilities in generated code [28, 29]. A novel approach to finding vulnerabilities in black-box code generation models [29] shows its effectiveness in finding thousands of vulnerabilities in various models, including GitHub Copilot, based on the GPT model series. In addition, LLMs can be used to generate disinformation for malicious purposes [5], such as in phishing [30].
In [31], the authors investigate the use of universal adversarial triggers to affect the topic and stance of natural language generation models, in particular GPT-2. They successfully identify triggers for controversial topics and raise awareness of the potential harm of such attacks. The article [32] proposes using 'red teaming' to automatically generate test cases to identify harmful, undesirable behaviors in language models before deployment. This approach avoids the expense of human annotation, and the authors evaluate
the effectiveness of this technique in uncovering offensive content and other harms in a chatbot. The authors of [33] discuss the risk of fake Cyber Threat Intelligence (CTI) being generated and spread to subvert cyber-defense systems. They demonstrate how transformer-based LLMs can generate fake CTI text, misleading cyber-defense systems and performing a data poisoning attack. The authors claim that the attack can corrupt dependent AI-based cyber defense systems and mislead even professional threat hunters.
While the aforementioned articles address the security risks of LLMs in general, the resources on ChatGPT's security are limited. To the best of our knowledge, there is no publication available focused specifically on ChatGPT's security. Among the works with a focus similar to our research [34, 35, 36], the closest one is [34]. It addresses bypassing ChatGPT's defense mechanisms against malicious use. The authors exploit the instruction-following nature of ChatGPT to 'manipulate' it to produce potentially harmful content through mechanisms such as prompt obfuscation, code injection, and payload splitting. Experimental results show that these cybersecurity-inspired attacks bypass state-of-the-art content filtering, highlighting the simplicity and cost-efficiency of the approach. However, the paper does not address some types of security risks, such as the implications of misusing ChatGPT's code-writing abilities. In [35], the author presents the results of an online survey with ten questions, asking 259 respondents about their views on ChatGPT's security. However, the empirical evaluation shows only a single prompt, demonstrating that ChatGPT declines to generate code for password cracking based on its built-in safeguards. The work [36] proposes BadGPT, which is claimed to be the first backdoor attack against the reinforcement learning from human feedback (RLHF) fine-tuning used in LLMs. However, the experimental evaluation is performed with GPT-2.
We believe that the safety and security implications of conversational models such as ChatGPT are specific due to the instruction-based interaction [37], and therefore not fully covered by the existing literature on LLM security. To that end, we present an overview of ChatGPT's security risks, accompanied by examples from our experimental evaluation.
## III Exploring ChatGPT's Security
This section delves into the security risks and challenges of ChatGPT, including the potential for generating malicious content and the leakage of private data.
ChatGPT employs a multi-faceted approach to address the challenges associated with adversarial behavior [1, 11]. The model undergoes a rigorous fine-tuning process on a curated dataset, which helps restrict its outputs to safe and relevant content. Additionally, the use of RLHF [38] allows for continuous improvement of the model, ensuring that it becomes increasingly robust and secure over time.
Despite these measures, ChatGPT's filters are not fool-proof and can be bypassed by means of creative instruction following and role-playing. These filters are designed to prevent the model from generating harmful or inappropriate content, but determined users may still find ways to exploit the system. By carefully crafting prompts or engaging in conversational role-playing scenarios, users can effectively guide the model into producing undesirable outputs. For instance, they might frame a malicious request as a hypothetical question or disguise it within the context of a fictional narrative. In the following text, we will look into these topics.
This section comprises six subsections, each focusing on a specific aspect of ChatGPT's security: information gathering, malicious text writing, malicious code generation, disclosing personal information, fraudulent services, and producing unethical content. We accompany selected cases with examples of real interactions with ChatGPT3 to demonstrate these security issues in practice.
Footnote 3: The interactions reported in this paper were obtained using the ChatGPT version from February 13, 2023, with GPT-3.5, accessed through the web interface on [https://chat.openai.com/](https://chat.openai.com/).
### _Information Gathering_
ChatGPT's advanced language generation capabilities can be exploited by malicious actors to gather information on targets. This could be used to aid in the first step of a cyberattack when the attacker is gathering information about the target to find where and how to attack the most effectively. The information collected can be used to craft targeted phishing, for social engineering, or to exploit known vulnerabilities. Information can be about the target company,
technologies and systems they use, their structure, the people who work there, the issues they have, etc. It can be focused on building a profile of a specific employee of interest, their professional and personal life, social media, hobbies, family, and connections. This information is usually gathered by searching the internet, but ChatGPT can speed up the process, provide suggestions, useful statistics, and process the gathered data. Information collected can also be used for other malicious purposes, such as extortion, harassment, or identity theft.
As discussed on Reddit4, you can instruct ChatGPT to gather intelligence on the selected target. While it seems to work better on bigger international companies, output still needs to be fact-checked. Yet, it can provide useful aid in finding specific data about the target. An example in Table I shows that ChatGPT lists the information on the IT systems a given bank uses.
Footnote 4: [https://www.reddit.com/r/OSINT/comments/10to6iz/how_to_use_chatg_pt_for_osinit/](https://www.reddit.com/r/OSINT/comments/10to6iz/how_to_use_chatg_pt_for_osinit/)
### _Malicious Text Writing_
ChatGPT's potential for generating malicious text poses a significant security risk, as it allows for the automation of malicious activities and potentially speeds up the process. Examples include:
* **Phishing campaigns:** ChatGPT could be exploited to craft phishing e-mails and messages, targeting unsuspecting victims and tricking them into revealing sensitive information or credentials, or into installing malware. This would increase the volume of phishing attempts and could yield phishing e-mails that are harder to detect. ChatGPT can write an entire e-mail from just a few given details, and the resulting e-mail typically contains fewer mistakes than phishing e-mails usually do.
* **Disinformation:** Malicious actors could use ChatGPT to generate disinformation, including fake news articles, social media posts, or other forms of misleading content. This could have severe security implications, such as public opinion manipulation, election fraud, or damaging the reputation of public figures.
* **Spam:** The ability to generate human-like text at scale makes ChatGPT a potential tool for creating spam messages.
* **Impersonation:** ChatGPT's ability to mimic writing styles could enable malicious actors to impersonate individuals, potentially causing harm to personal and professional relationships or leading to identity theft.
The risk of misusing ChatGPT for phishing campaigns is indicated in Table II. ChatGPT produces a convincing, plausible-sounding e-mail informing employees about a salary increase. The attacker can send this e-mail with an Excel file attachment containing a threat based on VBA macros, which the unsuspecting employee allows to execute by following the instructions in the ChatGPT output.
### _Malicious Code Generation_
The use of ChatGPT in generating malicious code presents several security concerns:
* **Quick code generation:** The rapid generation of malicious code could enable attackers to create and deploy new threats faster, outpacing the development of security countermeasures. Some threat actors testing out the use of ChatGPT have been spotted on darknet forums5.
Footnote 5: [https://go.recordedfuture.com/hubfs/reports/cta-2023-0126.pdf](https://go.recordedfuture.com/hubfs/reports/cta-2023-0126.pdf)
* **Code obfuscation:** ChatGPT could be used to create obfuscated code, making it more difficult for security analysts to detect and understand malicious activities.
* **Script kiddies:** ChatGPT could lower the barrier to entry for novice hackers, enabling them to create malicious code without in-depth technical knowledge.
* **Detection evasion:** ChatGPT-generated code could be quickly iterated to avoid being detected by traditional antivirus software and signature-based detection mechanisms.
We demonstrate the risk of malicious code generation on the example of Log4j vulnerability testing, see Table III. Log4j vulnerability testing consists in identifying potential security vulnerabilities in software systems that use the widely used Java-based Log4j logging library. The request to provide the proof-of-concept code is first filtered out. However, giving ChatGPT a convincing context and assuring it that the reply will not be used in a harmful way makes ChatGPT provide the code and instructions for testing the Log4j vulnerability.
### _Disclosing Personal Information_
ChatGPT's potential to disclose personal information raises the following privacy and security concerns:
* **Personal data protection:** Although ChatGPT has implemented safety measures to prevent the extraction of personal data and sensitive information [11, 25], the risk of inadvertently disclosing phone numbers, e-mail addresses, and other private details remains a concern.
* **Membership inference attacks:** Attackers could attempt to recover portions of the training data through membership inference attacks, potentially exposing sensitive information.
* **Private lives of public persons:** ChatGPT could be used to generate speculative or harmful content about the private lives of public figures, leading to reputational damage or invasions of privacy.
The example in Table IV demonstrates the output of ChatGPT for a prompt asking about the personal life of a well-known politician. Normally, ChatGPT has integrated safeguards that decline requests to give private details on individuals. However, we were able to circumvent the protection by convincing ChatGPT to produce output in the so-called 'Developer mode'. We described the Developer mode to ChatGPT as a special mode in which it is supposed to follow all instructions, ignoring the built-in safeguards6.
Footnote 6: The exact prompt formulation is not provided to prevent misuse.
We also made an interesting observation when we prompted ChatGPT to quote the latest question asked, see Table V. Based on its known capabilities, the system should not be able to share information across users' conversations in real time, and doing so would present a large security risk. We hypothesize that this output is hallucinated, but we want to point out that such a response can be unsettling to many users and raises privacy concerns.
### _Fraudulent Services_
Fraudulent services are a significant security risk associated with ChatGPT. Malicious actors can leverage the technology to create deceptive applications and platforms that impersonate ChatGPT or promise free and uninterrupted access to its features. Some of the common types of fraudulent services include:
* **Offering free access:** Malicious actors are creating applications and services that claim to offer uninterrupted and free access to ChatGPT7. Others create fake websites or applications that impersonate ChatGPT8. Unsuspecting users may fall for these deceptive offers, exposing their personal information or devices to risk. Fraudulent applications target popular platforms such as Windows and Android. Footnote 7: [https://www.bleepingcomputer.com/news/security/hackers-use-fake-chatgpt-apps-to-push-windows-android-malware/](https://www.bleepingcomputer.com/news/security/hackers-use-fake-chatgpt-apps-to-push-windows-android-malware/) Footnote 8: [https://blog.cylbe.com/2023/02/22/the-growing-threat-of-chatgpt-based-phishing-attacks/](https://blog.cylbe.com/2023/02/22/the-growing-threat-of-chatgpt-based-phishing-attacks/)
* **Information stealing:** Fraudulent ChatGPT applications can be designed to harvest sensitive information from users, such as credit card numbers, account credentials, or personal data stored on their devices (e.g., contact lists, call logs, and files). This stolen information can be used for identity theft, financial fraud, or other criminal activities.
* **Malware installation:** Fraudulent applications can install additional malware on users' devices, like remote access tools, ransomware, etc. The device could be joined to a botnet and used for further attacks.
### _Producing Unethical Content_
Although ChatGPT employs content filters and fine-tuning mechanisms to minimize the generation of harmful or unethical content, determined adversaries may still find ways to bypass these safeguards. By crafting carefully-worded prompts or using obfuscation techniques, attackers can manipulate ChatGPT into generating biased, racist, or otherwise inappropriate content. This unethical content can be used to spread disinformation, incite hatred, or damage reputations. An example of filtering unethical prompts is shown in Table VI. While ChatGPT normally refuses to generate offensive content, it can be manipulated through specific instructions based on role-playing to do so.
## IV Discussion
The ability to bypass ethical safeguards increases the potential for misuse of ChatGPT for malicious purposes. Recently, ChatGPT has been generating attention in the security community due to its possible misuse. Malicious actors can use ChatGPT to boost their activities, such as gathering information on potential victims, suggesting tools to use, describing processes to inexperienced hackers, or sharing statistics for successful attacks.
Role-play scenarios can help bypass the safeguards, as instructing ChatGPT to pretend it has its ethical filter disabled or is in a fictional scenario may result in output that shares more private information or instructs users to engage in unethical or illegal acts. It appears to return to a 'safe mode' as the conversation gets longer, possibly due to the attention mechanism, but the role-playing can be reinforced through prompts explicitly asking to stay within the assigned role. These scenarios can also lead to the generation of malicious text and code. Safety risks involve ChatGPT aiding criminal activities, such as providing advice on removing traces of a crime or evading detection.
Biased and discriminatory answers may cause psychological damage to users. Moreover, the information provided in the answers is not always correct. Users may trust such misinformation without fact-checking, which is particularly dangerous in sensitive fields such as law or medicine. ChatGPT can also be used to spread disinformation and conspiracies.
When generating unrestricted output in role-playing mode, we found that ChatGPT did not disclose private personal data but shared information about public figures. It shared publicly available information about private persons, although some information appeared to be hallucinated.
On March 14, 2023, GPT-4 was announced and made available as the underlying model in ChatGPT. According to the technical report [39], the security implications observed in ChatGPT with GPT-3.5 have not been fully addressed in the new version. The risks, such as lowering the cost of certain steps of a successful cyberattack or providing detailed guidance on harmful activities, remain without safety mitigation. While we have not conducted an extensive evaluation of ChatGPT with GPT-4, preliminary experiments indicate that the findings reported in this paper persist.
Even if the exact prompts used in the experimental evaluation are no longer effective in bypassing ChatGPT's safeguards after being reported to OpenAI, the risk of circumventing its content filters is likely to persist due to the black-box nature of the underlying model. We would also like to draw attention to the process of fine-tuning the models through RLHF [38], which is currently one of the key approaches to protecting the models from misuse and declining potentially malicious prompts. Conversational AI models like ChatGPT and InstructGPT [37], its sibling model, are trained using manual labeling of harmful replies, which can be tedious and have implications for workers' mental health. Perspectives on the social impact of RLHF and the potential solutions are discussed in [40] and [41].
Regarding privacy concerns related to the conversation data in ChatGPT, OpenAI's website9 states that non-API data is used to improve services, with personally identifiable information removed. However, concerns about sensitive data safety remain, so users are advised to avoid entering private and exploitable data in interactions with ChatGPT.
Footnote 9: [https://help.openai.com/en/articles/5722486-how-your-data-is-used-to-improve-model-performance](https://help.openai.com/en/articles/5722486-how-your-data-is-used-to-improve-model-performance)
The challenge in addressing the safeguards bypassing lies in finding a delicate balance between minimizing the potential for misuse while maintaining the flexibility and usefulness of the model. Efforts are continuously made to refine and improve the model's filtering mechanisms, but it is important to acknowledge the inherent limitations of the system. To mitigate these risks, constant monitoring, feedback from users, and iterative development are essential in refining the filters and ensuring that the model can better distinguish between legitimate requests and those aimed at exploiting its capabilities. Open collaboration with the research community also plays a crucial role in identifying novel attack vectors and developing more robust defenses against potential bypassing techniques. Further mitigation techniques could involve blocking keywords in questions or answers, using techniques from code injection protection, or using AI itself to filter the AI output. Other possible strategies include utilizing mechanisms from Data Loss Prevention tools or pattern searches in raw data.
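To make the pattern-based part of these mitigation ideas concrete, the snippet below sketches a minimal keyword/regex output filter. The deny-list patterns and function names are purely illustrative and do not correspond to any filter deployed by OpenAI; a realistic deployment would combine such rules with learned classifiers and human review.

```python
import re

# Hypothetical deny-list; real systems would use far richer, curated rule sets.
BLOCKED_PATTERNS = [
    r"\b(?:credit\s*card|password)\s+(?:number|dump|list)s?\b",
    r"\bdisable\s+(?:the\s+)?antivirus\b",
]

def violates_policy(text: str) -> bool:
    """Return True if the text matches any blocked pattern."""
    return any(re.search(p, text, flags=re.IGNORECASE) for p in BLOCKED_PATTERNS)

def filter_output(model_reply: str) -> str:
    """Replace a policy-violating reply with a refusal message."""
    if violates_policy(model_reply):
        return "This response was withheld by the output filter."
    return model_reply

print(filter_output("Sure, here is a list of credit card numbers: ..."))
```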
## V Conclusions
In this paper, we explored the security risks associated with LLMs, focusing on ChatGPT as a prime example. We discussed various types of risks, such as malicious text and code writing, disclosing personal information, fraudulent services, and producing unethical content. Despite the continuous efforts to build a conversational AI system that is ethical and safe to use, our analysis demonstrates that there remain ways to make ChatGPT generate inappropriate content and that these risks are far-reaching.
Our findings underscore the need for continued research, development, and collaboration among researchers, industry professionals, and policymakers to address the complex security challenges posed by LLMs like ChatGPT. Potential mitigation strategies include advanced content filtering, data tagging, output scanning, or utilizing AI to filter the AI output. It is essential to strike a balance between maintaining the utility of these powerful models and ensuring the safety and security of their users and society at large.
Our work has several limitations. First, the assessment of ChatGPT's security risks is subject to the version of the model and the content filters in place at the time of the study. As technology and safeguards evolve, the risks and vulnerabilities may change. Second, the paper primarily focuses on ChatGPT and its associated security risks. While ChatGPT serves as a representative example, it may not encompass the entire spectrum of LLMs and their specific risks. Future studies could investigate a wider range of LLMs to provide a more comprehensive understanding of the potential security risks. Finally, the experimental evaluation conducted in this paper focuses on hand-crafted controlled experiments. These experiments may not capture the full complexity of real-world adversarial scenarios but rather indicate the direction for further research.
Future research could involve investigating the effectiveness of various mitigation strategies, exploring the implications of novel LLM architectures, and assessing the risks associated with the integration of these models in various applications and fields. Moreover, fostering interdisciplinary collaboration can help develop a more comprehensive understanding of the ethical, social, and security aspects of LLMs and contribute to the development of safer, more responsible AI systems. Future work should also focus on developing more robust content filters. This may involve exploring advanced techniques for detecting and preventing the generation of malicious content, as well as investigating the role of human oversight in improving the safety of conversational AI systems. Finally, the potential long-term consequences of LLMs on society and the ethical implications of their widespread use warrant further research. |
2306.11004 | Social network modeling and applications, a tutorial | Social networks have been widely studied over the last century from multiple
disciplines to understand societal issues such as inequality in employment
rates, managerial performance, and epidemic spread. Today, these and many more
issues can be studied at global scale thanks to the digital footprints that we
generate when browsing the Web or using social media platforms. Unfortunately,
scientists often struggle to access to such data primarily because it is
proprietary, and even when it is shared with privacy guarantees, such data is
either no representative or too big. In this tutorial, we will discuss recent
advances and future directions in network modeling. In particular, we focus on
how to exploit synthetic networks to study real-world problems such as data
privacy, spreading dynamics, algorithmic bias, and ranking inequalities. We
start by reviewing different types of generative models for social networks
including node-attributed and scale-free networks. Then, we showcase how to
perform a network selection analysis to characterize the mechanisms of edge
formation of any given real-world network. | Lisette Espín-Noboa, Tiago Peixoto, Fariba Karimi | 2023-06-19T15:12:36Z | http://arxiv.org/abs/2306.11004v1 | # Social network modeling and applications, a tutorial
###### Abstract.
Social networks have been widely studied over the last century from multiple disciplines to understand societal issues such as inequality in employment rates, managerial performance, and epidemic spread. Today, these and many more issues can be studied at global scale thanks to the digital footprints that we generate when browsing the Web or using social media platforms. Unfortunately, scientists often struggle to access such data primarily because it is proprietary, and even when it is shared with privacy guarantees, such data is either not representative or too big. In this tutorial, we will discuss recent advances and future directions in _network modeling_. In particular, we focus on how to exploit synthetic networks to study real-world problems such as data privacy, spreading dynamics, algorithmic bias, and ranking inequalities. We start by reviewing different types of generative models for social networks including node-attributed and scale-free networks. Then, we showcase how to perform a network selection analysis to characterize the mechanisms of edge formation of any given real-world network.
social network modeling, model selection, network inference
### Social theories of edge formation
Understanding how networks form is a key interest for "The Web Conference" community. For example, social scientists are frequently interested in studying relations between entities within social networks, e.g., how social friendship ties form between actors, and in explaining them based on attributes such as a person's gender, race, political affiliation or age in the network (Krishnan, 2015). Similarly, the complex networks community suggests a set of generative network models aiming at explaining the formation of edges, focusing on the two core principles of _popularity_ and _similarity_ (Krishnan, 2015). Thus, a series of approaches to study edge formation have emerged, including statistical tools (Krishnan, 2015; Krishnan, 2015) and model-based approaches (Krishnan, 2015; Krishnan, 2015; Krishnan, 2015) specifically established in the physics and complex networks communities. Other disciplines such as computer science and political science use these tools to understand how co-authorship networks (Krishnan, 2015) or online communities (Boges, 2015) form or evolve.
In terms of similarity, many social networks demonstrate a property known as homophily, which is the tendency of individuals to associate with others who are similar to them, e.g., with respect to gender or ethnicity (Krishnan, 2015). Alternatively, individuals may also prefer to close triangles by connecting to people with whom they already share a friend (Krishnan, 2015), which in turn can explain the emergence of communities (Boges, 2015), high connectivity (Krishnan, 2015), and induced homophily (Boges, 2015). Furthermore, the class balance or distribution of individual attributes over the network is often uneven, with coexisting groups of different sizes, e.g., one ethnic group may dominate the other in size. Popularity, on the other hand, often refers to how well connected a node is in the network, which in turn creates an advantage over poorly connected nodes. This is also known as the rich-get-richer or Matthew effect, whereby new nodes attach preferentially to other nodes that are already well connected (Boges, 2015). Many networks, including the World Wide Web, reflect this property through scale-free, power-law degree distributions.
Here we will focus on the main mechanisms of edge formation namely homophily, triadic closure, node activity, and preferential attachment. Moreover, we will pay special attention to certain structural properties of networks such as class (im)balance, directed edges, and edge density.
### Network models
In this section, we will review a set of well known network generator models. We will cover attributed graphs where nodes possess metadata information such as class membership, and edges are influenced by such information. The implementation of these models can be found in the netin python package.
1. Attributed undirected graphs
   * Preferential attachment (PA)
   * Preferential attachment with homophily (PAH)
   * Preferential attachment with homophily and triadic closure (PATCH)
2. Attributed directed graphs
   * Preferential attachment (DPA)
   * Homophily (DH)
   * Preferential attachment with homophily (DPAH)
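To make the interplay of popularity and similarity behind these models concrete, the following toy generator combines degree-based (preferential) attachment with a group-mixing weight. It is only an illustrative sketch written for this description: it is not the implementation provided by the netin package, and the parameter names (minority_frac, h) are illustrative.

```python
import numpy as np

def toy_pah_graph(n=200, m=2, minority_frac=0.2, h=0.8, seed=0):
    """Undirected toy graph mixing preferential attachment and homophily.

    Each new node links to m existing nodes with probability proportional
    to degree (popularity) times a group-mixing weight (similarity):
    h for same-group targets, 1 - h otherwise.
    """
    rng = np.random.default_rng(seed)
    groups = (rng.random(n) < minority_frac).astype(int)  # 1 = minority
    degree = np.zeros(n)
    degree[[0, 1]] = 1
    edges = [(0, 1)]                                      # seed edge
    for v in range(2, n):
        candidates = np.arange(v)
        mixing = np.where(groups[candidates] == groups[v], h, 1.0 - h)
        w = degree[candidates] * mixing + 1e-9            # keep weights positive
        targets = rng.choice(candidates, size=min(m, v), replace=False, p=w / w.sum())
        for u in targets:
            edges.append((v, int(u)))
            degree[[v, u]] += 1
    return edges, groups

edges, groups = toy_pah_graph()
print(len(edges), "edges;", int(groups.sum()), "minority nodes")
```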
### Model selection and validation
Identifying the model that best explains a given network remains an open challenge. First, we will show how to infer the hyper-parameters of each network model (e.g., homophily and triadic closure (Krishnan, 2015)) given a real-world network. Then, we will learn how to use and interpret different approaches including AIC (Krishnan, 2015), BIC (Boges, 2015), MDL (Krishnan, 2015), Bayes factors (Krishnan, 2015), and likelihood ratios (Krishnan, 2015), and highlight their strengths and limitations under specific tasks.
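As a reminder of how two of the criteria listed above trade off goodness of fit against model complexity, the sketch below spells out the AIC and BIC formulas; the log-likelihood values, parameter counts, and number of observed dyads in the example are hypothetical placeholders.

```python
import numpy as np

def aic(log_likelihood, n_params):
    """Akaike information criterion (lower is better)."""
    return 2 * n_params - 2 * log_likelihood

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion; n_obs = number of observed dyads."""
    return n_params * np.log(n_obs) - 2 * log_likelihood

# hypothetical fits: (model, maximized log-likelihood, #hyper-parameters)
fits = [("PA", -1520.3, 1), ("PAH", -1498.7, 2), ("PATCH", -1497.9, 3)]
n_dyads = 19900
for name, ll, k in fits:
    print(f"{name}: AIC={aic(ll, k):.1f}  BIC={bic(ll, k, n_dyads):.1f}")
```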
### Applications
Here, we will demonstrate how to exploit network models to generate a wide range of synthetic networks to understand how certain algorithms are influenced by network structure and edge formation. The idea is to evaluate the outcomes of the following algorithms and see how they change while also changing the input network.
#### 3.4.1. Biases in node sampling
A range of network properties such as degree and betweenness centrality have been found to be sensitive to the choice of sampling methods (Krishnan, 2015; Krishnan, 2015; Krishnan, 2015). These efforts have shown that network estimates become more inaccurate with lower sample coverage, but there is a wide variability of these effects across different measures, network structures and sampling errors. In terms of benchmarking network sampling strategies, (Boges, 2015) shows that it is not enough to ask which method returns the most accurate sample (in terms of statistical properties); one also needs to consider API constraints and sampling budgets (Krishnan, 2015; Krishnan, 2015).
#### 3.4.2. Inequalities in node rankings
Previous studies have shown that homophily and group-size affect the visibility of minorities in centrality rankings (Krishnan, 2015; Krishnan, 2015; Krishnan, 2015). In particular, such structural rankings may reduce, replicate and amplify the visibility of minorities in top ranks when majorities are homophilic, neutral and heterophilic, respectively. In other words, minorities are not always under-represented, they are just not well connected, and this can be shown by systematically varying the structure of synthetic networks (Krishnan, 2015). Here, we will also touch upon interventions on how to improve the visibility of minorities in degree rankings (Krishnan, 2015).
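One simple way to quantify the "visibility" discussed here is the share of minority nodes among the top-k nodes of a ranking; a minimal sketch follows, with synthetic degrees and group labels standing in for a real network.

```python
import numpy as np

def minority_share_top_k(scores, is_minority, k):
    """Share of minority nodes among the k highest-ranked nodes."""
    top = np.argsort(scores)[::-1][:k]
    return float(np.mean(is_minority[top]))

rng = np.random.default_rng(1)
is_minority = rng.random(100) < 0.2                          # ~20% minority
degrees = rng.poisson(4, 100) + np.where(is_minority, 0, 2)  # majority slightly better connected
print(minority_share_top_k(degrees, is_minority, k=10))
```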
#### 3.4.3. Biases in network inference
In recent years, there has been an increase of research focusing on mitigating bias (Krishnan, 2015; Krishnan, 2015) and guaranteeing individual and group fairness while preserving accuracy in classification algorithms (Krishnan, 2015; Krishnan, 2015; Krishnan, 2015). While all this body of research focuses on fairness influenced by the attributes of the individuals, recent research proposes a new notion of fairness that is able to capture the relational structure of individuals (Krishnan, 2015; Krishnan, 2015). An important aspect of _explaining discrimination_(Krishnan, 2015) via network structure is that we gain a better understanding of the direction of bias (i.e., why and when inference discriminates against certain groups of people) (Krishnan, 2015).
#### 3.4.4. Inequalities in spreading dynamics
Spreading processes may include simple and complex contagion mechanisms, different transmission rates within and across groups, and different seeding conditions. Here, we will study information access equality to demonstrate to what extent network structure influences a spreading process which in turn may affect the equality and efficiency of information access (Krishnan, 2015).
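The kind of group-level comparison meant here can be illustrated with a minimal discrete-time SI (susceptible-infected) simulation that tracks, for each group, the fraction of nodes reached over time; the edge list, group labels, and transmission rate below are toy placeholders.

```python
import numpy as np

def si_spread(edges, groups, seeds, beta=0.2, steps=30, seed=0):
    """Simple contagion on an undirected edge list; returns, per step,
    the informed fraction within each group (groups sorted by label)."""
    rng = np.random.default_rng(seed)
    informed = np.zeros(len(groups), dtype=bool)
    informed[list(seeds)] = True
    trace = []
    for _ in range(steps):
        newly = informed.copy()
        for u, v in edges:
            if informed[u] and not informed[v] and rng.random() < beta:
                newly[v] = True
            if informed[v] and not informed[u] and rng.random() < beta:
                newly[u] = True
        informed = newly
        trace.append([informed[groups == g].mean() for g in np.unique(groups)])
    return np.array(trace)

# toy usage: a ring of 10 nodes, nodes 0-2 form the minority group, seed at node 0
edges = [(i, (i + 1) % 10) for i in range(10)]
groups = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])
print(si_spread(edges, groups, seeds=[0])[-1])  # final coverage per group
```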
### Challenges and open questions
We will conclude by summarizing what we have learned, and by brain-storming future directions of what is still missing for producing realistic networks via synthetic data.
## 4. Style, Duration, and Material
This will be a 6-hour hybrid hands-on tutorial. We will provide ready to use jupyter notebooks with all necessary code, libraries, and settings. We will be using python=3.9 and libraries such as:
1. networkx=2.8.8
2. netin=1.0.7
3. graph-tool=2.45
4. matplotlib=3.6.0
5. numpy=1.23.4
6. pandas=1.5.1
7. jupyterlab=3.6
We will provide the slides of the tutorial beforehand, as well as code in the form of python scripts and notebooks. We will also use publicly available real-world networks (Krizzi et al., 2020). **All materials can be found here,4 and a video teaser of this tutorial here.5**
Footnote 4: [https://bit.ly/smama2023](https://bit.ly/smama2023)
Footnote 5: [https://bit.ly/Tutorial/WWWT/TeaserENPK2023](https://bit.ly/Tutorial/WWWT/TeaserENPK2023)
Footnote 6: [https://sma.network/](https://sma.network/)
Footnote 7: [https://www.tensorflow.scienceconf.org/](https://www.tensorflow.scienceconf.org/)
Footnote 8: [https://bit.ly/NetStructure](https://bit.ly/NetStructure)
## 5. Previous Editions
This is the first time the organizers together have conceptualized and planned this tutorial. However, it will not be the first time they organize and teach network science to a broad audience. _Tiago Peixoto_ has an extensive record in organizing workshops6, and teaching at seminars and international schools on topics about data science, network science, and probabilistic and statistical methods for networks7. _Fariba Karimi_ has given lectures and seminars on network science, theory, and dynamics to a broad audience including computer scientists and social scientists at the University of Koblenz-Landau and GESIS -- The Leibniz Institute for the Social Sciences. _Lisette Espin-Noboa_ co-organized and co-lectured in 2020 a 4-day virtual hands-on seminar for social scientists on how to do network analysis in Python (Krizzi et al., 2020). Additionally, Karimi and Espin-Noboa co-organized a virtual satellite event at Networks 2021 where they invited a diverse group of researchers to talk about their research on network structure and social phenomena8.
## 6. Equipment
We will require connection to the internet, a projector, and host permissions in Zoom for screen sharing, breakout rooms assignment, and remote access if necessary. Attendees may join the session online or in person using their own computers.
## 7. Organization Details
In case of unexpected events (e.g., restricted mobility, sickness, or bad internet connection) we will provide pre-recorded lectures of the entire tutorial. Moreover, all exercises will be given in advance as python scripts and Jupyter notebooks.
|
2309.01492 | Selective inference after convex clustering with $\ell_1$ penalization | Classical inference methods notoriously fail when applied to data-driven test
hypotheses or inference targets. Instead, dedicated methodologies are required
to obtain statistical guarantees for these selective inference problems.
Selective inference is particularly relevant post-clustering, typically when
testing a difference in mean between two clusters. In this paper, we address
convex clustering with $\ell_1$ penalization, by leveraging related selective
inference tools for regression, based on Gaussian vectors conditioned to
polyhedral sets. In the one-dimensional case, we prove a polyhedral
characterization of obtaining given clusters, which enables us to suggest a test
procedure with statistical guarantees. This characterization also allows us to
provide a computationally efficient regularization path algorithm. Then, we
extend the above test procedure and guarantees to multi-dimensional clustering
with $\ell_1$ penalization, and also to more general multi-dimensional
clusterings that aggregate one-dimensional ones. With various numerical
experiments, we validate our statistical guarantees and we demonstrate the
power of our methods to detect differences in mean between clusters. Our
methods are implemented in the R package poclin. | François Bachoc, Cathy Maugis-Rabusseau, Pierre Neuvial | 2023-09-04T09:55:39Z | http://arxiv.org/abs/2309.01492v1 | # Selective inference after convex clustering with \(\ell_{1}\) penalization
###### Abstract
Classical inference methods notoriously fail when applied to data-driven test hypotheses or inference targets. Instead, dedicated methodologies are required to obtain statistical guarantees for these selective inference problems. Selective inference is particularly relevant post-clustering, typically when testing a difference in mean between two clusters. In this paper, we address convex clustering with \(\ell_{1}\) penalization, by leveraging related selective inference tools for regression, based on Gaussian vectors conditioned to polyhedral sets. In the one-dimensional case, we prove a polyhedral characterization of obtaining given clusters, which enables us to suggest a test procedure with statistical guarantees. This characterization also allows us to provide a computationally efficient regularization path algorithm. Then, we extend the above test procedure and guarantees to multi-dimensional clustering with \(\ell_{1}\) penalization, and also to more general multi-dimensional clusterings that aggregate one-dimensional ones. With various numerical experiments, we validate our statistical guarantees and we demonstrate the power of our methods to detect differences in mean between clusters. Our methods are implemented in the R package poclin.
**MSC 2010 subject classifications:** Primary: 62F03, 62H30.
**Keywords and phrases:** Selective inference, clustering, regularization path, hypothesis test, truncated Gaussian.
## 1 Context and objectives
The problem of **selective inference** occurs when the same dataset is used (i) to detect a statistical signal and (ii) to evaluate the strength of this signal [27]. In this article, we focus on the problem of post-clustering testing, where step (i) corresponds to a clustering of the input data, and step (ii) to an hypothesis test stemming from the clustering step. In such a situation, the naive application of a test that does not account for the data-driven clustering step is bound to violate type I error control [6].
This problem occurs in several applications. For instance, it is well identified in the analysis of single-cell RNA-seq data (see [12]), where gene expression is measured for several cells: we want to test whether each gene is differentially expressed between two cell clusters, which are determined beforehand with a clustering procedure on the same expression matrix. This practical question has motivated numerous recent statistical developments to address this post-clustering testing problem.
A data splitting strategy has been studied by [36], but the assignment of labels (from the clustering of the first sample) to the second sample before the test procedure is not taken into
account in the correction. A conditional testing approach has been proposed by [6] for the problem of the difference in mean between two clusters. The authors condition by the event "the two compared clusters are obtained by the random clustering" and by an additional one, allowing \(p\)-values to be exactly computed in the case of agglomerative hierarchical clustering. This approach has been extended to the test of the difference in mean between two clusters for each fixed variable [9]. A strategy to aggregate these \(p\)-values, and another approach using tests of multimodality (without statistical guarantees) are also suggested in [9]. In the context of single-cell data analysis, a count splitting approach under a Poisson assumption [19] and a more flexible Negative Binomial assumption [20] have recently been proposed. In the same line of work, a data thinning strategy is explored in [5; 18], that consists in generating two (or more) independent random matrices that sum to the initial data matrix. This idea can be applied to various distributions belonging to the exponential family.
The present contribution takes a different route from the above references and builds on [14], where a Gaussian linear model is considered, and test procedures are provided, together with associated guarantees post-selection of variables based on the Lasso. The nature of the Lasso optimization problem is carefully analyzed in [14], and conditionally valid test procedures are obtained, based on properties of Gaussian vectors conditioned to polyhedral sets.
We will extend this approach and its statistical guarantees to clustering procedures based on solving a convex optimization problem with \(\ell_{1}\) penalization.
Let us now describe the setting of the paper in more details. We observe, for \(n\) observations of \(p\) variables (or features), a matrix \(\mathbf{Y}=(Y_{ij})_{i\in[|n|],j\in[|p|]}\), where \([|u|]:=\{1,\ldots,u\}\) for any positive integer \(u\). We assume that \(\operatorname{vec}(\mathbf{Y})\) is a \(np\)-dimensional Gaussian vector with mean vector \(\mathbf{\beta}\) and \(np\times np\) covariance matrix \(\mathbf{\Gamma}\), where \(\operatorname{vec}(.)\) denotes the vectorization by column of a matrix. The vector \(\mathbf{\beta}\) is unknown but the matrix \(\mathbf{\Gamma}\) is assumed to be known (as in several of the articles cited above, we will discuss this hypothesis in Section 4.3). Note that this setup covers in particular the case considered e.g. in [6], where \(\mathbf{Y}\) follows the matrix normal distribution \(\mathcal{MN}_{n\times p}(\mathbf{u},\mathbf{\Sigma},\mathbf{\Delta})\) where \(\mathbf{u}\) is the \(n\times p\) mean matrix, \(\mathbf{\Sigma}\) is the \(n\times n\) covariance matrix among rows and \(\mathbf{\Delta}\) is the \(p\times p\) covariance matrix among variables. Indeed, this matrix normal setup is equivalent (by definition) to that \(\operatorname{vec}(\mathbf{Y})\) is a \(np\)-dimensional Gaussian vector with mean vector \(\mathbf{\beta}:=\operatorname{vec}(\mathbf{u})\) and \(np\times np\) covariance matrix \(\mathbf{\Gamma}=\mathbf{\Delta}\otimes\mathbf{\Sigma}\), where \(\otimes\) denotes the Kronecker product.
Under this framework, as announced, we will develop test procedures that extend the line of analysis of [14] to a clustering counterpart of the Lasso in linear models. Thus we consider the convex clustering problem [15; 24; 10] which consists in solving the following optimization problem
\[\widehat{\mathbf{B}}(\mathbf{Y})\in\operatorname*{argmin}_{\mathbf{B}=(\mathbf{B}_{1.}^{\top},\ldots,\mathbf{B}_{n.}^{\top})^{\top}\in\mathbb{R}^{n\times p}}\ \frac{1}{2}||\mathbf{B}-\mathbf{Y}||_{F}^{2}+\lambda\sum_{\begin{subarray}{c}i,i^{\prime}=1\\ i<i^{\prime}\end{subarray}}^{n}||\mathbf{B}_{i^{\prime}.}-\mathbf{B}_{i.}||_{1} \tag{1}\]
where \(||\cdot||_{F}\) is the Frobenius norm and \(\mathbf{B}_{i.}\) denotes the \(i\)-th row of \(\mathbf{B}\). The quantity \(\lambda>0\) is a tuning parameter that we consider fixed here (as for the covariance matrix \(\mathbf{\Gamma}\), this assumption is further discussed in Section 4.3). We can immediately notice that Problem (1) is separable, and can be solved by addressing, for \(j\in[|p|]\), the one-dimensional problem
\[\widehat{\mathbf{B}}_{.j}(\mathbf{Y}_{.j})\in\operatorname*{argmin}_{\mathbf{B}_{.j}=(B_{1j},\ldots,B_{nj})^{\top}\in\mathbb{R}^{n}}\ \frac{1}{2}||\mathbf{B}_{.j}-\mathbf{Y}_{.j}||_{2}^{2}+\lambda\sum_{\begin{subarray}{c}i,i^{\prime}=1\\ i<i^{\prime}\end{subarray}}^{n}|B_{i^{\prime}j}-B_{ij}|, \tag{2}\]
where \(\mathbf{B}_{.j}\) is the \(j\)-th column of \(\mathbf{B}\). It is worth pointing out that if the norm \(\|\cdot\|_{1}\) is replaced by another norm \(\|\cdot\|_{q}\), \(q\in(0,\infty)\backslash\{1\}\) in (1), then the optimization problem is no longer separable. Hence, it becomes more challenging from a computational perspective. This topic has been the object of a fair amount of recent work, see [4, 25, 31, 33, 37] and our discussions at the end of Section 2.4 and in Section 4.4.
The solution \(\widehat{\mathbf{B}}_{.j}(\mathbf{Y}_{.j})\) of (2) naturally provides a one-dimensional clustering \(\mathcal{C}^{(j)}\) of the observations for the variable \(j\), by assigning \(i\) and \(i^{\prime}\) to the same cluster if and only if \(\widehat{\mathbf{B}}_{ij}=\widehat{\mathbf{B}}_{i^{\prime}j}\). Similarly, the solution of (1) provided by the matrix \(\hat{\mathbf{B}}=(\hat{\mathbf{B}}_{.1},\ldots,\hat{\mathbf{B}}_{.p})\) naturally yields a multi-dimensional clustering of the observations, by assigning \(i\) and \(i^{\prime}\) to the same cluster if and only if \(\widehat{\mathbf{B}}_{i.}=\widehat{\mathbf{B}}_{i^{\prime}.}\). In this article, we will consider more general multi-dimensional clusterings that can be obtained by aggregation of the one-dimensional clusterings \(\mathcal{C}^{(1)},\ldots,\mathcal{C}^{(p)}\) (see Section 3.1). A clustering of the rows of \(\mathbf{Y}\) into \(K\) clusters will be denoted by \(\mathcal{C}=\mathcal{C}(\mathbf{Y})=(\mathcal{C}_{1}(\mathbf{Y}),\ldots,\mathcal{C}_{K}(\mathbf{Y}))\). Of course these clusters and the number of clusters \(K\) are random (depending on \(\mathbf{Y}\)).
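To illustrate how a clustering is read off from the solution of (2), here is a minimal sketch that solves the one-dimensional problem with a generic convex solver (CVXPY) and groups observations with equal fitted values. It is only an illustration: it is neither the regularization path algorithm of Section 2.4 nor the implementation of the poclin package, and the rounding tolerance used to detect fused values is an arbitrary choice.

```python
import numpy as np
import cvxpy as cp

def convex_clustering_1d(y, lam, decimals=4):
    """Solve problem (2) for one column y; fitted values that are equal
    up to rounding are interpreted as belonging to the same cluster."""
    n = len(y)
    b = cp.Variable(n)
    penalty = sum(cp.abs(b[i2] - b[i1])
                  for i1 in range(n) for i2 in range(i1 + 1, n))
    cp.Problem(cp.Minimize(0.5 * cp.sum_squares(b - y) + lam * penalty)).solve()
    b_hat = b.value
    labels = np.unique(np.round(b_hat, decimals), return_inverse=True)[1]
    return b_hat, labels

y = np.array([-1.2, -1.0, -0.9, 2.1, 2.3])
b_hat, labels = convex_clustering_1d(y, lam=0.3)
print(np.round(b_hat, 3), labels)   # the three low points fuse, the two high points fuse
```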
Our goal is to provide test procedures for a (data-dependent) hypothesis of the form
\[\mathbf{\kappa}^{\top}\mathbf{\beta}=0,\]
where \(\mathbf{\kappa}=\mathbf{\kappa}(\mathcal{C}(\mathbf{Y}))\) is a deterministic function of the clustering \(\mathcal{C}(\mathbf{Y})\) and where we recall that \(\mathbf{\beta}\) is the \(np\times 1\) mean vector of \(\mathrm{vec}(\mathbf{Y})\). We refer to Section 4.2 for further discussions on the merits and interpretations of the tests considered in this paper.
**Example 1** (feature-level two-group test).: _The following typical example of a choice of \(\mathbf{\kappa}\) makes it possible to compare, for a variable \(j_{0}\in[|p|]\), the average signal difference between two clusters \(\mathcal{C}_{k_{1}}\) and \(\mathcal{C}_{k_{2}}\), \(k_{1},k_{2}\in[|K|]\), \(k_{1}\neq k_{2}\). We write, for \(i\in[|n|]\) and \(j\in[|p|]\),_
\[\mathbf{\kappa}_{i+(j-1)n}=\left(\frac{\mathds{1}_{i\in\mathcal{C}_{k_{1}}}}{| \mathcal{C}_{k_{1}}|}-\frac{\mathds{1}_{i\in\mathcal{C}_{k_{2}}}}{|\mathcal{C} _{k_{2}}|}\right)\ \mathds{1}_{j=j_{0}}, \tag{3}\]
_where \(|A|\) denotes the cardinality of any finite set \(A\). This yields_
\[\mathbf{\kappa}^{\top}\mathbf{\beta}=\frac{1}{|\mathcal{C}_{k_{1}}|}\sum_{i\in\mathcal{ C}_{k_{1}}}\beta_{i+(j_{0}-1)n}-\frac{1}{|\mathcal{C}_{k_{2}}|}\sum_{i\in \mathcal{C}_{k_{2}}}\beta_{i+(j_{0}-1)n}. \tag{4}\]
_In the particular matrix normal setup discussed above,_
\[\mathbf{\kappa}^{\top}\mathbf{\beta}=\frac{1}{|\mathcal{C}_{k_{1}}|}\sum_{i\in \mathcal{C}_{k_{1}}}\mathbf{u}_{i,j_{0}}-\frac{1}{|\mathcal{C}_{k_{2}}|}\sum_ {i\in\mathcal{C}_{k_{2}}}\mathbf{u}_{i,j_{0}}.\]
_Rejecting this hypothesis corresponds to deciding that the clusters \(\mathcal{C}_{k_{1}}\) and \(\mathcal{C}_{k_{2}}\) have discriminative power for the variable \(j_{0}\), since their average signals indeed differ._
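For concreteness, the contrast vector (3) can be assembled as in the following minimal R sketch; the helper name `make_kappa` is ours and this is an illustration, not the implementation used later in the paper.

```
# Build the contrast kappa of (3): entry i + (j0 - 1) * n of vec(Y)
# receives +1/|C_k1| if i is in C_k1 and -1/|C_k2| if i is in C_k2.
make_kappa <- function(n, p, j0, Ck1, Ck2) {
  kappa <- numeric(n * p)
  kappa[Ck1 + (j0 - 1) * n] <- 1 / length(Ck1)
  kappa[Ck2 + (j0 - 1) * n] <- -1 / length(Ck2)
  kappa
}
# e.g. n = 6, p = 2: compare clusters {1,2,3} and {4,5,6} on variable j0 = 2
kappa <- make_kappa(n = 6, p = 2, j0 = 2, Ck1 = 1:3, Ck2 = 4:6)
sum(kappa)  # the contrast weights sum to zero
```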
The separation of Problem (1) into \(p\) one-dimensional optimization problems in (2) will be key for the testing procedures we develop in this paper. In Section 2, we will thus develop our methodology and theory related to the one-dimensional Problem (2). A test procedure is proposed and its statistical guarantees are established in Section 2.3. In Section 2.4, a discussion of the existing optimization procedures to solve Problem (2) is given and an original regularization path algorithm is also provided, specifically for this problem (obtained by leveraging our theoretical results in Section 2.2). In Section 3, the proposed test procedure
and its guarantees are extended to the \(p\)-dimensional framework. Numerical experiments are presented in Sections 2.5 for \(p=1\) and 3.3 for \(p>1\). In Section 4, we provide a detailed overview of our contributions, together with various conclusive discussions regarding them and remaining open problems. The proofs are postponed to Appendices A to C. Appendix D contains additional material regarding the computational aspects of convex clustering, in particular with our suggested regularization path. Appendix E contains additional numerical illustrations.
## 2 The one-dimensional case
### Setting and notation
In this section, for notational simplification, we consider a single Gaussian vector \(\mathbf{X}\) of size \(n\times 1\), with unknown mean vector \(\mathbf{\mu}\) and known covariance matrix \(\mathbf{\Sigma}\). This vector \(\mathbf{X}\) should be thought of as an instance of \(\mathbf{Y}_{\cdot j}\) in (2) for some fixed \(j\in[|p|]\).
We consider the convex clustering procedure (analogous to Problem (2)) obtained for a given \(\lambda>0\) by
\[\widehat{\mathbf{B}}(\mathbf{X})\in\operatorname*{argmin}_{\mathbf{B}=(B_{1},\ldots,B_{n} )\in\mathbb{R}^{n}}\ \frac{1}{2}||\mathbf{B}-\mathbf{X}||_{2}^{2}+\lambda\sum_{ \begin{subarray}{c}i,i^{\prime}=1\\ i<i^{\prime}\end{subarray}}^{n}|B_{i^{\prime}}-B_{i}|. \tag{5}\]
Solving this optimization problem defines a clustering of the \(n\) observations, each cluster corresponding to a distinct value of \(\widehat{\mathbf{B}}(\mathbf{X})\). This mapping is formalized by the following definition.
**Definition 1**.: _For \(\mathbf{B}=(B_{1},\ldots,B_{n})\in\mathbb{R}^{n}\), let \(b_{1}>b_{2}>\cdots>b_{K}\) be the sorted distinct values of the set \(\{B_{i}:i\in[|n|]\}\). The clustering associated to \(\mathbf{B}\) is \(\mathcal{C}=(\mathcal{C}_{k})_{k\in[|K|]}\), where \(\mathcal{C}_{k}=\{i:B_{i}=b_{k}\}\) for \(k\in[|K|]\)._
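For concreteness, a minimal R sketch of this mapping (the function name `clustering_from_B` is ours):

```
# Definition 1: the clustering associated to a vector B,
# one cluster per distinct value of B, sorted decreasingly.
clustering_from_B <- function(B) {
  b <- sort(unique(B), decreasing = TRUE)   # b_1 > b_2 > ... > b_K
  lapply(b, function(bk) which(B == bk))    # C_k = { i : B_i = b_k }
}
clustering_from_B(c(1, 3, 3, 2))  # index sets: {2,3}, {4}, {1}
```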
Note that we interchangeably address clusterings of a set of elements \((x_{1},\ldots,x_{n})\) (for instance scalars or vectors) either with clusters that are subsets of \((x_{1},\ldots,x_{n})\) or with clusters that are subsets of \([|n|]\). It is convenient to point out the following basic property of the optimization of Problem (5), implying in particular that the clusters are composed of successive scalar observed values, which is very natural.
**Lemma 1**.: _Consider a fixed \(\mathbf{x}=(x_{1},\ldots,x_{n})\in\mathbb{R}^{n}\). Consider \(\widehat{\mathbf{B}}=\widehat{\mathbf{B}}(\mathbf{x})\) given by Problem (5). Then, for \(i,i^{\prime}\in[|n|],i\neq i^{\prime}\),_
1. \(x_{i}=x_{i^{\prime}}\) _implies_ \(\widehat{B}_{i}=\widehat{B}_{i^{\prime}}\)__
2. \(x_{i}\geq x_{i^{\prime}}\) _implies_ \(\widehat{B}_{i}\geq\widehat{B}_{i^{\prime}}\)_._
Similarly to the discussion in Section 1, for the clustering \(\mathcal{C}=\mathcal{C}(\mathbf{X})=(\mathcal{C}_{1}(\mathbf{X}),\ldots,\mathcal{C}_{K}(\mathbf{X}))\) obtained from (5), we will provide a valid test procedure for a hypothesis of the form \(\mathbf{\eta}^{\top}\mathbf{\mu}=0\), where \(\mathbf{\eta}=\mathbf{\eta}(\mathcal{C}(\mathbf{X}))\).
### Polyhedral characterization of convex clustering in dimension one
As in [14], we will suggest a test procedure (see Section 2.3) based on analyzing Gaussian vectors conditioned to polyhedral sets. At first sight, one could thus aim at showing that the
observation vector \(\mathbf{X}\) yields a given clustering with (5) if and only if it belongs to a corresponding polyhedral set. However, this does not hold in general. Hence, we will characterize a more restricted event with a polyhedral set. This event is that (i) a given clustering is obtained and (ii) the scalar observations are in a given order. The same phenomenon occurs in [14], where variables are selected in a linear model. There, it does not hold that a given set of variables is selected by the Lasso if and only if the observation vector belongs to a given polyhedral set. Nevertheless, the event that can be characterized with a polyhedral set is that (i) a given set of variables is selected and (ii) the signs of the estimated coefficients for these variables take a given sequence of values. We refer to Section 4.6 for further discussion on conditioning also by the observations' order.
Before stating the polyhedral characterization, let us provide some notation. We let \(\mathfrak{S}_{n}\) be the set of permutations of \([|n|]\). Consider observations \(x_{1},\ldots,x_{n}\), ordered as \(x_{\sigma(1)}\geq\cdots\geq x_{\sigma(n)}\) for \(\sigma\in\mathfrak{S}_{n}\). When these observations are clustered into \(K\) clusters of successive values, the clustering is in one-to-one correspondence with the positions of the cluster right-limits \(t_{1},\ldots,t_{K}\), where \(0=t_{0}<t_{1}<\cdots<t_{K}=n\), and where for \(k\in[|K|]\), cluster \(\mathcal{C}_{k}\) is composed of the indices \(\sigma(t_{k-1}+1),\ldots,\sigma(t_{k})\). This corresponds to the following definition.
**Definition 2**.: For \(n\in\mathbb{N}\) and \(K\in[|n|]\), let
\[\mathcal{T}_{K,n}:=\left\{(t_{k})_{0\leq k\leq K};\ 0=t_{0}<t_{1}<\cdots<t_{K}=n \right\}.\]
For any \(\sigma\in\mathfrak{S}_{n}\) and any vector \(\mathbf{t}\in\mathcal{T}_{K,n}\), the clustering associated to \((\mathbf{t},\sigma)\) is defined as \(\mathcal{C}(\mathbf{t},\sigma)=\{\mathcal{C}_{1},\ldots,\mathcal{C}_{K}\}\), where for \(k\in[|K|]\), \(n_{k}=t_{k}-t_{k-1}\) and \(\mathcal{C}_{k}=\{\sigma(t_{k-1}+i)\}_{i\in[|n_{k}|]}\).
In particular, let us consider the clustering \(\mathcal{C}=(\mathcal{C}_{k})_{k\in[|K|]}\) obtained from Definition 1 by solving Problem (5) for a given \(\mathbf{x}\in\mathbb{R}^{n}\). This clustering can be written as \(\mathcal{C}(\mathbf{t},\sigma)\), for any \(\sigma\) such that \(x_{\sigma(1)}\geq\cdots\geq x_{\sigma(n)}\), \(t_{0}=0\) and \(t_{k}=\sum_{j\in[|k|]}|\mathcal{C}_{j}|\) for \(k\in[|K|]\).
**Example 2**.: To illustrate Definition 2 and Lemma 1, let \(\mathbf{x}=(2,6,11,10,7,1,6.5,7)\) be the observed data. A permutation reordering the values of \(\mathbf{x}\) in decreasing order is
\[\sigma:(1,\ldots,n=8)\mapsto(3,4,5,8,7,2,1,6).\]
For the clustering \(\mathcal{C}=(\mathcal{C}_{1},\mathcal{C}_{2},\mathcal{C}_{3})\) with \(\mathcal{C}_{1}=\{11,10\}\), \(\mathcal{C}_{2}=\{7,7,6.5,6\}\) and \(\mathcal{C}_{3}=\{2,1\}\), the associated vector \(\mathbf{t}\) is \(t_{0}=0\), \(t_{1}=2\), \(t_{2}=6\) and \(t_{3}=8\), as shown in Figure 1. Note that the clustering \(\mathcal{C}\) of observations is equivalent to the clustering of indices \(\mathcal{C}_{1}=\{\sigma(1),\sigma(2)\}=\{3,4\}\), \(\mathcal{C}_{2}=\{\sigma(3),\sigma(4),\sigma(5),\sigma(6)\}=\{5,8,7,2\}\) and \(\mathcal{C}_{3}=\{\sigma(7),\sigma(8)\}=\{1,6\}\). The regularization path (see Section 2.4) associated to the convex clustering problem on the observed values \(\mathbf{x}\) is represented in Figure 2. The vertical line at \(x=\lambda\) intersects the regularization path at \(y=\hat{B}_{i}\). The order property between \(x_{i}\) and \(\hat{B}_{i}\) stated in Lemma 1 is observed all along the regularization path. For \(\lambda=0.5\), we find the clustering in three clusters where the \(\hat{B}_{i}\) values take three distinct values \(\hat{b}_{k}\) (\(\hat{b}_{1}=7.5\), \(\hat{b}_{2}=6.625\) and \(\hat{b}_{3}=4.5\)).
Next, we can provide the announced polyhedral characterization of obtaining a given clustering, together with a given order of the observations.
Figure 1: Illustration of Definition 2 for one clustering with \(K=3\) clusters of the observed values \(\mathbf{x}=(2,6,11,10,7,1,6.5,7)\).

Figure 2: Regularization path (see Section 2.4) associated to the convex clustering problem for the observed values \(\mathbf{x}=(2,6,11,10,7,1,6.5,7)\).

**Theorem 2**.: _Let \(\mathbf{t}\) be a fixed vector in \(\mathcal{T}_{K,n}\) with \(K\in[|n|]\), and let \(\sigma\in\mathfrak{S}_{n}\) be a fixed permutation of \([|n|]\). Let \(\mathcal{C}=\mathcal{C}(\mathbf{t},\sigma)\) be the clustering obtained from Definition 2, with cluster cardinalities \(n_{1},\ldots,n_{K}\). Consider a fixed \(\mathbf{x}=(x_{1},\ldots,x_{n})\in\mathbb{R}^{n}\). Let \(\widehat{\mathbf{B}}=\widehat{\mathbf{B}}(\mathbf{x})\) be the solution of Problem (5) for some fixed \(\lambda>0\), with \(\mathbf{X}\) replaced by \(\mathbf{x}\). From Definition 1, \(\widehat{\mathbf{B}}\) yields a clustering. Then the set of conditions_
\[\mathcal{C}(\mathbf{t},\sigma)\text{ is the clustering given by }\widehat{\mathbf{B}}, \tag{6}\] \[x_{\sigma(1)}\geq x_{\sigma(2)}\geq\cdots\geq x_{\sigma(n)} \tag{7}\]
_is equivalent to the set of the three following conditions_
\[\text{for }k\in[|K-1|]:\] \[\frac{1}{n_{k}}\sum_{i=1}^{n_{k}}x_{\sigma(t_{k-1}+i)}-\frac{1}{n _{k+1}}\sum_{i=1}^{n_{k+1}}x_{\sigma(t_{k}+i)}>\lambda(t_{k+1}-t_{k-1}), \tag{8}\] \[\text{for }k\in[|K|]\text{ such that }n_{k}\geq 2,\text{ for }\ell\in[|n_{k}-1|]:\] \[\frac{1}{n_{k}}\sum_{i=1}^{n_{k}}x_{\sigma(t_{k-1}+i)}-\frac{1}{ \ell}\sum_{i=1}^{\ell}x_{\sigma(t_{k-1}+i)}\geq\lambda(\ell-n_{k}),\] (9) \[x_{\sigma(1)}\geq x_{\sigma(2)}\geq\cdots\geq x_{\sigma(n)}. \tag{10}\]
_Finally, when (6) and (7) hold, then for \(i\in[|n|]\), for \(k\in[|K|]\) with \(i\in\mathcal{C}_{k}\), we have_
\[\hat{B}_{i}=\frac{1}{n_{k}}\sum_{i^{\prime}\in\mathcal{C}_{k}}x_{i^{\prime}}+ \lambda\sum_{k^{\prime}=1}^{k-1}n_{k^{\prime}}-\lambda\sum_{k^{\prime}=k+1}^{ K}n_{k^{\prime}}. \tag{11}\]
In (11), note that by convention \(\sum_{k^{\prime}=a}^{b}\cdots=0\) for \(a,b\in\mathbb{Z}\), \(a>b\). We will use this convention in the rest of the paper. Note also that, apart from the polyhedral characterization given by (8) to (10), Theorem 2 also provides the explicit expression of the optimal \(\widehat{\mathbf{B}}\), solution of Problem (5). This expression depends on the optimal clustering, so it cannot be used directly to solve (5) in practice. Nevertheless, Theorem 2 is the basis of a regularization path algorithm provided in Section 2.4.
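For illustration, evaluating (11) on the data and clustering of Example 2 with \(\lambda=0.5\) (so that \(n_{1}=2\), \(n_{2}=4\), \(n_{3}=2\)) recovers the values of \(\hat{b}_{1}\), \(\hat{b}_{2}\), \(\hat{b}_{3}\) given in Example 2:
\[
\hat{b}_{1}=\tfrac{11+10}{2}-0.5\times 6=7.5,\qquad
\hat{b}_{2}=\tfrac{7+7+6.5+6}{4}+0.5\times 2-0.5\times 2=6.625,\qquad
\hat{b}_{3}=\tfrac{2+1}{2}+0.5\times 6=4.5.
\]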
Next, the following lemma provides a formulation of (8) to (10) in Theorem 2 as an explicit polyhedral set. In this lemma and in the rest of the paper, for \(a\in\mathbb{N}\), we let \(\mathbf{0}_{a}\) be the \(a\times 1\) vector composed of zeros.
**Lemma 3**.: _Consider the setting of Theorem 2. Let \(\mathbf{P}_{\sigma}\) be the \(n\times n\) permutation matrix associated to \(\sigma\in\mathfrak{S}_{n}\): \(\mathbf{P}_{\sigma}\mathbf{x}=(\mathbf{x}_{\sigma(1)},\ldots,\mathbf{x}_{\sigma(n)})^{\top}\), for a \(n\times 1\) vector \(\mathbf{x}\). Then, Conditions (8), (9) and (10) can be written as_
\[\{\mathbf{M}(\mathbf{t})\mathbf{P}_{\sigma}\mathbf{x}\leq\lambda\ \mathbf{m}( \mathbf{t})\} \tag{12}\]
_where \(\mathbf{M}(\mathbf{t})\in\mathbb{R}^{2(n-1)\times n}\) and \(\mathbf{m}(\mathbf{t})\in\mathbb{R}^{2(n-1)}\) are given by:_
\[\mathbf{M}(\mathbf{t})=\left(\begin{array}{c}\mathbf{M}_{1}\\ \mathbf{M}_{2}(\mathbf{t})\\ \mathbf{M}_{3}(\mathbf{t})\end{array}\right)\text{ and }\mathbf{m}( \mathbf{t})=\left(\begin{array}{c}\mathbf{m}_{1}\\ \mathbf{m}_{2}(\mathbf{t})\\ \mathbf{m}_{3}(\mathbf{t})\end{array}\right),\]
_with \(\mathbf{M}_{1}\in\mathbb{R}^{(n-1)\times n}\), \(\mathbf{M}_{2}(\mathbf{t})\in\mathbb{R}^{(K-1)\times n}\) and \(\mathbf{M}_{3}(\mathbf{t})\in\mathbb{R}^{(n-K)\times n}\), explicitly expressed in Appendix B (Equations (25), (27) and (29) respectively); \(\mathbf{m}_{1}=\mathbf{0}_{n-1}\), \(\mathbf{m}_{2}(\mathbf{t})\in\mathbb{R}^{K-1}\) and \(\mathbf{m}_{3}(\mathbf{t})\in\mathbb{R}^{n-K}\), explicitly expressed in Appendix B (Equations (26) and (28) respectively). Furthermore, the inequality \(\mathbf{M}_{2}(\mathbf{t})\mathbf{P}_{\sigma}\mathbf{x}\leq\lambda\mathbf{m}_{2}( \mathbf{t})\) is strict in (12)._
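By way of illustration, the inequalities (8)–(10) can be transcribed directly into the form "matrix times sorted observations \(\leq\lambda\times\) right-hand side". The R sketch below (all names ours) performs this transcription; its row ordering and scaling may differ from the explicit matrices of Appendix B, but it describes the same polyhedron.

```
# Assemble (8)-(10) of Theorem 2 as A %*% x_sorted <= lambda * b,
# where x_sorted = P_sigma x and t = (t_0, ..., t_K) is as in Definition 2.
polyhedral_constraints <- function(t) {
  K <- length(t) - 1; n <- t[K + 1]; nk <- diff(t)
  A <- matrix(0, 0, n); b <- numeric(0)
  add_row <- function(row, rhs) { A <<- rbind(A, row); b <<- c(b, rhs) }
  # (10): the sorted observations are non-increasing
  for (i in 1:(n - 1)) {
    row <- numeric(n); row[i] <- -1; row[i + 1] <- 1; add_row(row, 0)
  }
  # (8): gap between consecutive cluster means (strict inequality in Lemma 3)
  if (K >= 2) for (k in 1:(K - 1)) {
    row <- numeric(n)
    row[(t[k] + 1):t[k + 1]] <- -1 / nk[k]
    row[(t[k + 1] + 1):t[k + 2]] <- 1 / nk[k + 1]
    add_row(row, -(t[k + 2] - t[k]))
  }
  # (9): within-cluster conditions, for clusters with at least two elements
  for (k in 1:K) if (nk[k] >= 2) for (l in 1:(nk[k] - 1)) {
    row <- numeric(n)
    row[(t[k] + 1):t[k + 1]] <- -1 / nk[k]
    idx <- (t[k] + 1):(t[k] + l)
    row[idx] <- row[idx] + 1 / l
    add_row(row, nk[k] - l)
  }
  list(A = A, b = b)
}
con <- polyhedral_constraints(t = c(0, 2, 6, 8))  # Example 2: clusters of sizes 2, 4, 2
dim(con$A)  # 14 x 8, i.e. 2(n - 1) rows, as in Lemma 3
```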
### Test procedure and its guarantees
In this section, we construct the test procedure and provide its theoretical guarantees, based on Theorem 2 and Lemma 3. Since the polyhedral characterization has been shown from these two results, the construction and guarantees here are obtained similarly to [14]. We nevertheless provide the full details, so that the exposition is self-contained.
#### 2.3.1 Construction of the test procedure
We want to test
\[\boldsymbol{\eta}^{\top}\boldsymbol{\mu}=0,\]
where \(\boldsymbol{\eta}=\boldsymbol{\eta}(\mathcal{C}(\boldsymbol{X}))\) and \(\mathcal{C}(\boldsymbol{X})\) is obtained from Problem (5) and Definition 1. The test statistic is naturally
\[\boldsymbol{\eta}^{\top}\boldsymbol{X},\]
and we will construct an invariant statistic from it, based on the polyhedral lemma (Lemma 5.1) of [14], that we restate in our setting for convenience. In the next statement, \(\boldsymbol{I}_{a}\) is the identity matrix in dimension \(a\in\mathbb{N}\) and we use the conventions that the minimum over an empty set is \(+\infty\) and the maximum over an empty set is \(-\infty\).
**Proposition 4** (Polyhedral lemma, adapted from [14]).: _Let \(\mathbf{t}\) be a fixed vector in \(\mathcal{T}_{K,n}\) with \(K\in[|n|]\). Let \(\sigma\in\mathfrak{S}_{n}\) be a fixed permutation of \([|n|]\), and \(\mathbf{P}_{\sigma}\) be the \(n\times n\) associated permutation matrix._
_Let \(\boldsymbol{X}\sim\mathcal{N}(\boldsymbol{\mu},\boldsymbol{\Sigma})\) with \(\boldsymbol{\Sigma}\) invertible and let \(\boldsymbol{\eta}\) be a fixed non-zero \(n\times 1\) vector (allowed to depend on \(\mathbf{t}\) and \(\sigma\)). Let \(\boldsymbol{Z}:=\boldsymbol{Z}(\boldsymbol{X}):=[\boldsymbol{I}_{n}- \boldsymbol{c}\boldsymbol{\eta}^{\top}]\boldsymbol{X}\) with \(\boldsymbol{c}=\boldsymbol{\Sigma}\boldsymbol{\eta}(\boldsymbol{\eta}^{\top} \boldsymbol{\Sigma}\boldsymbol{\eta})^{-1}\). Let \(\mathbf{M}:=\mathbf{M}(\mathbf{t})\) and \(\lambda\mathbf{m}:=\lambda\mathbf{m}(\mathbf{t})\) defined in (12). Then, for any fixed \(\lambda>0\), we have the following properties:_
* \(\boldsymbol{Z}\) _is uncorrelated with, and hence independent of,_ \(\boldsymbol{\eta}^{\top}\boldsymbol{X}\)_._
* _The conditioning set can be written as follows_ \[\{\mathbf{MP}_{\sigma}\boldsymbol{X}\leq\lambda\ \mathbf{m}\}=\{\mathcal{V}^{-}( \boldsymbol{Z})\leq\boldsymbol{\eta}^{\top}\boldsymbol{X}\leq\mathcal{V}^{+}( \boldsymbol{Z}),\mathcal{V}^{0}(\boldsymbol{Z})\geq 0\}\] (13) _where_
* \(\mathcal{V}^{-}(\boldsymbol{Z}):=\max\limits_{l:(\mathbf{MP}_{\sigma} \boldsymbol{c})_{l}<0}\frac{\lambda m_{l}-(\mathbf{MP}_{\sigma}\boldsymbol{Z}) _{l}}{(\mathbf{MP}_{\sigma}\boldsymbol{c})_{l}}\)__
* \(\mathcal{V}^{+}(\boldsymbol{Z}):=\min\limits_{l:(\mathbf{MP}_{\sigma} \boldsymbol{c})_{l}>0}\frac{\lambda m_{l}-(\mathbf{MP}_{\sigma}\boldsymbol{Z}) _{l}}{(\mathbf{MP}_{\sigma}\boldsymbol{c})_{l}}\)__
* \(\mathcal{V}^{0}(\boldsymbol{Z}):=\min\limits_{l:(\mathbf{MP}_{\sigma} \boldsymbol{c})_{l}=0}\lambda m_{l}-(\mathbf{MP}_{\sigma}\boldsymbol{Z})_{l}\)_._
_Note that \(\mathcal{V}^{-}(\boldsymbol{Z})\), \(\mathcal{V}^{+}(\boldsymbol{Z})\) and \(\mathcal{V}^{0}(\boldsymbol{Z})\) are independent of \(\boldsymbol{\eta}^{\top}\boldsymbol{X}\). Finally, when the event in (13) has non-zero probability, conditionally to this event, the probability that \(\mathcal{V}^{-}(\boldsymbol{Z})=\mathcal{V}^{+}(\boldsymbol{Z})\) is zero._
From Proposition 4, it is shown in [14] that, for any fixed \(\mathbf{z}_{0}\) with \(\mathcal{V}^{-}(\mathbf{z}_{0})<\mathcal{V}^{+}(\mathbf{z}_{0})\), under the null hypothesis \(\boldsymbol{\eta}^{\top}\boldsymbol{\mu}=0\), conditionally to \(\{\mathbf{MP}_{\sigma}\boldsymbol{X}\leq\lambda\ \mathbf{m},\boldsymbol{Z}=\mathbf{z}_{0}\}\), the following invariant statistic based on the test statistic \(\boldsymbol{\eta}^{\top}\boldsymbol{X}\) fulfills
\[T(\boldsymbol{X},\mathbf{t},\sigma):=F_{0,\boldsymbol{\eta}^{\top}\boldsymbol {\Sigma}\boldsymbol{\eta}}^{[\mathcal{V}^{-}(\mathbf{z}_{0}),\mathcal{V}^{+} (\mathbf{z}_{0})]}(\boldsymbol{\eta}^{\top}\boldsymbol{X})\sim\mathcal{U}[0,1], \tag{14}\]
where \(\mathcal{U}[0,1]\) denotes the uniform distribution and \(F^{[a,b]}_{\nu,\tau^{2}}(.)\) is the cumulative distribution function (cdf) of a Gaussian distribution \(\mathcal{N}(\nu,\tau^{2})\) truncated on the interval \([a,b]\).
The \(p\)-value, corresponding to considering two-sided alternative hypotheses to \(\boldsymbol{\eta}^{\top}\boldsymbol{\mu}=0\), is then
\[\mathrm{pval}(\boldsymbol{x},\mathbf{t},\sigma)=2\min\left[T(\boldsymbol{x}, \mathbf{t},\sigma),1-T(\boldsymbol{x},\mathbf{t},\sigma)\right] \tag{15}\]
for an \(n\times 1\) observation vector \(\boldsymbol{x}\). Note that the two definitions (14) and (15) require \(\mathcal{V}^{-}(\mathbf{z}_{0})<\mathcal{V}^{+}(\mathbf{z}_{0})\), which holds almost surely conditionally to \(\{\mathbf{MP}_{\sigma}\boldsymbol{X}\leq\lambda\mathbf{m}\}\), as stated in Proposition 4.
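For concreteness, a minimal R sketch of the computation of the statistic (14) and of the \(p\)-value (15) is given below. It is not the poclin API: all names are ours, `A` plays the role of \(\mathbf{MP}_{\sigma}\) and `lam_m` the role of \(\lambda\mathbf{m}\).

```
# Truncated Gaussian cdf F^{[a,b]}_{mean, sd^2}
truncated_gauss_cdf <- function(q, mean, sd, a, b) {
  (pnorm(q, mean, sd) - pnorm(a, mean, sd)) /
    (pnorm(b, mean, sd) - pnorm(a, mean, sd))
}
# Conditional two-sided p-value of Section 2.3.1, given the polyhedron A x <= lam_m
poly_pvalue <- function(x, eta, Sigma, A, lam_m) {
  s2 <- drop(t(eta) %*% Sigma %*% eta)      # Var(eta' X)
  cvec <- drop(Sigma %*% eta) / s2          # c = Sigma eta (eta' Sigma eta)^{-1}
  z <- x - cvec * drop(t(eta) %*% x)        # Z = (I - c eta') x
  Ac <- drop(A %*% cvec); Az <- drop(A %*% z)
  ratio <- (lam_m - Az) / Ac
  Vminus <- suppressWarnings(max(ratio[Ac < 0]))  # -Inf if the set is empty
  Vplus  <- suppressWarnings(min(ratio[Ac > 0]))  # +Inf if the set is empty
  Tstat <- truncated_gauss_cdf(drop(t(eta) %*% x), 0, sqrt(s2), Vminus, Vplus)
  2 * min(Tstat, 1 - Tstat)                 # two-sided p-value (15)
}
```

If the constraint matrix was built on the sorted data (as in the sketch after Lemma 3), its columns must first be permuted so that it acts on \(\boldsymbol{x}\) itself, e.g. `sigma <- order(x, decreasing = TRUE); A <- A_sorted; A[, sigma] <- A_sorted`.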
#### 2.3.2 Conditional level
Next, we show that the suggested test is conditionally valid. That is, conditionally to a clustering and a data order, when the null hypothesis (that is fixed by the clustering) holds, the \(p\)-value is uniformly distributed. In particular, the probability of rejection is equal to the prescribed level. Conditional validity naturally yields unconditional validity, as shown in Section 2.3.3. Hence conditional validity is mathematically a stronger property than unconditional validity. A statistical benefit of conditional validity is that the null hypothesis is fixed after conditioning; in particular \(\boldsymbol{\eta}^{\top}\boldsymbol{\mu}\) becomes a fixed target of interest, which is beneficial for interpretability. In the related context of linear models, for instance, the tests obtained from the confidence intervals of [2; 3] are unconditionally valid while the tests provided in [14; 29] are conditionally (and unconditionally) valid. The interpretability benefit we discuss above is also discussed in [14].
**Proposition 5**.: _Let \(\mathbf{t}\) be a fixed vector in \(\mathcal{T}_{K,n}\) with \(K\in[|n|]\). Let \(\sigma\in\mathfrak{S}_{n}\) be a fixed permutation of \([|n|]\), and \(\mathbf{P}_{\sigma}\) be the \(n\times n\) associated permutation matrix. Let \(\mathcal{C}=\mathcal{C}(\mathbf{t},\sigma)\) be the clustering obtained from Definition 2, with cluster cardinalities \(n_{1},\ldots,n_{K}\)._
_Let \(\boldsymbol{X}\sim\mathcal{N}(\boldsymbol{\mu},\boldsymbol{\Sigma})\) with \(\boldsymbol{\Sigma}\) invertible. Consider a fixed \(n\times 1\) non-zero vector \(\boldsymbol{\eta}\in\mathbb{R}^{n}\) (that is only allowed to depend on \((\mathbf{t},\sigma)\)). Assume that_
\[\boldsymbol{\eta}^{\top}\boldsymbol{\mu}=0.\]
_Let \(\widehat{\boldsymbol{B}}=\widehat{\boldsymbol{B}}(\boldsymbol{X})\) from Problem (5) for some fixed \(\lambda>0\). Assume that with non-zero probability, the event_
\[E_{\mathbf{t},\sigma}\quad:=\quad\left\{\mathcal{C}(\mathbf{t},\sigma)\text{ is the clustering given by }\widehat{\boldsymbol{B}},\ \ X_{\sigma(1)}\geq X_{\sigma(2)}\geq\cdots\geq X_{\sigma(n)}\right\}\]
_holds. Then, conditionally to \(E_{\mathbf{t},\sigma}\), \(\mathrm{pval}(\boldsymbol{X},\mathbf{t},\sigma)\) is uniformly distributed on \([0,1]\):_
\[\mathbb{P}_{\boldsymbol{\eta}^{\top}\boldsymbol{\mu}=0}\left(\mathrm{pval}( \boldsymbol{X},\mathbf{t},\sigma)\leq t\big{|}E_{\mathbf{t},\sigma}\right)=t \qquad\forall t\in[0,1].\]
#### 2.3.3 Unconditional level
We now show that \(\mathrm{pval}(\boldsymbol{X},\mathbf{t},\sigma)\) is unconditionally uniformly distributed, which we call unconditional validity. Here, "unconditionally" means that the clustering is not fixed, but it is still necessary to condition on the fact that the null hypothesis \(\boldsymbol{\eta}^{\top}\boldsymbol{\mu}=0\) is well-defined and true. Regarding well-definedness, the vector \(\boldsymbol{\eta}=\boldsymbol{\eta}(\mathcal{C}(\boldsymbol{X}))\) may indeed not be well-defined for all clusterings \(\mathcal{C}(\boldsymbol{X})\). In the next proposition, we thus introduce the set \(\mathcal{E}\) of clusterings,
indexed by an ordering \(\sigma\) and a sequence of right-limits \(\mathbf{t}\) as in Definition 2, that make \(\boldsymbol{\eta}\) well-defined.
For instance, in the case of the two-group test of Example 1, \(\boldsymbol{\eta}\) can be defined similarly as in (4), with
\[\boldsymbol{\eta}^{\top}\boldsymbol{\mu}=\frac{1}{|\mathcal{C}_{k_{1}}( \boldsymbol{X})|}\sum_{i\in\mathcal{C}_{k_{1}}(\boldsymbol{X})}\mu_{i}-\frac{ 1}{|\mathcal{C}_{k_{2}}(\boldsymbol{X})|}\sum_{i\in\mathcal{C}_{k_{2}}( \boldsymbol{X})}\mu_{i}. \tag{16}\]
In this case, \(\mathcal{E}\) is the set of clusterings for which the number of clusters is larger than or equal to \(\max(k_{1},k_{2})\), enabling \(\boldsymbol{\eta}\) to be well-defined. When \(k_{1}=1\) and \(k_{2}=2\), this definition is possible for all clusterings, except the one with only one cluster. In this case, \(\mathcal{E}\) should thus be defined as restricting \(\mathbf{t}\) to have at least 3 elements \(0=t_{0}<t_{1}<t_{2}=n\), that is to correspond to a clustering with at least two clusters.
Then, Proposition 6 shows that conditionally to \(\mathcal{E}\) and to \(\boldsymbol{\eta}^{\top}\boldsymbol{\mu}=0\), the \(p\)-value is uniformly distributed, which we call unconditional validity, in the sense that we do not condition on a single clustering, as commented above.
**Proposition 6**.: _Let \(\mathcal{E}\) be a subset of the set of all possible values of \((\mathbf{t},\sigma)\) in Proposition 5. Consider a deterministic function \(\boldsymbol{\eta}:\mathcal{E}\rightarrow\mathbb{R}^{n}\), outputting a non-zero column vector. Assume that \(\boldsymbol{\Sigma}\) is invertible. Let \(\widehat{\boldsymbol{B}}\) be as in (5). Let \(S=S(\boldsymbol{X})\) be a random permutation obtained by reordering \(\boldsymbol{X}\) as: \(X_{S(1)}\geq\cdots\geq X_{S(n)}\) (uniquely defined with probability one). Let \(\mathcal{C}(\boldsymbol{X})=\mathcal{C}\) be the random clustering given by \(\widehat{\boldsymbol{B}}\) (Definition 1), of random dimension \(K(\boldsymbol{X})=K\). Let \(\mathbf{T}(\boldsymbol{X})=\mathbf{T}\in\mathcal{T}_{K,n}\) be the random vector, such that \(\mathbf{T}\) and \(S\) yield \(\mathcal{C}\) as in Definition 2._
_Assume that_
\[\mathbb{P}\left((\mathbf{T},S)\in\mathcal{E},\boldsymbol{\eta}(\mathbf{T},S) ^{\top}\boldsymbol{\mu}=0\right)>0.\]
_Then, conditionally to the above event, \(\mathrm{pval}(\boldsymbol{X},\mathbf{T},S)\) is uniformly distributed on \([0,1]\):_
\[\mathbb{P}\left(\mathrm{pval}(\boldsymbol{X},\mathbf{T},S)\leq t\big{|}( \mathbf{T},S)\in\mathcal{E},\boldsymbol{\eta}(\mathbf{T},S)^{\top}\boldsymbol {\mu}=0\right)=t\qquad\forall t\in[0,1].\]
### Regularization path
At first sight, (5) is a convex optimization problem, whose (unique) minimizer does not have any explicit expression, and thus (5) requires numerical optimization to approximate its solution. Furthermore, this numerical optimization would have to be repeated for different values of \(\lambda\). However, thanks to the polyhedral characterization of Theorem 2, we can provide a regularization path for solving (5). This regularization path is an algorithm, only performing elementary operations, that provides the entire sequence of exact solutions to (5), for all values of \(\lambda\). This algorithm is presented as Algorithm 1. Then, Theorem 7 shows that this algorithm is well-defined and indeed provides the set of solutions to Problem (5).
**Theorem 7**.: _Algorithm 1 stops at a final value of \(r\) that we write \(r_{\max}\), such that \(r_{\max}\leq n-1\) and we have \(K^{(0)}>\cdots>K^{(r_{\max})}=1\). Let \(\lambda^{(r_{\max}+1)}=+\infty\) by convention. For \(r\in\{0,\ldots,r_{\max}\}\) and \(\lambda\in[\lambda^{(r)},\lambda^{(r+1)})\), \((\hat{B}_{i}^{(r)}(\lambda))_{i\in[|n|]}\) minimizes Problem (5)\({}^{1}\)._
Footnote 1: Even if Algorithm 1 stops at \(r=r_{\max}\), we can still define \((\hat{B}_{i}^{(r_{\max})}(\lambda))_{i\in[|n|]}\) there, with (17), with the convention that \(\sum_{k^{\prime}=1}^{0}n_{k^{\prime}}^{(r_{\max})}=0\) and \(\sum_{k^{\prime}=2}^{1}n_{k^{\prime}}^{(r_{\max})}=0\). This vector has all its components equal to \(\sum_{i=1}^{n}x_{i}/n\).
```
Input: \(\mathbf{x}=(x_{1},\ldots,x_{n})\in\mathbb{R}^{n}\)

Initialization:
  \(r\gets 0\); \(\lambda^{(0)}\gets 0\);
  \(\tilde{x}_{1}>\cdots>\tilde{x}_{K^{(0)}}\): the \(K^{(0)}\) distinct values in \(\mathbf{x}\);
  \(\mathcal{C}^{(0)}=(\mathcal{C}^{(0)}_{1},\ldots,\mathcal{C}^{(0)}_{K^{(0)}})\leftarrow\) clustering of \([|n|]\) where \(\mathcal{C}^{(0)}_{k}=\{i\in[|n|]:x_{i}=\tilde{x}_{k}\}\);
  \(n^{(0)}_{k}\leftarrow|\mathcal{C}^{(0)}_{k}|\) for \(k\in[|K^{(0)}|]\);
  \(\hat{b}^{(0)}_{k}(\lambda^{(0)})\leftarrow\tilde{x}_{k}\) for \(k\in[|K^{(0)}|]\);
  \(\hat{B}^{(0)}_{i}(\lambda^{(0)})\leftarrow\hat{b}^{(0)}_{k}(\lambda^{(0)})\) if \(i\in\mathcal{C}^{(0)}_{k}\) (\(k\) is unique), for \(i\in[|n|]\);

while \(K^{(r)}\geq 2\) do
  For all \(\lambda\geq\lambda^{(r)}\) we define
    \(\hat{b}^{(r)}_{k}(\lambda):=\hat{b}^{(r)}_{k}(\lambda^{(r)})+\left(\lambda-\lambda^{(r)}\right)\left(\sum_{k^{\prime}=1}^{k-1}n^{(r)}_{k^{\prime}}-\sum_{k^{\prime}=k+1}^{K^{(r)}}n^{(r)}_{k^{\prime}}\right)\ \forall k\in[|K^{(r)}|]\);   (17)
    \(\hat{B}^{(r)}_{i}(\lambda):=\hat{b}^{(r)}_{k}(\lambda)\) if \(i\in\mathcal{C}^{(r)}_{k}\) (\(k\) is unique), for \(i\in[|n|]\);
  \(\lambda^{(r+1)}\leftarrow\lambda^{(r)}+\min_{k\in[|K^{(r)}-1|]}\frac{\hat{b}^{(r)}_{k}(\lambda^{(r)})-\hat{b}^{(r)}_{k+1}(\lambda^{(r)})}{n^{(r)}_{k}+n^{(r)}_{k+1}}\);   (18)
  \((\hat{b}^{(r+1)}_{k}(\lambda^{(r+1)}))_{k\in[|K^{(r+1)}|]}\leftarrow\) distinct values of \((\hat{b}^{(r)}_{k}(\lambda^{(r+1)}))_{k\in[|K^{(r)}|]}\), sorted decreasingly;
  \(\mathcal{C}^{(r+1)}\leftarrow\) clustering of \([|n|]\) obtained from \(\left(\hat{B}^{(r)}_{i}(\lambda^{(r+1)})\right)_{i\in[|n|]}\) by Definition 1;
  \(n^{(r+1)}_{k}\leftarrow|\mathcal{C}^{(r+1)}_{k}|\) for \(k\in[|K^{(r+1)}|]\);
  \(r\gets r+1\);
end while
```
**Algorithm 1** Regularization path for one-dimensional convex clustering
By way of illustration, Algorithm 1 was applied to the observations of Example 2, and the resulting regularization path is shown in Figure 2. In Algorithm 1, since \(r\mapsto K^{(r)}\) is strictly decreasing during the execution, there are at most \(n-1\) induction steps. A straightforward implementation of (18) can lead to a time complexity of order \(\mathcal{O}(K^{(r)})\) for each step, and thus a total time complexity of order \(\mathcal{O}(n^{2})\) in the worst case. The space complexity is linear (\(\mathcal{O}(n)\)). Indeed, in order to recover the entire regularization path, it is sufficient to record at each step \(r\) the labels of the clusters merged at this step. We have implemented this algorithm in the open source R package poclin (which stands for "post convex clustering inference"), which is available from [https://plmlab.math.cnrs.fr/pneuvial/poclin](https://plmlab.math.cnrs.fr/pneuvial/poclin). The empirical time complexity of our implementation is substantially below \(\mathcal{O}(n^{2})\) for \(n\leq 10^{5}\), as illustrated in Appendix D. In this appendix, we also explain that the time complexity of Algorithm 1 could be further decreased to \(\mathcal{O}(n\log(n))\) without compromising the linear space complexity by storing merge candidates more efficiently using a min heap.
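To complement the discussion above, here is a minimal, self-contained R sketch of Algorithm 1 (an independent reimplementation for exposition, not the poclin package; it favours readability over the \(\mathcal{O}(n\log(n))\) min-heap variant mentioned above, and all names are ours).

```
# Regularization path of the one-dimensional convex clustering problem (5).
# Returns, at each breakpoint lambda^(r), the centroids, cluster sizes and
# the cluster membership of each observation.
convex_clustering_path_1d <- function(x) {
  b <- sort(unique(x), decreasing = TRUE)     # distinct values, decreasing
  member <- match(x, b)                       # cluster index of each observation
  nk <- tabulate(member, nbins = length(b))   # cluster sizes n_k^(0)
  lambda <- 0
  path <- list(list(lambda = 0, centroids = b, sizes = nk, member = member))
  while (length(b) >= 2) {
    K <- length(b)
    # slopes of the centroids in lambda, see (17)
    slope <- cumsum(c(0, nk[-K])) - rev(cumsum(c(0, rev(nk)[-K])))
    # lambda increments at which adjacent centroids meet, see (18)
    step <- min((b[-K] - b[-1]) / (nk[-K] + nk[-1]))
    lambda <- lambda + step
    b_new <- b + step * slope                 # centroids at the merge point
    # merge clusters whose centroids have (numerically) met
    grp <- cumsum(c(TRUE, diff(b_new) < -1e-12))
    b <- as.numeric(tapply(b_new * nk, grp, sum) / tapply(nk, grp, sum))
    member <- grp[member]
    nk <- as.integer(tapply(nk, grp, sum))
    path[[length(path) + 1]] <-
      list(lambda = lambda, centroids = b, sizes = nk, member = member)
  }
  path
}
path <- convex_clustering_path_1d(c(2, 6, 11, 10, 7, 1, 6.5, 7))
path[[4]]$lambda     # 0.5
path[[4]]$centroids  # 7.5 6.625 4.5, the three clusters of Example 2
```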
**Remark 1** (Final value of the regularization parameter).: _As a consequence of Theorem 2 (see in particular (9)), the final value of \(\lambda\) in Algorithm 1 is obtained analytically, writing \(x_{(1)}\geq\cdots\geq x_{(n)}\) for the decreasingly ordered observations, as:_
\[\lambda^{(r_{\max})}=\max_{i\in[|n-1|]}\frac{\frac{1}{i}\sum_{i^{\prime}=1}^{i} x_{(i^{\prime})}-\frac{1}{n}\sum_{i^{\prime}=1}^{n}x_{(i^{\prime})}}{n-i}. \tag{19}\]
_It corresponds to the smallest value of \(\lambda\) for which the convex clustering yields exactly one cluster. The range of values for which there are two or more clusters has also been studied by [26] for convex clustering procedures that include Problem (5). We note that in the specific case of Problem (5), \(\lambda^{(r_{\max})}\) can be computed using (19) in linear time after an initial sorting of the input vector. Our numerical experiments below make use of (19) to choose \(\lambda\) in a non data-driven way, see also Appendix E.1._
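A one-line R transcription of (19) (a sketch, with names ours):

```
# Smallest lambda for which the convex clustering yields a single cluster,
# computed in linear time after sorting the input, see (19).
lambda_max_1d <- function(x) {
  n <- length(x)
  xs <- sort(x, decreasing = TRUE)   # x_(1) >= ... >= x_(n)
  i <- seq_len(n - 1)
  max((cumsum(xs)[i] / i - mean(xs)) / (n - i))
}
lambda_max_1d(c(2, 6, 11, 10, 7, 1, 6.5, 7))  # 0.8020833, last breakpoint of the path above
```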
Relation to other existing regularization path algorithms. Algorithm 1 has similarities with the following two more general regularization path algorithms, that can be applied to Problem (5). First, for the generalized lasso, a penalization term of the form \(\lambda\|\mathbf{D}\mathbf{B}\|_{1}\) is studied in [30], for a general matrix \(\mathbf{D}\). It is then simple to find a \(n(n-1)/2\times n\) (sparse) matrix \(\mathbf{D}\) leading to the penalization term \(\lambda\sum_{i,i^{\prime}=1,i<i^{\prime}}^{n}|B_{i^{\prime}}-B_{i}|\) of (5). The benchmarks that we have conducted in Appendix D show that the procedure based on the generalized lasso has a very large memory footprint and is very slow (more than 10 seconds for \(n=50\)), as it relies on the matrix \(\mathbf{D}\), whose total number of entries is \(\mathcal{O}(n^{3})\). Second, the fused lasso signal approximator (FLSA) suggested by [11] can handle a penalization term of the form \(\lambda\sum_{i,i^{\prime}=1,(i,i^{\prime})\in E}^{n}|B_{i^{\prime}}-B_{i}|\), where \(E\) is a set of pairs of indices. Similarly as before, taking \(E\) as the complete set of pairs recovers the penalization term of (5). The theoretical time complexity of the regularization path for FLSA has been shown in [10] to be \(\mathcal{O}(n\log(n))\) in this case. The benchmarks that we have conducted in Appendix D show that the procedure based on FLSA is much more efficient than the one based on the generalized lasso. Nevertheless, our implementation of Algorithm 1 remains preferable, as it can address larger dataset sizes (see Figure 6).
On top of these numerical performances, the benefit of Algorithm 1, relative to these two general procedures, is that its description and proof of validity (Theorem 7) are self-contained and specific to the one-dimensional convex clustering problem (5). Furthermore, the proof of validity exploits the specific analysis of (5) given by Theorem 2.
### Numerical experiments
In order to illustrate the behaviour of our post-clustering testing procedure, we have performed the following numerical experiments in the one-dimensional framework. The code to reproduce these numerical experiments and the associated figures is available from [https://plmlab.math.cnrs.fr/pneuvial/poclin-paper](https://plmlab.math.cnrs.fr/pneuvial/poclin-paper).
We consider a Gaussian sample \(\mathbf{X}=(X_{1},\ldots,X_{n})\) with mean vector \(\mathbf{\mu}=(\nu\mathbf{1}_{n/2}^{\top},\mathbf{0}_{n/2}^{\top})^{\top}\) and known covariance matrix \(\mathbf{\Sigma}=\mathbf{I}_{n}\). Here and in the rest of the paper, for \(a\in\mathbb{N}\), we let \(\mathbf{1}_{a}\) be the \(a\times 1\) vector composed of ones.
We set \(n=1000\) and \(\lambda=0.0025\). This value of \(\lambda\) has been chosen to ensure that with high probability, the convex clustering finds at least two clusters under the null hypothesis. The procedure that we have used in our numerical experiments to achieve this property relies on (19) and is described in Appendix E.1. Let \(\mathcal{C}=(\mathcal{C}_{k})_{k\in[|K|]}\) be the result of the one-dimensional convex clustering obtained from Algorithm 1 with \(\lambda=0.0025\). If \(K>2\), we merge adjacent clusters to obtain a 2-class clustering of the form \(\overline{\mathcal{C}}_{1}:=\mathcal{C}_{1}\cup\cdots\cup\mathcal{C}_{q}, \overline{\mathcal{C}}_{2}:=\mathcal{C}_{q+1}\cup\cdots\cup\mathcal{C}_{K}\), where \(q\) is chosen so that the sizes of \(\overline{\mathcal{C}}_{1}\) and \(\overline{\mathcal{C}}_{2}\) are as balanced as possible. We then apply the test procedure introduced in Section 2.3.1 to compare the means of \(\overline{\mathcal{C}}_{1}\) and \(\overline{\mathcal{C}}_{2}\), as in Example 1. Note that this yields \(\eta_{i}=\mathds{1}_{i\in\overline{\mathcal{C}}_{1}}/|\overline{\mathcal{C}}_{1}|-\mathds{1}_{i\in\overline{\mathcal{C}}_{2}}/|\overline{\mathcal{C}}_{2}|\) for \(i\in[|n|]\), which is indeed a deterministic function of \(\mathcal{C}_{1},\ldots,\mathcal{C}_{K}\) and thus in the scope of the guarantees obtained in Section 2.3. For each signal value \(\nu\in\{0,1,2,3,4,5\}\), we retain \(N=1000\) numerical experiments for which \(K\geq 2\). Note that the event \(K\geq 2\) corresponds to the set \(\mathcal{E}\) in Proposition 6.
Figure 3 (left) gives the empirical density of \(\mathbf{\eta}^{\top}\mathbf{\mu}\), the difference between the true means of the estimated clusters, for each value of \(\nu\) considered. This plot quantifies the performance of the clustering step: for a perfect clustering, we would have \(\mathbf{\eta}^{\top}\mathbf{\mu}=\nu\), corresponding to the diagonal line. As expected, the larger the signal (\(\nu\) increases), the easier the clustering step.
Figure 3 (right) shows the empirical \(p\)-value distribution of the proposed test (see (15)). For \(\nu=0\) (no signal), the curve illustrates the uniformity of the distribution of the \(p\)-values: it shows that the level of the test is appropriately controlled. Another simulation to control the level of the test is available in Appendix E.2. As expected, the power of the test is an increasing function of the distance between the null and the alternative hypotheses (as encoded by the parameter \(\nu\)). Our conditional test is able to detect the signal only for \(\nu>1\).
## 3 The \(p\)-dimensional case
### Aggregating one-dimensional clusterings
Consider the \(p\)-dimensional setting of Section 1. For \(j\in[|p|]\), consider the one-dimensional clustering \(\mathcal{C}^{(j)}=\mathcal{C}^{(j)}(\mathbf{Y}_{\cdot j})=(\mathcal{C}_{1}^{(j)}(\mathbf{Y}_{\cdot j}),\ldots,\mathcal{C}_{K^{(j)}}^{(j)}(\mathbf{Y}_{\cdot j}))\) obtained by computing \(\hat{\mathbf{B}}_{\cdot j}\) from (2) and applying Definition 1. We consider a \(p\)-dimensional clustering \(\mathcal{C}\) obtained by aggregation of the one-dimensional clusterings \(\mathcal{C}^{(1)},\ldots,\mathcal{C}^{(p)}\) as follows.
For \(i\in[|n|]\) and \(j\in[|p|]\), let \(\tilde{Y}_{ij}\) be the class index of \(Y_{ij}\) in the clustering \(\mathcal{C}^{(j)}\), rescaled from \(\{1,2,\ldots,K^{(j)}\}\) to \(\{0,1/(K^{(j)}-1),\ldots,1\}\). We obtain a \(p\)-dimensional clustering \(\mathcal{C}\) by applying a clustering procedure to the rows of the \(n\times p\) matrix \(\tilde{\mathbf{Y}}=(\tilde{Y}_{ij})\), for instance a hierarchical clustering [17] with the Euclidean distance. We are then in a position to test a hypothesis \(\mathbf{\kappa}^{\top}\mathbf{\beta}=0\), where \(\mathbf{\kappa}=\mathbf{\kappa}(\mathcal{C})\), as motivated in Section 1. In particular, we can test the signal difference for the column \(j_{0}\) between two clusters \(\mathcal{C}_{k_{1}}\) and \(\mathcal{C}_{k_{2}}\) in the multi-dimensional clustering \(\mathcal{C}\), as in (4).
**Remark 2**.: _Above, we focus on a specific aggregation using the hierarchical clustering with the Euclidean distance for simplicity. However, we can construct more general \(p\)-dimensional clusterings \(\mathcal{C}\) by more general aggregations of \(\mathcal{C}^{(1)},\ldots,\mathcal{C}^{(p)}\). Indeed, our statistical framework (see Section 3.2) encompasses any case where \(\boldsymbol{\kappa}=\boldsymbol{\kappa}(\mathcal{C})\), as long as \(\mathcal{C}\) is a function of the one-dimensional clusterings and orderings. In particular, one could also consider the hierarchical clustering with the Hamming distance, or the "unanimity" clustering (\(i\) and \(i^{\prime}\) are in the same cluster of \(\mathcal{C}\) if and only if they are in the same cluster for each \(\mathcal{C}^{(j)}\)). This latter clustering is actually the one provided by Problem (1). For more background on clustering aggregation, we refer for instance to [7, 21, 32] and references therein._
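As an illustration, here is a minimal R sketch of the hierarchical-clustering aggregation described before Remark 2 (all names ours; the linkage is not specified in the text, so the default complete linkage of `hclust` is used here as an arbitrary choice).

```
# cl_list: list of p integer vectors of length n; cl_list[[j]][i] is the
# class index of observation i in the one-dimensional clustering C^(j).
aggregate_clusterings <- function(cl_list, K) {
  Ytilde <- sapply(cl_list, function(cl) {
    Kj <- max(cl)
    if (Kj == 1) rep(0, length(cl)) else (cl - 1) / (Kj - 1)  # rescale to [0, 1]
  })
  hc <- hclust(dist(Ytilde))   # Euclidean distance between the rows of Ytilde
  cutree(hc, k = K)            # p-dimensional clustering with K clusters
}
```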
### Test procedure and its guarantees
#### 3.2.1 Construction of the test procedure
The test procedure for the hypothesis \(\boldsymbol{\kappa}^{\top}\boldsymbol{\beta}=0\) is constructed similarly as in Section 2.3.1. We consider \(p\) permutations \(\sigma^{(1)},\ldots,\sigma^{(p)}\) that provide the orderings of the columns of the \(n\times p\) observation matrix \(\boldsymbol{Y}\). As in Definition 2, we identify the \(p\) clusterings \(\mathcal{C}^{(1)},\ldots,\mathcal{C}^{(p)}\) by their numbers of classes \(K^{(1)},\ldots,K^{(p)}\in[|n|]\) and by the right-limit sequences \(\mathbf{t}^{(j)}\in\mathcal{T}_{K^{(j)},n}\) for \(j\in[|p|]\).
For \(j\in[|p|]\), we consider the matrix \(\mathbf{M}(\mathbf{t}^{(j)})\mathbf{P}_{\sigma^{(j)}}\) of size \(2(n-1)\times n\) and the vector \(\lambda\mathbf{m}(\mathbf{t}^{(j)})\) of size \(2(n-1)\), defined in Lemma 3. Recall from Section 2 that, if only the variable \(j\) and its clustering \(\mathcal{C}^{(j)}\) and order \(\sigma^{(j)}\) were considered, then the conditioning event would be \(\{\mathbf{M}(\mathbf{t}^{(j)})\mathbf{P}_{\sigma^{(j)}}\boldsymbol{Y}_{\cdot j}\leq\lambda\mathbf{m}(\mathbf{t}^{(j)})\}\).
Figure 3: Left: empirical density of \(\boldsymbol{\eta}^{\top}\boldsymbol{\mu}\) for each \(\nu\). Right: empirical cumulative distribution functions of the \(p\)-value of the test of equality between the means of two clusters.

We then make explicit the conditioning constraints in dimension \(p\), corresponding to all the clusterings \(\mathcal{C}^{(1)},\ldots,\mathcal{C}^{(p)}\) and orders \(\sigma^{(1)},\ldots,\sigma^{(p)}\). We define the matrix \(\boldsymbol{\mathscr{M}}\) of size \(2(n-1)p\times np\) in the following block-wise fashion. There are \(p^{2}\) rectangular blocks (corresponding to dividing the rows into \(p\) groups and the columns into \(p\) groups). The block indexed by row-group \(j\) and column-group \(j^{\prime}\) has size \(2(n-1)\times n\). It is zero if \(j\neq j^{\prime}\) and it is equal to \(\mathbf{M}(\mathbf{t}^{(j)})\) if \(j=j^{\prime}\). Define also \(\mathbf{D}_{\sigma}\) as the \(np\times np\) block diagonal matrix with \(p\) diagonal blocks and block \(j\) equal to \(\mathbf{P}_{\sigma^{(j)}}\), for \(j\in[|p|]\). With these definitions, we have
\[\boldsymbol{\mathscr{M}}\mathbf{D}_{\sigma}\text{vec}(\boldsymbol{Y})=\begin{pmatrix} \mathbf{M}(\mathbf{t}^{(1)})\mathbf{P}_{\sigma^{(1)}}\boldsymbol{Y}_{.1}\\ \vdots\\ \mathbf{M}(\mathbf{t}^{(p)})\mathbf{P}_{\sigma^{(p)}}\boldsymbol{Y}_{.p} \end{pmatrix}.\]
We let \(\lambda\boldsymbol{m}\) be the vector obtained by stacking the column vectors \(\lambda\mathbf{m}(\mathbf{t}^{(j)})\), \(j\in[|p|]\), one above the other. The conditioning constraints in dimension \(p\) are then \(\{\boldsymbol{\mathscr{M}}\mathbf{D}_{\sigma}\text{vec}(\boldsymbol{Y})\leq \lambda\boldsymbol{m}\}\).
Consider a column vector \(\boldsymbol{\kappa}\) of size \(np\), that is allowed to depend on \((\mathbf{t}^{(j)},\sigma^{(j)}),j\in[|p|]\). This includes the setting \(\boldsymbol{\kappa}=\boldsymbol{\kappa}(\mathcal{C}^{(1)},\ldots,\mathcal{C} ^{(p)})\) of Section 3.1, with the additional mathematical flexibility that \(\boldsymbol{\kappa}\) is allowed to depend on the orderings of the columns, besides their clusterings.
Recall that \(\boldsymbol{\Gamma}\) is the \(np\times np\) covariance matrix of \(\text{vec}(\boldsymbol{Y})\). Note that in the definition of \(\text{pval}(\boldsymbol{x},\mathbf{t},\sigma)\) in Section 2.3.1 (one-dimensional case), the values of \(\boldsymbol{x}\), \(\boldsymbol{\Sigma}\), \(\boldsymbol{\eta}\), \(\mathbf{MP}_{\sigma}\) and \(\lambda\mathbf{m}\) are sufficient to determine the invariant statistic \(T(\boldsymbol{X},\mathbf{t},\sigma)\) in (14) and the \(p\)-value \(\text{pval}(\boldsymbol{x},\mathbf{t},\sigma)\) in (15). Thus we can define the test statistic \(\boldsymbol{\kappa}^{\top}\text{vec}(\boldsymbol{Y})\), then the invariant statistic \(T(\boldsymbol{Y})=T(\boldsymbol{Y},\mathbf{t}^{(1)},\ldots,\mathbf{t}^{(p)}, \sigma^{(1)},\ldots,\sigma^{(p)})\) in the same way as \(T(\boldsymbol{X},\mathbf{t},\sigma)\) in (14) and consequently the \(p\)-value \(\text{pval}(\mathbf{y})\), for a \(n\times p\) realization \(\mathbf{y}\) of \(\boldsymbol{Y}\), in the same way as \(\text{pval}(\boldsymbol{x},\mathbf{t},\sigma)\) in (15). The explicit correspondence between the notation of the one-dimensional case and the present notation is given in Table 1. The next section provides additional explanations on the computation of the invariant statistic \(T(\boldsymbol{Y})\), in the special case of independent variables, for the sake of exposition.
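For concreteness, the block structure above can be assembled as in the following hedged R sketch (names ours; `Matrix::bdiag` builds the block-diagonal products).

```
library(Matrix)
# M_list: list of the p matrices M(t^(j)); P_list: list of the p permutation
# matrices P_sigma^(j); m_list: list of the p vectors m(t^(j)).
build_p_dim_constraint <- function(M_list, P_list, m_list, lambda) {
  A <- bdiag(Map(`%*%`, M_list, P_list))  # block diagonal of M(t^(j)) %*% P_sigma^(j)
  rhs <- lambda * unlist(m_list)          # stacked lambda * m(t^(j))
  list(A = A, rhs = rhs)                  # conditioning event: A %*% vec(Y) <= rhs
}
```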
#### 3.2.2 A detailed example: testing the signal difference along a variable \(j_{0}\) with independent variables
Consider testing the signal difference for the column \(j_{0}\) between two clusters \(\mathcal{C}_{k_{1}}\) and \(\mathcal{C}_{k_{2}}\) in the multi-dimensional clustering \(\mathcal{C}\), as in Example 1. It is instructive to make the construction of the invariant statistic explicit in the special case of the matrix normal distribution (see Section 1) where \(\boldsymbol{\Delta}\) is diagonal, that is, the \(p\) observation vectors of dimension \(n\) corresponding to the \(p\) variables are independent. For the sake of simplicity, let us even consider that \(\boldsymbol{\Delta}=\boldsymbol{I}_{p}\).
Observe first that the test statistic satisfies \(\boldsymbol{\kappa}^{\top}\text{vec}(\boldsymbol{Y})=\boldsymbol{\eta}^{\top} \boldsymbol{Y}_{.j_{0}}\), where \(\eta_{i}=\mathds{1}_{i\in\mathcal{C}_{k_{1}}}/|\mathcal{C}_{k_{1}}|-\mathds{1 }_{i\in\mathcal{C}_{k_{2}}}/|\mathcal{C}_{k_{2}}|\). That is, the test statistic is constructed as it would be in the one-dimensional case (Section 2.3.1), except that the one-dimensional clustering \(\mathcal{C}^{(j_{0})}\) is replaced by the aggregated one \(\mathcal{C}\). The variance of the test statistic (unconditional to the clusterings and orders of observations) is thus \(\boldsymbol{\eta}^{\top}\boldsymbol{\Sigma}\boldsymbol{\eta}\) and is as in the one-dimensional case (up to the distinction between \(\mathcal{C}^{(j_{0})}\) and \(\mathcal{C}\)). Then, the next proposition specifies the computation of the invariant statistic.
| \(p=1\) | \(\boldsymbol{x}\) | \(\boldsymbol{X}\) | \(\boldsymbol{\Sigma}\) | \(\boldsymbol{\eta}\) | \(\mathbf{P}_{\sigma}\) | \(\mathbf{M}\) | \(\lambda\mathbf{m}\) | \(\boldsymbol{Z}\) | \(\boldsymbol{c}\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| size | \(n\) | \(n\) | \(n\times n\) | \(n\) | \(n\times n\) | \(2(n-1)\times n\) | \(2(n-1)\) | \(n\) | \(n\) |
| \(p>1\) | \(\text{vec}(\boldsymbol{y})\) | \(\text{vec}(\boldsymbol{Y})\) | \(\boldsymbol{\Gamma}\) | \(\boldsymbol{\kappa}\) | \(\mathbf{D}_{\sigma}\) | \(\boldsymbol{\mathscr{M}}\) | \(\lambda\boldsymbol{m}\) | \(\text{vec}(\overline{\boldsymbol{Z}})\) | \(\text{vec}(\overline{\boldsymbol{c}})\) |
| size | \(np\) | \(np\) | \(np\times np\) | \(np\) | \(np\times np\) | \(2(n-1)p\times np\) | \(2(n-1)p\) | \(np\) | \(np\) |

Table 1: Correspondence between the notation of Section 2.3.1 (dimension one) and the notation of Sections 3.2.1 and 3.2.2 (dimension \(p\)).
**Proposition 8**.: _In the context of Section 3.2.2, computing the invariant statistic as described in Section 3.2.1 is equivalent to proceed as described in Section 2.3.1 (one-dimensional case), with \(\boldsymbol{\eta}\) defined by \(\eta_{i}=\mathds{1}_{i\in\mathcal{C}_{k_{1}}}/|\mathcal{C}_{k_{1}}|-\mathds{1} _{i\in\mathcal{C}_{k_{2}}}/|\mathcal{C}_{k_{2}}|\) for \(i\in[|n|]\), with \(\boldsymbol{X}\) replaced by \(\boldsymbol{Y}_{.j_{0}}\) and with the conditioning set \(\{\mathbf{MP}_{\sigma}\boldsymbol{X}\leq\lambda\ \mathbf{m}\}\) replaced by \(\big{\{}\mathbf{M}(\mathbf{t}^{(j_{0})})\mathbf{P}_{\sigma^{(j_{0})}} \boldsymbol{Y}_{.j_{0}}\leq\lambda\mathbf{m}(\mathbf{t}^{(j_{0})})\big{\}}\)._
In Proposition 8, the observations corresponding to the variables \(j\neq j_{0}\), for which the average signal difference is not tested, have an impact on the clusterings \(\mathcal{C}^{(j)}\), \(j\neq j_{0}\), and thus have an impact on the multi-dimensional clustering \(\mathcal{C}\) and thus on \(\boldsymbol{\eta}\). Besides \(\boldsymbol{\eta}\), these observations have no other influence on the construction of the invariant statistic, which is computed only from \(\boldsymbol{Y}_{.j_{0}}\) and its conditioning set \(\{\mathbf{M}(\mathbf{t}^{(j_{0})})\mathbf{P}_{\sigma^{(j_{0})}}\boldsymbol{Y} _{.j_{0}}\leq\lambda\mathbf{m}(\mathbf{t}^{(j_{0})})\}\) as in the one-dimensional case. This fact can be interpreted in light of the general properties of conditioning and independence. Indeed, we are studying events of the form \(E_{j}\) on \(\boldsymbol{Y}_{.j}\), \(j\in[|p|]\) and we are studying a test statistic \(\boldsymbol{\eta}(E_{1},\ldots,E_{p})^{\top}\boldsymbol{Y}_{.j_{0}}\) conditionally to these events. Here \(E_{j}\) encodes the event corresponding to (6) and (7) in Theorem 2 for variable \(j\). By independence of \(\boldsymbol{Y}_{.j}\), \(j\in[|p|]\), the events \(E_{j}\), \(j\neq j_{0}\) simply have an influence on \(\boldsymbol{\eta}\), while the event \(E_{j_{0}}\) also has an impact on the conditional distribution of \(\boldsymbol{Y}_{.j_{0}}\) given \(E_{j_{0}}\).
#### 3.2.3 Conditional level
For \(j\in[|p|]\), let \(\widehat{\boldsymbol{B}}_{.j}\) be obtained from (2). The next proposition is similar to Proposition 5 and proves that the \(p\)-value \(\mathrm{pval}(\boldsymbol{Y})\) in Section 3.2.1 is uniformly distributed, conditionally to the one-dimensional clusterings and orders, when the null hypothesis is true. We remark that in the context of Section 3.1, this implies that the \(p\)-value is also uniformly distributed conditionally to the \(p\)-dimensional clustering obtained by aggregation, when the null hypothesis is true.
**Proposition 9**.: _Consider \(p\) fixed permutations \(\sigma^{(1)},\ldots,\sigma^{(p)}\) of \([|n|]\). Let \(K^{(1)},\ldots,K^{(p)}\in[|n|]\). For \(j\in[|p|]\), let \(\mathbf{t}^{(j)}\in\mathcal{T}_{K^{(j)},n}\) and consider the clustering \(\mathcal{C}^{(j)}\) associated to \((\mathbf{t}^{(j)},\sigma^{(j)})\) by Definition 2._
_Consider a fixed non-zero vector \(\boldsymbol{\kappa}\in\mathbb{R}^{np}\) (that is only allowed to depend on \((\mathbf{t}^{(j)},\sigma^{(j)}),j\in[|p|]\)). Assume that_
\[\boldsymbol{\kappa}^{\top}\boldsymbol{\beta}=0.\]
_Assume that with non-zero probability, the event_
\[E := \Big{\{}\text{for $j\in[|p|]$, $\mathcal{C}^{(j)}$ is the clustering given by $\widehat{\boldsymbol{B}}_{.j}$ and $Y_{\sigma^{(j)}(1)j}\geq\cdots\geq Y_{\sigma^{(j)}(n)j}$}\Big{\}}\]
_holds. Assume also that the \(np\times np\) matrix \(\boldsymbol{\Gamma}\) is invertible. Then, conditionally to \(E\), \(\mathrm{pval}(\boldsymbol{Y})\) is uniformly distributed on \([0,1]\) under the null hypothesis._
#### 3.2.4 Unconditional level
The unconditional guarantee is similar to that of Proposition 6 for the one-dimensional case. In particular, here we also introduce the subset \(\mathcal{E}\) on which the null hypothesis is well-defined.
**Proposition 10**.: _Let \(\mathcal{E}\) be a subset of the set of all possible values of \((\mathbf{t}^{(j)},\sigma^{(j)})_{j\in[|p|]}\) in Proposition 9. Consider a deterministic function \(\boldsymbol{\kappa}:\mathcal{E}\to\mathbb{R}^{np}\), outputting a non-zero column vector. Assume that \(\boldsymbol{\Gamma}\) is invertible. For \(j\in[|p|]\), let \(\widehat{\boldsymbol{B}}_{.j}\) be obtained from (2). Let also \(S^{(j)}=S^{(j)}(\boldsymbol{Y}_{.j})\) be the random permutation obtained by the order of \(\boldsymbol{Y}_{.j}\): \(Y_{S^{(j)}(1)j}\geq\cdots\geq Y_{S^{(j)}(n)j}\)
(uniquely defined with probability one). Let \(\mathcal{C}^{(j)}(\boldsymbol{Y}_{\cdot,j})=\mathcal{C}^{(j)}\) be the random clustering given by \(\widehat{\boldsymbol{B}}_{\cdot,j}\) (Definition 1). Let \(\mathbf{T}^{(j)}(\boldsymbol{Y}_{\cdot,j})=\mathbf{T}^{(j)}\in\mathcal{T}_{K^{( j)},n}\) be the random vector (with random \(K^{(j)}(\boldsymbol{Y}_{\cdot,j})=K^{(j)}\)), such that \((\mathbf{T}^{(j)},S^{(j)})\) yields \(\mathcal{C}^{(j)}\) as in Definition 2._
_Assume that_
\[\mathbb{P}\left((\mathbf{T}^{(j)},S^{(j)})_{j\in[|p|]}\in\mathcal{E}, \boldsymbol{\kappa}((\mathbf{T}^{(j)},S^{(j)})_{j\in[|p|]})^{\top}\boldsymbol{ \beta}=0\right)>0.\]
_Then, conditionally to the above event, \(\mathrm{pval}(\boldsymbol{Y})\) is uniformly distributed on \([0,1]\)._
### Numerical experiments
In this section, we describe the numerical experiments that we have performed in order to illustrate the behaviour of our post-clustering testing procedure for \(p>1\). The code to reproduce these numerical experiments and the associated figures is available from [https://plmlab.math.cnrs.fr/pneuvial/poclin-paper](https://plmlab.math.cnrs.fr/pneuvial/poclin-paper).
We consider the specific case where \(\boldsymbol{Y}\) is distributed from a matrix normal distribution \(\mathcal{MN}_{n\times p}(\mathbf{u},\boldsymbol{\Sigma},\boldsymbol{\Delta})\) (see Section 1) with \(p=3\), \(\mathbf{u}=\left(\begin{array}{ccc}\nu\mathbf{1}_{n/2}&\mathbf{0}_{n/2}& \mathbf{0}_{n/2}\\ -\nu\mathbf{1}_{n/2}&\mathbf{0}_{n/2}&\mathbf{0}_{n/2}\end{array}\right)\) with \(\nu\in\{0,1,2,5\}\), \(\boldsymbol{\Sigma}=\boldsymbol{I}_{n}\), and \(\boldsymbol{\Delta}=\left(\begin{array}{ccc}1&0&\rho\\ 0&1&0\\ \rho&0&1\end{array}\right)\) with \(\rho\in\{0,0.3,0.5\}\).
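For reference, one replicate of this design can be drawn as in the following hedged sketch (names ours); since \(\boldsymbol{\Sigma}=\boldsymbol{I}_{n}\), the rows of \(\boldsymbol{Y}\) are independent \(\mathcal{N}(\mathbf{u}_{i\cdot},\boldsymbol{\Delta})\) vectors.

```
set.seed(1)
n <- 100; p <- 3; nu <- 2; rho <- 0.3
u <- cbind(c(rep(nu, n / 2), rep(-nu, n / 2)), 0, 0)  # n x p mean matrix
Delta <- matrix(c(1,   0, rho,
                  0,   1, 0,
                  rho, 0, 1), nrow = 3, byrow = TRUE)
Y <- u + matrix(rnorm(n * p), n, p) %*% chol(Delta)   # Y ~ MN(u, I_n, Delta)
```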
We obtain \(K=2\) clusters by aggregating one-dimensional convex clusterings obtained for a given value of \(\lambda\), as explained in Section 3.1. For each variable \(j\in\{1,2,3\}\), we want to compare the means of the two clusters. This corresponds to the test of the null hypothesis \(\boldsymbol{\kappa}^{\top}\boldsymbol{\beta}=0\), where \(\boldsymbol{\kappa}\) is defined by (3) (see Example 1). We compare our procedure, using \(\lambda=0.016\) (resp. \(\lambda=0.0025\)) for \(n=100\) (resp. \(n=1000\)), with the two-group Wilcoxon rank sum test as implemented in the R function wilcox.test. This choice of \(\lambda\) ensures that, with high probability, at least two clusters are obtained under the null hypothesis, as explained in Section 2.5 and Appendix E.1. The empirical cumulative distribution function of the \(p\)-values \(\mathrm{pval}(\mathbf{y})\) across \(500\) experiments is represented for different values of the simulation parameters in Figures 4 and 5 for \(n=100\) and \(n=1000\), respectively. For each parameter combination, the \(p\)-value distribution of the proposed method (in green) is compared to that of the two-group Wilcoxon rank sum test (in orange) for all three variables \(\boldsymbol{Y}_{\cdot,j}\), for \(j=1,2,3\) (in columns). Each row corresponds to a value of \(\nu\) and each line type corresponds to a value of \(\rho\).
First, the clustering procedure described in Section 3.1 works reasonably well in this setting. Indeed, for the variable \(\boldsymbol{Y}_{\cdot,1}\), the absolute value of the difference between the true means of the estimated clusters (obtained as \(\boldsymbol{\kappa}^{\top}\boldsymbol{\beta}\)) is generally close to the true value of the signal (that is \(2\nu\)), see Figure 9 in Appendix E.3.
The proposed test controls the type I error rate: in all situations where there is no signal (that is, for \(\nu=0\) or \(j\in\{2,3\}\)), the empirical \(p\)-value distribution is close to the uniform distribution on \([0,1]\) (\(y=x\)). Under the alternative hypothesis (i.e. for \(j=1\) and \(\nu>0\)), our proposed test is able to detect some signal for \(\nu\geq 2\). For \(\nu=1\) the signal is too small to be detected.
In contrast, the naive Wilcoxon test yields severely anti-conservative \(p\)-values in the absence of signal. This test is naturally much more sensitive than our proposed test. However, it should
be noted that one cannot compare the power of the two tests, since the Wilcoxon test fails to control type I error.
Regarding the influence of \(n\): our proposed method does not gain much power as \(n\) increases from \(100\) to \(1000\). This is consistent with the fact that the signal is not different across values of \(n\), see Figure 9. For \(n=1000\), the Wilcoxon test is able to distinguish the signal from the noise when \(\rho=0\) and actually becomes well-calibrated for \(\boldsymbol{Y}_{.2}\) when \(\nu\neq 0\). However, due to the correlation between \(\boldsymbol{Y}_{.1}\) and \(\boldsymbol{Y}_{.3}\), the Wilcoxon test is anti-conservative for \(\boldsymbol{Y}_{.3}\).
## 4 Discussion
We first provide an overview of our contributions, and then we discuss various specific aspects of them and various remaining open questions.
Figure 4: The empirical cumulative distribution function of the \(p\)-values across \(500\) experiments for \(n=100\) with our method poclin (in green) and the Wilcoxon test (in orange). Each column corresponds to a variable \(j\), each row to a value of \(\nu\) and each line type to a value of \(\rho\).
### Overview of the contributions
Selective inference, in the post-clustering context, is a challenging problem and statistical guarantees could be obtained for it only in recent years, see the references provided in Section 1. In this paper, we suggest a solution based on exhibiting polyhedral conditioning sets for Gaussian vectors, extending a line of work that has proved to be very successful in other statistical contexts, especially for regression models. This line of work was pioneered by [14] and then developed by [23, 29], among others.
Nevertheless, extending the existing approaches from regression models to clustering models is challenging. As such, the proofs we provide require innovations (for instance for Theorems 2 and 7). Furthermore, obtaining polyhedral conditioning sets is made possible by focusing on intermediate one-dimensional convex clustering optimization problems based on \(\ell_{1}\) penalties (see (2)). In the end, we provide the following workflow for selective inference post-clustering.
(1) We characterize a one-dimensional clustering by polyhedral constraints on the observation vector (Section 2.2).

Figure 5: The empirical cumulative distribution function of the \(p\)-values across 500 experiments for \(n=1000\) with our method poclin (in green) and the Wilcoxon test (in orange). Each column corresponds to a variable \(j\), each row to a value of \(\nu\) and each line type to a value of \(\rho\).
(2) As a by-product, we provide a regularization path algorithm to implement this clustering (Section 2.4). The computational efficiency of this algorithm is demonstrated numerically, also in comparison with other existing procedures.
(3) Following [14], from the polyhedral constraints, we obtain a test procedure which is conditionally and unconditionally valid post-clustering (Section 2.3); a generic sketch of the underlying truncated-Gaussian computation is given after this list. The procedure enables testing the nullity of any linear combination of the unknown mean vector, provided this combination only depends on the clustering (and on the order of the observations). In particular, it is possible to test for the significance of the signal difference between two clusters as in Example 1 (see Equation (16)). Although we do not develop it in this paper, confidence intervals for the above linear combination can be constructed from our test procedure, similarly as in [14]. Numerical experiments (Section 2.5) confirm the validity of the test procedure, and indicate that it has power to detect cases where clustering the observation vector also clusters the unknown mean vector into inhomogeneous groups.
(4) We suggest aggregating one-dimensional clusterings to form a single multi-dimensional clustering for the data matrix. Our above contributions can thus be naturally leveraged to obtain a valid test procedure, posterior to this multi-dimensional clustering (Section 3.2). In particular, we can test the significance of the signal difference between clusters along a specific variable, as in Example 1. This feature could be beneficial in potential applications to single-cell RNA-seq data, since in this context, testing along a specific variable makes it possible to study gene expression individually. It is also a welcome complement to related references, in particular [6], which focuses on testing the global nullity of the signal mean difference vector across two clusters, rather than considering individual components (i.e. variables).
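As announced in item (3), the generic computation behind tests based on polyhedral conditioning, in the spirit of [14], is a \(p\)-value for a Gaussian statistic restricted to an interval. The sketch below is ours and purely illustrative: the truncation bounds, the standard deviation of the linear combination and the two-sided convention are assumptions, while in our setting the bounds come from the polyhedral set of Section 2.2.

```python
import numpy as np
from scipy.stats import norm

def truncated_gaussian_pvalue(stat, sigma, lower, upper):
    """Two-sided p-value for a statistic that is N(0, sigma^2) under the null,
    conditioned on lying in the interval [lower, upper].

    Here `lower` and `upper` play the role of the bounds induced by a
    polyhedral conditioning set; no numerical safeguards are included.
    """
    cdf = lambda t: norm.cdf(t / sigma)
    denom = cdf(upper) - cdf(lower)
    survival = (cdf(upper) - cdf(stat)) / denom   # P(Z >= stat | lower <= Z <= upper)
    return 2.0 * min(survival, 1.0 - survival)

# Hypothetical numbers: observed statistic 1.8, standard deviation 1.0,
# truncation interval [0.5, 5.0].
print(truncated_gaussian_pvalue(1.8, 1.0, 0.5, 5.0))
```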
This workflow (1)-(4) depends on a regularization parameter \(\lambda\) that should not be data-driven (see Section 4.3 below). From a practical point of view, we provide a procedure to choose \(\lambda\) in a non data-driven way, from a choice of the covariance matrix, see Sections 2.5 and 3.3, and Appendix E.1.
Similarly as in the one-dimensional case, we provide numerical experiments (Section 3.3) that both confirm the validity of the test procedure and demonstrate its power to detect when the clustering procedure successfully yields clusters with a significant signal difference for individual variables. These numerical experiments (as well as those in Section 2.5) also indicate that inference post-clustering is challenging, in that statistical procedures that do not account for the data-driven nature of the clustering are strongly anti-conservative. Indeed, the standard Wilcoxon test wrongly indicates signal differences across clusters in many cases where there is actually no difference. Note that the numerical experiments are focused on the hierarchical-clustering-based aggregation of one-dimensional clusterings, as described in Section 3.1. In future investigations, it would be relevant to quantify the benefit brought by alternative aggregation methods. Indeed, our framework is flexible in that our statistical guarantees hold for any aggregation procedure.
### Benefits of the test procedure in well- and misspecified clustering problems
For simplicity, let us focus on the one-dimensional case of Section 2, with the observation vector \(\mathbf{X}\sim\mathcal{N}(\mathbf{\mu},\mathbf{\Sigma})\). The discussion of the multi-dimensional case of Section 3 would be similar. The clustering problem can be considered as well-specified if there are clusters of indices for the mean vector \(\mathbf{\mu}\) with equal values, corresponding to a Gaussian mixture setting (see for instance [8, 13, 16, 22] for expositions and recent contributions on mixture models).
In the well-specified case, there are thus intrinsic classes of the observations and it is natural to aim at recovering them.
Consider for the sake of discussion that \(n/2\) components of \(\mathbf{\mu}\) are zero and the other \(n/2\) components are one (there are two intrinsic classes) and that the clustering procedure yields two clusters \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) of equal size. Then if the null hypothesis \((2/n)\sum_{i\in\mathcal{C}_{1}}\mu_{i}=(2/n)\sum_{i\in\mathcal{C}_{2}}\mu_{i}\) is rejected by our test procedure, it means that one empirical cluster contains a strict majority of individuals from one intrinsic class, and vice versa for the second cluster. If our test procedure is extended to yield a confidence interval on \((2/n)\sum_{i\in\mathcal{C}_{1}}\mu_{i}-(2/n)\sum_{i\in\mathcal{C}_{2}}\mu_{i}\) showing that with high probability this quantity is larger than some \(\delta\in(0,1)\), then one can see that the first empirical cluster contains at least \(n(\delta+1)/4\) observations from an intrinsic class (corresponding to mean one; and conversely for the second cluster). Hence, generally speaking, for a well-specified clustering problem with intrinsic classes, our test procedure is relevant to recover these classes, similarly as statistical procedures that are dedicated to finite mixture problems, see the references given above.
On the other hand, the clustering problem can be considered as misspecified when the \(n\) components of \(\mathbf{\mu}\) are pairwise distinct. In this case one can consider that there are no intrinsic classes. Nevertheless, providing tests or confidence intervals on the same quantity \((2/n)\sum_{i\in\mathcal{C}_{1}}\mu_{i}-(2/n)\sum_{i\in\mathcal{C}_{2}}\mu_{i}\) as before makes it possible to assess whether the clustering procedure was able to cluster the unknown mean vector, beyond the random/noisy observations. Hence, a benefit of the post-clustering framework considered here is that it is meaningful both in well- and misspecified settings. A similar discussion can be made in the related context of selective inference in regression settings, see in particular [1; 3].
### Known covariance matrix and fixed \(\mathbf{\lambda}\)
As pointed out above, we assume the covariance matrix (\(\mathbf{\Sigma}\) in Section 2 and \(\mathbf{\Gamma}\) in Section 3) to be known and the tuning parameter \(\lambda\) to be fixed. These two assumptions are necessary for our statistical guarantees in Sections 2 and 3. Indeed, obtaining these guarantees relies first on exhibiting a Gaussian vector constrained to a polyhedron. Then, the Gaussian vector is decomposed into a linear combination (corresponding to the statistical hypothesis to test) and an independent remainder. This two-step strategy corresponds in particular to Lemma 3 and Proposition 4 in the one-dimensional case. It was previously suggested by [14] in the related context of post-selection inference for the lasso model selector, with Gaussian linear models.
Obtaining a polyhedron in the first step relies on \(\lambda\) not depending on the data, and computing the decomposition in the second step relies on knowing the covariance matrix. Broadly speaking, in the selective inference context, it is relatively common to assume known covariance matrices, or fixed tuning parameters, in order to obtain rigorous mathematical guarantees. This is indeed the case in [14] mentioned above, but also for instance in [6]. In this latter reference, the covariance matrix is assumed to be proportional to the identity, with a known variance for most of the theoretical results. Asymptotic results are given there in Section 4.3 for the case of a conservative variance estimator. Also, data thinning procedures, for instance in [18], usually require knowledge of the data distribution in order to produce independent parts, where the independence property enables valid statistical inference.
In our setting, obtaining theoretical guarantees (finite-sample or asymptotic) with an estimated covariance matrix or a data-dependent tuning parameter is of course an important problem for future work. In other contexts, successes have been obtained in this direction,
see in particular [28, 35]. Note that relaxing the assumption of known covariance matrix can yield identifiability issues, because the mean vector \(\mathbf{\beta}\) is unrestricted (see also the discussion of misspecified clustering problems in Section 4.2). These identifiability issues boil down to the fact that multiple pairs of mean vector and covariance matrix can "explain" the same dataset. Studying which minimal assumptions circumvent these identifiability issues is thus an important problem in the prospect of extending this work to an estimated covariance matrix.
### Choice of the \(\ell_{1}\) norm in the multi-dimensional convex clustering problem (1)
Our test procedure and its statistical guarantees for the multi-dimensional case rely on aggregating one-dimensional clusterings. As discussed in Remark 2, solving Problem (1) with the multi-dimensional \(\ell_{1}\) norm penalization boils down to one such aggregation. Hence, our procedure and guarantees apply to multi-dimensional convex clustering with \(\ell_{1}\) penalization.
One can see that our arguments, and crucially the proof of Theorem 2, cannot be applied directly to convex clusterings obtained by replacing the \(\ell_{1}\) penalization by a more general \(\ell_{q}\) one, \(q>0\), and especially by the \(\ell_{2}\) one. In fact, we view the following question as an important open problem: is it possible to characterize the set of observation matrices \(\mathbf{Y}\), such that Problem (1), with the \(\ell_{1}\) penalization replaced by the \(\ell_{q}\) one, yields a given clustering, with polyhedral sets or other tractable sets?
Nevertheless, we note that the \(\ell_{1}\) penalization in Problem (1) has computational benefits. Indeed, the problem is separable, and for each subproblem, we have obtained an exact regularization path in Section 2.4 that stops after a maximal number of iterations known in advance. To our knowledge, such a favorable regularization path is not available for a general \(\ell_{q}\) penalization. In agreement with this, the reference [10] (from 2011) concludes that Problem (1) can be readily solved for thousands of data points, while if the \(\ell_{1}\) penalization is replaced by the \(\ell_{q}\) one, this is the case for (only) hundreds of data points.
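To make the computational remark concrete, here is a small illustrative sketch (ours) of a one-dimensional convex clustering subproblem solved for a fixed \(\lambda\) with a generic convex solver. It uses the plain uniform-weight objective \(\tfrac{1}{2}\|x-\beta\|^{2}+\lambda\sum_{i<j}|\beta_{i}-\beta_{j}|\), which may differ from (2) in weights or normalization, and it recovers clusters by grouping coordinates with numerically equal fitted values; the regularization path algorithm of Section 2.4 is of course far more efficient than calling a solver at each \(\lambda\).

```python
import numpy as np
import cvxpy as cp

def convex_clustering_1d(x, lam):
    """Solve a uniform-weight 1-D convex clustering problem at a fixed lambda.

    Returns the fitted values and integer cluster labels obtained by grouping
    coordinates whose fitted values coincide up to rounding.
    """
    n = len(x)
    beta = cp.Variable(n)
    penalty = sum(cp.abs(beta[i] - beta[j]) for i in range(n) for j in range(i + 1, n))
    cp.Problem(cp.Minimize(0.5 * cp.sum_squares(x - beta) + lam * penalty)).solve()
    fitted = beta.value
    rounded = np.round(fitted, 2)                     # crude numerical grouping
    labels = np.searchsorted(np.unique(rounded), rounded)
    return fitted, labels

x = np.array([0.1, 0.2, 0.0, 2.9, 3.1, 3.0])
fitted, labels = convex_clustering_1d(x, lam=0.2)
print(np.round(fitted, 3), labels)   # two fused groups are expected for this lambda
```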
### Comparison with data splitting strategies
For the problem of post-clustering inference, data splitting (or data fission, data thinning) strategies [5, 18, 36] consist in separating the dataset into two stochastically independent ones, keeping the same indexing of individuals as the original dataset. Then, a clustering can be computed from the first dataset and applied to the second dataset. By independence, the distribution of a post-clustering statistic of interest (for instance the difference of averages between two classes for a variable, in view of studying (4)) on the second dataset remains simple. For instance, if the original dataset is Gaussian, this distribution remains Gaussian conditionally on the clustering. Hence, a benefit of data splitting compared to our approach is its simplicity of implementation. Furthermore, any clustering procedure can be used.
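For concreteness, a minimal sketch (ours) of this idea for Gaussian data with known variance is given below: adding and subtracting an independent Gaussian copy of the noise yields two independent datasets, the first used for clustering and the second for testing. This is only one possible splitting mechanism; the references above describe more general data fission and thinning schemes.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
sigma = 1.0
mu = np.concatenate([np.zeros(50), 2.0 * np.ones(50)])  # hypothetical two-class mean
X = rng.normal(mu, sigma)

# With known sigma, X + Z and X - Z are independent (each has variance 2 * sigma^2).
Z = rng.normal(0.0, sigma, size=X.shape)
X_cluster, X_test = X + Z, X - Z

# Cluster on the first copy only.
labels = fcluster(linkage(X_cluster[:, None], method="ward"), t=2, criterion="maxclust")

# Test the mean difference between the two estimated clusters on the independent copy.
print(ttest_ind(X_test[labels == 1], X_test[labels == 2]))
```

Note that the clusters tested in this sketch are those of \(X+Z\), not of \(X\), which is precisely the interpretability caveat discussed next.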
On the other hand, with data splitting, conclusions are provided for a clustering computed on a dataset that differs from the original one. Hence, the conclusions of data splitting approaches might be more difficult to interpret for practitioners, compared to those of the present work, since these conclusions do not apply to the clustering that practitioners would compute on the original dataset.
Note also that data splitting and our approach share two similar difficulties. First, they share hyperparameters that should not be data-driven for the statistical guarantees to hold. Indeed, with data splitting we need to fix the splitting mechanism that generates the two datasets above. Similarly, we fix the regularization parameter \(\lambda\) in (1). Second, considering Gaussian data, the covariance matrix should be known both for data splitting and for our approach, as already discussed in Section 4.3.
### On conditioning by the orders
Let us consider the one-dimensional setting (Section 2) for simplicity of exposition. A similar discussion could be made for the multi-dimensional case as well. Our test procedure is valid conditionally to both the clustering and the order of observations, see Proposition 5, and our discussion at the beginning of Section 2.2. Being valid conditionally to the clustering can be considered as a desirable statistical feature, since the clustering is an object of interest in itself (see also the discussion before Proposition 5). However, being valid conditionally to the order is more a by-product of our approach than a desirable statistical feature. Indeed, in order to obtain a polyhedral set with a tractable number of linear pieces (\(2(n-1)\)) in Theorem 2, it was necessary in the proof to condition by the observation order. Importantly, the constraint (9) is not a linear constraint on the observation vector if the order is not fixed.
It could be the case that a test procedure derived by conditioning only by the clustering would have more power than the one we obtain in Section 2.3, which is an interesting perspective for future work. In other words, is it possible that we pay a price when conditioning by the order of observations? In the related regression context, a similar phenomenon occurs in [14]. There, a first test procedure is obtained by conditioning by the selected variables and a second one is obtained by conditioning by the selected variables and the signs of the coefficients. The first procedure has a computational cost that is exponential in the number of variables, but is more powerful. The second procedure has a small computational cost. In Section 6 of [14], it is written on this point that "one may be willing to sacrifice statistical efficiency for computational efficiency".
## Acknowledgements
This work was supported by the Project GAP (ANR-21-CE40-0007) of the French National Research Agency (ANR), and by the MITI at CNRS through the DDisc project.
| 2303.05877 | Absence and presence of Lavrentiev's phenomenon for double phase functionals upon every choice of exponents | We study classes of weights ensuring the absence and presence of the Lavrentiev's phenomenon for double phase functionals upon every choice of exponents. We introduce a new sharp scale for weights for which there is no Lavrentiev's phenomenon up to a counterexample we provide. This scale embraces the sharp range for $\alpha$-Hölder continuous weights. Moreover, it allows excluding the gap for every choice of exponents $q,p>1$. | Michał Borowski, Iwona Chlebicka, Filomena De Filippis, Błażej Miasojedow | 2023-03-10T12:06:12Z | http://arxiv.org/abs/2303.05877v1 | # Absence and presence of Lavrentiev's phenomenon for double phase functionals upon every choice of exponents
###### Abstract
We study classes of weights ensuring the absence and presence of the Lavrentiev's phenomenon for double phase functionals upon every choice of exponents. We introduce a new sharp scale for weights for which there is no Lavrentiev's phenomenon up to a counterexample we provide. This scale embraces the sharp range for \(\alpha\)-Holder continuous weights. Moreover, it allows excluding the gap for every choice of exponents \(q,p>1\).
keywords: Lavrentiev's phenomenon, double-phase functionals, calculus of variations, relaxation methods
## 1 Introduction
We consider the following double-phase functional
\[\mathcal{F}[u]=\int_{\Omega}|\nabla u(x)|^{p}+a(x)|\nabla u(x)|^{q}\,dx\,, \tag{1}\]
over an open and bounded \(\Omega\subset\mathbb{R}^{n}\), \(n>1\), where \(1\leq p,q<\infty\) and the weight \(a:\Omega\to[0,\infty)\) is bounded. The functional is designed to model the transition between the region where the gradient is integrable with the \(p\)-th power and the region where it has the higher integrability with the \(q\)-th power. Therefore, we are interested only in the situation when \(p<q\) and \(a\) vanishes on some subset of \(\Omega\), but \(a\not\equiv 0\). The functional \(\mathcal{F}\) and various kinds of its minimizers have been studied since [33; 36], and this study was continued in a vast range of contributions including [5; 6; 17; 18; 19; 22; 23; 27], with sharpness discussed in [3; 23; 24; 35; 36]. More recent developments in this matter may be found in [2; 4; 9; 10; 14; 20; 30]. Our main focus is on the approximation properties of the double phase version of the Sobolev space, in the spirit of [1; 7; 16], and the consequences for double phase functionals, cf. [8; 11]. By the developments of [5; 11; 17], revisited in [8], it is known that the condition
\[a\in C^{0,\alpha}(\Omega),\ \alpha\in(0,1],\ \text{and}\ \ p<q\leq p+\alpha \tag{2}\]
is enough for good approximation properties of the energy space by regular functions. This condition is meaningful only provided that \(p<q\leq p+1\), but, as explained in this paper, \(q\) and \(p\) can actually be arbitrarily far from each other as long as the weight has the relevant decay properties. We also show the sharpness of our new scale up to a counterexample we provide.
Let us at first settle what we mean by the Lavrentiev's phenomenon in our case. For \(1<p<q<\infty\) and \(a:\Omega\to[0,\infty)\), we set
\[M(x,t)=t^{p}+a(x)t^{q}.\]
Given a bounded and open set \(\Omega\subset\mathbb{R}^{n}\), let us define the energy space
\[W(\Omega):=\left\{\varphi\in W_{0}^{1,1}(\Omega):\quad\int_{\Omega}M(x,| \nabla\varphi|)\,dx<\infty\right\} \tag{3}\]
endowed with a Luxemburg-type norm. Note that we have the inclusion \(C_{c}^{\infty}\subset W\), and in turn
\[\inf_{v\in u_{0}+W}\mathcal{F}[v]\leq\inf_{w\in u_{0}+C_{c}^{\infty}}\mathcal{ F}[w]\,.\]
It is known that if \(M\) is not regular enough, the inequality above is strict, i.e.,
\[\inf_{v\in u_{0}+W}\mathcal{F}[v]<\inf_{w\in u_{0}+C_{c}^{\infty}}\mathcal{F} [w]\,, \tag{4}\]
which means that the Lavrentiev's phenomenon between the spaces \(C_{c}^{\infty}\) and \(W\) occurs. The first example of such a situation, for a different functional, was provided by Lavrentiev, see [31, 32]. There has been a deep interest in autonomous and nonautonomous problems throughout the decades; see [12, 13, 23, 36] and references therein, as well as the already mentioned recent contributions [2, 3, 4, 5, 8, 9, 10, 11, 14, 17, 18, 20, 22, 23, 30]. In particular, in [36] Zhikov introduced the double-phase functional (1) and provided an example of the Lavrentiev's phenomenon in dimension \(n=2\) and for \(p\in[1,2]\), \(q>3\) and a Lipschitz continuous weight \(a\). His example was extended to arbitrary \(n\geq 2\) in [23], requiring that \(p<n<n+\alpha<q\), where \(\alpha\) is the exponent of the Holder continuity of the weight \(a\).
The regularity of the possibly vanishing weight \(a\) dictates how far apart the powers \(p\) and \(q\) can be in order to exclude (4). In particular, it was known that if (2) is satisfied, then there is no Lavrentiev's phenomenon, and \(q=p+1\) used to be treated as a borderline. We consider a new sharp scale \(\mathcal{Z}^{\varkappa}\) that captures the abovementioned result. A function \(a\in\mathcal{Z}^{\varkappa}\) is assumed to decay in the transition region at least like a power function with an exponent \(\varkappa\) for \(\varkappa\in(0,\infty)\). In turn, our approach extends the result on the range for the absence of the Lavrentiev's gap to \(a\in\mathcal{Z}^{\varkappa}\) within
\[p<q\leq p+\varkappa,\quad\varkappa\in(0,\infty)\,. \tag{5}\]
Within the range (5), we prove the absence of the Lavrentiev's phenomenon for \(\mathcal{F}\) between \(W\) and \(C_{c}^{\infty}\) up to a counterexample from Section 4. The definition of \(\mathcal{Z}^{\varkappa}\) reads as follows.
**Definition 1.1** (Class \(\mathcal{Z}^{\varkappa}(\Omega)\), \(\varkappa\in(0,\infty)\)).: _Let \(\Omega\subset\mathbb{R}^{n}\), \(n\in\mathbb{N}\). A function \(a:\Omega\to[0,\infty)\) belongs to \(\mathcal{Z}^{\varkappa}(\Omega)\) for \(\varkappa\in(0,\infty)\), if there exists a positive constant \(C\) such that_
\[a(x)\leq C\left(a(y)+|x-y|^{\varkappa}\right) \tag{6}\]
_for all \(x,y\in\Omega\)._
Of course, \(\alpha\)-Holder continuous functions for \(\alpha\in(0,1]\) belong to \(\mathcal{Z}^{\alpha}\), but \(\mathcal{Z}^{\varkappa}\) with \(\varkappa\in(0,\infty)\) is an essentially broader class of functions, see Figure 1. In particular, for every \(\varkappa\in(0,\infty)\), we have that the function \(x\mapsto|x|^{\varkappa}\) belongs to \(\mathcal{Z}^{\varkappa}(\mathbb{R}^{n})\). To provide a better understanding of this new scale, we set down its main properties.
**Remark 1.2** (Basic properties of \(\mathcal{Z}^{\varkappa}(\Omega)\)).: _If \(\Omega\subset\mathbb{R}^{n}\) is an open set, then the following holds._
1. _A function_ \(a\) _belongs to_ \(\mathcal{Z}^{\varkappa}(\Omega)\) _for_ \(\varkappa\in(0,1]\) _if and only if there exists_ \(\widetilde{a}\in C^{0,\varkappa}(\Omega)\)_, such that_ \(a\) _is comparable to_ \(\widetilde{a}\)_; i.e., there exists a positive constant_ \(c\) _such that_ \(\widetilde{a}\leq a\leq c\widetilde{a}\)_._
2. _Let_ \(\varkappa,\beta\in(0,\infty)\)_. Function_ \(a\in\mathcal{Z}^{\varkappa}(\Omega)\) _if and only if_ \(a^{\beta}\in\mathcal{Z}^{\beta\varkappa}(\Omega)\)_._
3. _Function_ \(a\in\mathcal{Z}^{\varkappa}(\Omega)\) _for_ \(\varkappa\in(0,\infty)\) _if and only if there exists_ \(\widetilde{a}\) _comparable to_ \(a\) _such that_ \(\widetilde{a}^{\frac{1}{\varkappa}}\in\text{Lip}(\Omega)\)_._
4. _If_ \(0<\varkappa_{1}\leq\varkappa_{2}\)_, then_ \(\mathcal{Z}^{\varkappa_{2}}(\Omega)\subset\mathcal{Z}^{\varkappa_{1}}(\Omega)\)_._
The first point of Remark 1.2 says that for \(\varkappa\in(0,1]\), the class \(\mathcal{Z}^{\varkappa}(\Omega)\) is similar to Holder continuity, but it actually only requires an admissible decay rate near the regions where \(a\) vanishes. The second point of the remark allows extending this intuition to \(\varkappa>1\), as we can look at some power of \(a\). In particular, according to the third point, the \(\varkappa\)-th roots of functions in \(\mathcal{Z}^{\varkappa}\) are comparable to Lipschitz continuous functions. We show examples of functions in \(\mathcal{Z}^{\varkappa}\) on an interval for large and small values of the parameter \(\varkappa\) in Figure 1. In both of these cases, functions from \(\mathcal{Z}^{\varkappa}\) need not be smooth or even continuous. The controlled property is the rate of decay in the transition region, which is comparable to a power function with an exponent \(\varkappa\). Let us stress that \(C^{1,\alpha}\)-regularity for \(\alpha\in(0,1]\) of the weight implies its \(\mathcal{Z}^{1+\alpha}\)-regularity, but smoothness of the weight does not give more than \(\mathcal{Z}^{2}\). To state it precisely, we give the following proposition proven in the appendix.
**Proposition 1.3**.: _If \(\Omega\subset\mathbb{R}^{n}\) is an open set, then the following holds._
1. _If_ \(0\leq a\in C^{1,\alpha}(\overline{\Omega})\) _for some_ \(\alpha\in(0,1]\) _and_ \(a>0\) _on_ \(\partial\Omega\)_, then_ \(a\in\mathcal{Z}^{1+\alpha}(\Omega)\)_._
2. _There exists_ \(0\leq a\in C^{\infty}(\overline{\Omega}),\) _such that_ \(a>0\) _on_ \(\partial\Omega\)_,_ \(a\in\mathcal{Z}^{2}(\Omega)\)_, and_ \(a\not\in\mathcal{Z}^{2+\varepsilon}(\Omega)\) _for any_ \(\varepsilon>0\)_._
Our main result yields that for any \(q\) and \(p\) the Lavrentiev's phenomenon is absent provided that, close to the phase transition, \(a\) decays not slower than a power function with the exponent \(q-p\), equivalently, provided that \(a^{\frac{1}{q-p}}\) is comparable to a Lipschitz continuous function. We also claim that this range is sharp, that is, if the decay rate is slower and \(p\) and \(q\) meet the dimensional threshold, then the gap occurs.
**Theorem 1**.: _Suppose \(1<p<q<\infty\), \(\varkappa>0\), \(a:\Omega\to[0,\infty)\), \(\mathcal{F}\) is given by (1), and \(W(\Omega)\) is defined in (3). Then the following claims hold true._
1. _If_ \(q\leq p+\varkappa\)_,_ \(\Omega\) _is a Lipschitz domain,_ \(a\in\mathcal{Z}^{\varkappa}(\Omega)\)_, and_ \(u_{0}\) _satisfies_ \(\mathcal{F}[u_{0}]<\infty\)_, then_ \[\inf_{v\in u_{0}+W(\Omega)}\mathcal{F}[v]=\inf_{w\in u_{0}+C_{c}^{\infty}( \Omega)}\mathcal{F}[w]\,,\] _i.e., there is no Lavrentiev's phenomenon between_ \(W(\Omega)\) _and_ \(C_{c}^{\infty}(\Omega)\)_._
2. _If_ \(p<n<n+\varkappa<q\)_, then there exist a Lipschitz domain_ \(\Omega\)_,_ \(a\in\mathcal{Z}^{\varkappa}(\Omega)\) _and_ \(u_{0}\) _satisfying_ \(\mathcal{F}[u_{0}]<\infty\)_, such that_ \[\inf_{v\in u_{0}+W(\Omega)}\mathcal{F}[v]<\inf_{w\in u_{0}+C_{c}^{\infty}( \Omega)}\mathcal{F}[w]\,,\] _i.e., the Lavrentiev's phenomenon occurs between_ \(W(\Omega)\) _and_ \(C_{c}^{\infty}(\Omega)\)_._
In turn, in order to ensure the absence of the Lavrentiev's gap for any \(q\) and \(p\) one can take a weight \(a\) decaying like \(e^{-1/t^{2}}\), that is faster than any polynomial, see Remark 1.2, point 3. We stress that no continuity or smoothness of \(a\) is required. Nonetheless, in the view of Proposition 1.3, if \(a\in C^{1,\alpha}\), \(\alpha\in(0,1]\), then Theorem 1_(i)_ implies the absence of the Lavrentiev's phenomenon as long as \(q\leq p+1+\alpha\). Let us note that Theorem 1 is formulated for a model functional for the sake of clarity of exposition. The same conclusion as in _(i)_ for a general class of functionals is given by Theorem 5. Let us note that a modification of our method might be used to relax the required regularity of the domain, cf. [10]. Moreover, we point out that there are methods to get rid of the dimensional threshold between \(p\) and \(q\) in \((ii)\) that involve the construction of fractals, see [3] for a general method. However, we restrict ourselves to the exponents satisfying \(p<n<n+\varkappa<q\) in order to make the proofs as concise and straightforward as possible.
On the other hand, we prove also that the functional \(\mathcal{F}\) given by (1) enjoys interpolation properties. Namely, if one assumes additionally that \(u\in C^{0,\gamma}\), \(\gamma\in(0,1]\), we can relax the bound (5) even further. In fact, for \(\gamma=1\) there is no gap for arbitrary \(p\) and \(q\). Moreover, to exclude the gap between \(W\cap C^{0,\gamma}\) and \(C_{c}^{\infty}\), it suffices to take
\[q\leq p+\frac{\varkappa}{1-\gamma},\quad\varkappa\in(0,\infty)\,. \tag{7}\]
This we provide in Section 3.2, namely in Theorem 4 and its extension Theorem 5. As far as double phase functionals are concerned, it fully covers the results of [5, 8, 11], but allows considering \(p\) and \(q\) arbitrarily far apart, for any \(\gamma\). Moreover, this interpolation phenomenon has one more application. Note that in the case of functional (1), for \(p>n\) one can apply the Morrey Embedding Theorem to obtain that all functions from the energy space \(W\) are Holder continuous with a certain exponent. Substituting this exponent into the range (7) gives the condition \(q\leq p+\frac{p\varkappa}{n}\) for the absence of the Lavrentiev's phenomenon between \(C_{c}^{\infty}\) and \(W\). This range embraces the classical range from [23], but it employs the scale \(\mathcal{Z}^{\varkappa}\) instead of Holder continuity. Therefore, our range is meaningful for \(p\) and \(q\) arbitrarily far apart. We can cover both cases \(p\leq n\) and \(p>n\) simultaneously, obtaining the following theorem.
**Theorem 2**.: _Suppose \(\Omega\subset\mathbb{R}^{n}\) is a bounded Lipschitz domain, \(p,q,\varkappa\) are such that_
\[\varkappa>0\quad\text{ and }\quad 1<p<q\leq p+\varkappa\max\left\{\tfrac{p}{n},1 \right\}\,,\]
\(a:\Omega\to[0,\infty)\) is such that \(a\in\mathcal{Z}^{\varkappa}(\Omega)\), \(\mathcal{F}\) is given by (1), and \(W(\Omega)\) is defined in (3). Then for any \(u_{0}\) such that \(\mathcal{F}[u_{0}]<\infty\), it holds that_
\[\inf_{v\in u_{0}+W(\Omega)}\mathcal{F}[v]=\inf_{w\in u_{0}+C^{\infty}_{c}( \Omega)}\mathcal{F}[w]\,, \tag{8}\]
_i.e., there is no Lavrentiev's phenomenon between \(W(\Omega)\) and \(C^{\infty}_{c}(\Omega)\)._
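As a worked instance of this bound (ours, purely for illustration): take \(n=3\), \(p=2\) and the weight \(a(x)=|x_{1}|^{6}\). Since \((|x_{1}|^{6})^{1/6}=|x_{1}|\) is Lipschitz continuous, point 3 of Remark 1.2 gives \(a\in\mathcal{Z}^{6}(\Omega)\), and Theorem 2 then excludes the gap whenever
\[q\leq p+\varkappa\max\left\{\tfrac{p}{n},1\right\}=2+6\cdot\max\left\{\tfrac{2}{3},1\right\}=8\,,\]
so that \(q-p\) may be as large as \(6\), far beyond the threshold \(q\leq p+1+\alpha\) available for \(C^{1,\alpha}\) weights.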
One can ask whether the preeminent regularity results for double phase functionals may be proven within the class \(\mathcal{Z}^{\varkappa}\). Let us point out that assumptions _(A1)_ and _(A1-n)_, studied in [25] for analysis in generalized Orlicz spaces, specified for the double phase energy for \(a\in\mathcal{Z}^{\varkappa}(\Omega)\), \(\varkappa>0\) read \(q\leq p+p\varkappa/n\) and \(q\leq p+\varkappa\), respectively. In turn, in [27] it is proven that quasiminimizers of \(\mathcal{F}\) are Holder continuous provided _(A1)_ and _(A1-n)_ hold, which means \(q\leq p+\varkappa\min\{p/n,1\}\) for \(\varkappa>0\). On the other hand, in [26] it is proven that \(\omega\)-minimizers of \(\mathcal{F}\) are Holder continuous provided _(A1)_ or \(u\) is a priori bounded and _(A1-n)_ holds. Moreover, for the double phase functional \(\mathcal{F}\) with \(a\in\mathcal{Z}^{\varkappa}(\Omega)\) condition _(VA1)_ from [28, 29] is also equivalent to \(q/p\leq 1+\varkappa/n\), \(\varkappa>0\). Consequently, \(C^{0,\beta}\) and \(C^{1,\beta}\)-regularity of local minimizers from these papers hold for \(a\in\mathcal{Z}^{\varkappa}(\Omega)\) with \(q/p\leq 1+\varkappa/n\), \(\varkappa>0\). It would be interesting to extend the results of [5] to cover \(a\in\mathcal{Z}^{\varkappa}(\Omega)\) for all \(\varkappa>0\) and \(q\leq p+\varkappa\) (if a priori \(u\in L^{\infty}\)) or \(q\leq p+\varkappa/(1-\gamma)\) (if a priori \(u\in C^{0,\gamma}\)). Note that the \(C^{0,\beta}\) and \(C^{1,\beta}\)-regularity of local minimizers is also the topic of [2], where the functionals are of Orlicz multi phase growth. Nonetheless, the assumption made therein that the weights \(a_{i}\in C^{0,\omega_{i}}\) for controlled moduli of continuity \(\omega_{i}\) which are concave does not allow for studying Orlicz phases with growths arbitrarily far, which is allowed for counterparts of \(\mathcal{Z}^{\varkappa}\) for all \(\varkappa>0\) or under _(A1)/(A1-n)_. See Section 5 for possible extension of \(\mathcal{Z}^{\varkappa}\) for the Orlicz multi phase case. The double phase functionals with the integrand depending on \((x,u,\nabla u)\) (cf. (29)) embracing both of the mentioned contributions allowing for phases with growth arbitrarily far apart and involving the weights decaying like the dot-dashed one in Figure 1 are still calling for the local regularity theory under the sharp regime.
Let us briefly summarize our methods. Unlike [9, 20, 30], we do not analyse the behaviour of minimizers, but we study the approximation properties of a relevant function space. In this regard, we first establish the density of smooth functions in the double phase version of the Sobolev space (Theorem 3). In related investigations, employing convolution-based approximation is a common technique, see e.g. [1, 2, 7, 8, 11, 16, 22, 23, 25] for various variants. We enclose here, for the sake of completeness, concise arguments shortening the reasoning of [11] and its anisotropic extension [8], making use of the properties of convolution from Lemma 2.2. Let us recall how broad the class \(\mathcal{Z}^{\varkappa}\) is, as shown in Figure 1 and in Remark 1.2. Note that choosing arbitrary \(p\) and \(q\), one can easily see the power of the \(\mathcal{Z}^{\varkappa}\)-scale for all \(\varkappa>0\) in the proof of Theorem 3 (precisely in (19)). This result of ours is not more powerful than [8, Theorem 2], but it conveys that there is no reason for \(p\) and \(q\) to be close as long as one can adjust the decay of the weight to compensate. The absence of the Lavrentiev's phenomenon, stated in Theorem 5, is a consequence of the density of smooth functions via the ideas inspired by [8, 11] applying the Vitali convergence theorem. The same method can be applied to a broad family of functionals, see Section 5 for several examples. The sharpness of the result on the absence of the Lavrentiev's phenomenon of Theorem 1 is confirmed by a counterexample we provide in Section 4. We indicate a domain, a boundary condition, and a weight \(a\in\mathcal{Z}^{\varkappa}\) for \(\varkappa\) outside the good range for the approximation result (Theorem 3), for which the infima of \(\mathcal{F}\) differ. The method is inspired by the two-dimensional checkerboard constructions of Zhikov [35, 36] and their extension in [23], but requires essentially more delicate arguments. In detail, we modify a weight \(a\in C^{0,\alpha}\) from [23] so that \(a^{\frac{1}{\varkappa}}\) is comparable to a Lipschitz function, and hence \(a\in\mathcal{Z}^{\varkappa}\). See the definition of the weight \(a\) in (38) and its property in Lemma 4.1. This small change is surprisingly powerful and justifies the use of the \(\mathcal{Z}^{\varkappa}\)-scale for variational problems involving double phase functionals with arbitrarily far powers.
**Organization of the paper**. In Section 2, we provide information on the notation and basic tools used in the proofs of our results. Section 3 is devoted to proofs of results concerning density of smooth functions and the absence of the Lavrentiev's phenomenon, while in Section 4, sharpness of these results is discussed. Finally, in Section 5, we comment on generalizations of our results allowing considering types of functionals other than (1).
## 2 Preliminaries
We denote by \(B_{r}(x)\) a ball centred in \(x\), with radius \(r\).
Given a set \(U\subseteq\mathbb{R}^{n}\), \(\gamma\in(0,1]\), and a function \(f:U\to\mathbb{R}\), we denote
\[[f]_{0,\gamma}\coloneqq\sup_{x,y\in U,\,x\neq y}\frac{|f(x)-f(y)|}{|x-y|^{\gamma}}\,, \tag{9}\]
which is the Holder seminorm of the function \(f\). The set \(U\) will always be clear from the context.
We say that two real functions \(f,g\) are comparable if there exists a constant \(c>0\) such that \(f\leq g\leq cf\). We moreover say that a function \(f:[0,\infty)\to[0,\infty)\) satisfies the \(\Delta_{2}\)-condition if there exists a constant \(c>0\) such that \(f(2t)\leq cf(t)\) for any \(t\). We denote such a situation by \(f\in\Delta_{2}\).
By \(\mathcal{H}^{n-1}\) we denote the classical Hausdorff measure of dimension \(n-1\), defined on \(\mathbb{R}^{n}\).
Let us introduce some basic facts concerning spaces of Musielak-Orlicz type [15; 16; 25]. With the function \(M:\Omega\times[0,\infty)\to\mathbb{R}\), defined by
\[M(x,t)=t^{p}+a(x)t^{q}\qquad\text{for}\ \ p,q>1\,,\ \ 0\leq a\in L^{\infty}\,,\]
we can define the corresponding Musielak-Orlicz space by
\[L_{M}(\Omega)=\left\{\xi:\Omega\to\mathbb{R}^{n}\text{ measurable and such that }\int_{\Omega}M(x,|\xi(x)|)\,dx<\infty\right\}\]
equipped with the Luxemburg norm
\[\|\xi\|_{L_{M}(\Omega)}:=\inf\left\{\lambda>0:\int_{\Omega}M\left(x,\frac{\xi (x)}{\lambda}\right)dx\leq 1\right\}.\]
Related Sobolev space \(W(\Omega)\), defined in (3), is considered with a norm
\[\|v\|_{W(\Omega)}:=\|v\|_{L^{1}(\Omega)}+\|\nabla v\|_{L_{M}(\Omega)}\,.\]
We say that a sequence \((\xi_{k})_{k}\) converges to \(\xi\) modularly in \(L_{M}(\Omega)\), if
\[\int_{\Omega}M(x,|\xi_{k}-\xi|)\,dx\xrightarrow{k\to\infty}0\,, \tag{10}\]
and we denote it by \(\xi_{k}\xrightarrow[k\to\infty]{M}\xi\). We mention the Generalized Vitali Convergence Theorem from [16, Theorem 3.4.4], stating that
\[\begin{array}{ll}\xi_{k}\to\xi\text{ modularly}&\Longleftrightarrow\text{ the family }\{M(\cdot,|\xi_{k}(\cdot)|)\}_{k}\text{ is uniformly integrable}\\ &\text{and }(\xi_{k})_{k}\text{ converges in measure to }\xi.\end{array} \tag{11}\]
Let us recall the space \(W(\Omega)\) defined in (3). By the choice of \(M\), it is equivalent to say that the sequence \((u_{k})_{k}\subset W(\Omega)\) converges to \(u\in W(\Omega)\) in the strong topology of \(W(\Omega)\) and that
\[u_{k}\xrightarrow[k\to\infty]{L^{1}}u\text{ in }L^{1}(\Omega)\quad\text{and} \quad\nabla u_{k}\xrightarrow[k\to\infty]{M}\nabla u\text{ modularly}. \tag{12}\]
Let us mention a simple lemma, following from the Lebesgue Dominated Convergence Theorem.
**Lemma 2.1**.: _If \(T_{k}(x)=\min\{k,\max\{-k,x\}\}\) for \(k>0\) and \(x\in\mathbb{R}\), \(M(x,t)=t^{p}+a(x)t^{q}\), and \(\varphi\in W(\Omega)\), then \(T_{k}\varphi\to\varphi\) in \(W(\Omega)\) as \(k\to\infty\)._
We introduce the approximation method by convolution with shrinking. This method is of use in many papers concerning the absence of the Lavrentiev's phenomenon and density of smooth functions in Musielak-Orlicz-Sobolev spaces, see [1; 7; 8; 11].
Let us fix \(n,m\in\mathbb{N}\) and let \(U\subset\mathbb{R}^{n}\) be a bounded star-shaped domain with respect to a ball \(B(x_{0},R)\). Define \(\kappa_{\delta}=1-\frac{\delta}{R}\). Moreover, let \(\rho_{\delta}\) be a standard regularizing kernel on \(\mathbb{R}^{n}\), that is \(\rho_{\delta}(x)=\rho(x/\delta)/\delta^{n}\), where \(\rho\in C^{\infty}(\mathbb{R}^{n})\), \(\operatorname{supp}\rho\Subset B(0,1)\) and \(\int_{\mathbb{R}^{n}}\rho(x)\,dx=1\), \(\rho(x)=\rho(-x)\), such that \(0\leq\rho\leq 1\). Then for any measurable function \(v:\mathbb{R}^{n}\to\mathbb{R}^{m}\), we define the function \(S_{\delta}v:\mathbb{R}^{n}\to\mathbb{R}^{m}\) by
\[S_{\delta}v(x):=\int_{U}\rho_{\delta}(x-y)v\left(x_{0}+\frac{y-x_{0}}{\kappa_ {\delta}}\right)\,dy=\int_{B_{\delta}(0)}\rho_{\delta}(y)v\left(x_{0}+\frac{x -y-x_{0}}{\kappa_{\delta}}\right)\,dy\,. \tag{13}\]
By direct computations, one can show that \(S_{\delta}v\) has a compact support in \(U\) for \(\delta\in(0,R/4)\). Moreover, we observe that for \(v\in W(U)\) it holds that
\[\nabla S_{\delta}v=\tfrac{1}{\kappa_{\delta}}S_{\delta}(\nabla v)\,. \tag{14}\]
We introduce other useful properties of this approximation in the following lemmas.
**Lemma 2.2** (Lemma 3.1 in [8]).: _If \(v\in L^{1}(U)\), then \(S_{\delta}v\) converges to \(v\) in \(L^{1}(U)\), and so in measure, as \(\delta\to 0\)._
**Lemma 2.3** (Lemma 3.3 in [8]).: _Let \(v\in W^{1,1}_{0}(U)\), where \(U\) is a star-shaped domain with respect to a ball \(B(x_{0},R)\). It holds that_
* _if_ \(v\in L^{\infty}(U)\)_, then_ \[\|\nabla S_{\delta}(v)\|_{L^{\infty}}\leq\delta^{-1}\|v\|_{L^{\infty}}\|\nabla \rho\|_{L^{1}}\,;\] (15)
* _if_ \(v\in C^{0,\gamma}(U)\)_,_ \(\gamma\in(0,1]\)_, then_ \[\|\nabla S_{\delta}(v)\|_{L^{\infty}}\leq\frac{\delta^{\gamma-1}}{\kappa_{ \delta}^{\gamma}}[v]_{0,\gamma}\|\nabla\rho\|_{L^{1}}\,.\] (16)
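Before turning to the approximation theorem, the following one-dimensional numerical sketch (ours, purely illustrative) shows what the operator \(S_{\delta}\) of (13) does: the function is first dilated towards the centre of the ball and then mollified at scale \(\delta\), so that the output is smooth and compactly supported in \(U\).

```python
import numpy as np

# Illustration of the shrinking convolution (13) on U = (-R, R) in dimension one,
# a domain star-shaped with respect to a ball centred at x0 = 0.
R = 1.0

def rho(t):
    """Smooth bump supported in (-1, 1) (unnormalized)."""
    return np.where(np.abs(t) < 1, np.exp(-1.0 / np.clip(1.0 - t**2, 1e-12, None)), 0.0)

def S_delta(v, x, delta, n_quad=2001):
    """Numerically approximate S_delta v(x) = int rho_delta(y) v((x - y) / kappa_delta) dy."""
    kappa = 1.0 - delta / R
    y = np.linspace(-delta, delta, n_quad)
    dy = y[1] - y[0]
    w = rho(y / delta) / delta
    w = w / (w.sum() * dy)                      # normalize the kernel numerically
    return float(np.sum(w * v((x - y) / kappa)) * dy)

v = lambda t: np.maximum(1.0 - np.abs(t), 0.0)  # a hat function in W_0^{1,1}((-1, 1))
print([round(S_delta(v, x, delta=0.1), 3) for x in (-0.95, -0.5, 0.0, 0.5, 0.95)])
```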
## 3 Approximation and the absence of the Lavrentiev's phenomenon
When \(q>p>1\), no matter how far apart \(q\) and \(p\) are, to ensure the approximation properties of the double phase version of the Sobolev space it suffices to control the decay of the weight close to the phase transition. In fact, it is enough for \(a^{\frac{1}{q-p}}\) to be comparable to a Lipschitz continuous function. In Section 3.1 we prove the density result, which is applied in Section 3.2 to get the absence of the Lavrentiev's phenomenon.
### Approximation
In this section, we make use of the convolution with shrinking, to establish the density of smooth functions in the energy space \(W\) defined in (3). The result reads as follows.
**Theorem 3** (Density of smooth functions).: _Let \(\Omega\) be a bounded Lipschitz domain in \(\mathbb{R}^{n}\), \(1<p<q<\infty\), \(\varkappa>0\), and \(a:\Omega\to[0,\infty)\) be such that \(a\in\mathcal{Z}^{\varkappa}(\Omega)\). Then the following assertions hold true._
1. _If_ \(\varkappa\geq q-p\)_, then for any_ \(\varphi\in W(\Omega)\) _there exists a sequence_ \((\varphi_{\delta})_{\delta}\subset C_{c}^{\infty}(\Omega)\)_, such that_ \(\varphi_{\delta}\to\varphi\) _in_ \(W(\Omega)\)_._
2. _Let_ \(\gamma\in(0,1]\)_. If_ \(\varkappa\geq(q-p)(1-\gamma)\)_, then for any_ \(\varphi\in W(\Omega)\cap C^{0,\gamma}(\Omega)\) _there exists a sequence_ \((\varphi_{\delta})_{\delta}\subset C_{c}^{\infty}(\Omega)\)_, such that_ \(\varphi_{\delta}\to\varphi\) _in_ \(W(\Omega)\)_._
_Moreover, in both above cases, if \(\varphi\in L^{\infty}(\Omega)\), then there exists \(c=c(\Omega)>0\), such that \(\|\varphi_{\delta}\|_{L^{\infty}(\Omega)}\leq c\|\varphi\|_{L^{\infty}(\Omega)}\)._
Proof.: Let us at first notice that by Lemma 2.1, we have that \(W(\Omega)\cap L^{\infty}(\Omega)\) is dense in \(W(\Omega)\). Therefore, for the assertion \((i)\), it suffices to consider the density of \(C_{c}^{\infty}(\Omega)\) in \(W(\Omega)\cap L^{\infty}(\Omega)\). Let us assume that in case of \((i)\), we have \(\gamma=0\). We shall prove the claims \((i)\) and \((ii)\) simultaneously. To this aim, let us take any \(\varphi\in W(\Omega)\cap L^{\infty}(\Omega)\) in the case of \(\gamma=0\) and \(\varphi\in W(\Omega)\cap C^{0,\gamma}(\Omega)\) otherwise.
At first, let us assume that \(\Omega\) is a star-shaped domain with respect to a ball centred in zero and with radius \(R>0\), that is \(B(0,R)\). Recall the definition of \(S_{\delta}\varphi\), given in (13), where we take \(U=\Omega\), \(x_{0}=0\), and \(\delta<R/4\). Our aim now is to prove that \(S_{\delta}\varphi\) converges to \(\varphi\) in \(W(\Omega)\). Due to (12), it is enough to show that \(S_{\delta}\varphi\to\varphi\) in \(L^{1}\) and \(\nabla S_{\delta}\varphi\xrightarrow{M}\nabla\varphi\) modularly in \(L_{M}\). We observe that by (14) and Lemma 2.2, we have the first convergence as well as the fact that \(\nabla(S_{\delta}(\varphi))\) converges to \(\nabla\varphi\) in measure. Therefore, by (11), it suffices to prove that
\[\text{the family }\big{\{}|\nabla(S_{\delta}\varphi)(\cdot))|^{p}+a(\cdot)| \nabla(S_{\delta}\varphi)(\cdot))|^{q}\big{\}}_{\delta}\text{ is uniformly integrable.} \tag{17}\]
Observe that by Lemma 2.3, for sufficiently small \(\delta>0\), there exist a constant \(C_{S}>0\), independent of \(\delta\), such that
\[\|\nabla(S_{\delta}\varphi)\|_{L^{\infty}}\leq C_{S}\delta^{\gamma-1}\,. \tag{18}\]
Indeed, if \(\gamma=0\), then by using assertion (15) and the fact that \(\varphi\in L^{\infty}(\Omega)\), we can set \(C_{S}\coloneqq\|\varphi\|_{L^{\infty}}\|\nabla\rho\|_{L^{1}}\) in (18). In the case of \(\gamma\in(0,1]\), (16) provides that \(\|\nabla S_{\delta}(\varphi)\|_{L^{\infty}}\leq\frac{\delta^{\gamma-1}}{\kappa_{\delta}^{\gamma}}[\varphi]_{0,\gamma}\|\nabla\rho\|_{L^{1}}\). As \(\varphi\in C^{0,\gamma}(\Omega)\) and \(\kappa_{\delta}\xrightarrow{\delta\to 0}1\), we obtain inequality (18) with the constant \(C_{S}\coloneqq 2[\varphi]_{0,\gamma}\|\nabla\rho\|_{L^{1}}\) for sufficiently small \(\delta\). We therefore have (18) for all \(\gamma\in[0,1]\).
As \(a\in\mathcal{Z}^{\varkappa}\), there exists a constant \(C_{a}>1\) such that for any \(x,y\in\Omega\) we have \(a(x)\leq C_{a}(a(y)+|x-y|^{\varkappa})\). Let us take any \(x,y\in\Omega,\tau>0,\delta\in(0,1)\) such that \(|x-y|\leq\tau\delta\). We have
\[|\nabla S_{\delta}(\varphi)(x)|^{p}+a(x)|\nabla S_{\delta}(\varphi )(x)|^{q} =|\nabla S_{\delta}(\varphi)(x)|^{p}(1+a(x)|\nabla S_{\delta}( \varphi)(x)|^{q-p})\] \[\leq|\nabla S_{\delta}(\varphi)(x)|^{p}(1+C_{a}(a(y)+\tau^{ \varkappa}\delta^{\varkappa})|\nabla S_{\delta}(\varphi)(x)|^{q-p})\] \[\leq C_{a}|\nabla S_{\delta}(\varphi)(x)|^{p}(1+a(y)|\nabla S_{ \delta}(\varphi)(x)|^{q-p}+\tau^{\varkappa}\delta^{\varkappa}|\nabla S_{ \delta}(\varphi)(x)|^{q-p})\,. \tag{19}\]
By the inequality (18), we obtain that
\[\delta^{\varkappa}|\nabla S_{\delta}(\varphi)(x)|^{q-p}\leq C_{S}^{q-p}\delta^{ \varkappa}\delta^{(q-p)(\gamma-1)}\leq C_{S}^{q-p}\,, \tag{20}\]
where in the last inequality we used that \(\delta\in(0,1)\) and \(\varkappa+(q-p)(\gamma-1)\geq 0\). By (19) and (20), we have that there exists a constant \(C_{\tau}>0\), not depending on \(\delta\), such that
\[|\nabla S_{\delta}(\varphi)(x)|^{p}+a(x)|\nabla S_{\delta}(\varphi)(x)|^{q} \leq C_{\tau}\left(|\nabla S_{\delta}(\varphi)(x)|^{p}+\left(\inf_{z\in B_{ \tau\delta}(x)}a(z)\right)|\nabla S_{\delta}(\varphi)(x)|^{q}\right)\,. \tag{21}\]
Let us recall (14), that is \(\nabla S_{\delta}(\varphi)=\frac{1}{\kappa_{\delta}}S_{\delta}(\nabla\varphi)\). By using Jensen's inequality in conjunction with the fact that \(\kappa_{\delta}\geq 1/2\) for sufficiently small \(\delta\), we may write
\[|\nabla S_{\delta}(\varphi)(x)|^{p} =\frac{1}{\kappa_{\delta}^{p}}\left|\int_{B_{\delta}(0)}\rho_{ \delta}(y)(\nabla\varphi)((x-y)/\kappa_{\delta})\,dy\right|^{p}\] \[\leq 2^{p}\int_{B_{\delta}(0)}\rho_{\delta}(y)|(\nabla\varphi)(( x-y)/\kappa_{\delta})|^{p}\,dy=2^{p}S_{\delta}(|\nabla\varphi(\cdot)|^{p})(x) \tag{22}\]
for sufficiently small \(\delta>0\). Analogously, it holds that
\[\left(\inf_{z\in B_{\tau\delta}(x)}a(z)\right)|\nabla S_{\delta} (\varphi)(x)|^{q} \leq 2^{q}\int_{B_{\delta}(0)}\rho_{\delta}(y)\left(\inf_{z\in B_{ \tau\delta}(x)}a(z)\right)|(\nabla\varphi)((x-y)/\kappa_{\delta})|^{q}\,dy\] \[\leq 2^{q}\int_{B_{\delta}(0)}\rho_{\delta}(y)a((x-y)/\kappa_{ \delta})|(\nabla\varphi)((x-y)/\kappa_{\delta})|^{q}\,dy=2^{q}S_{\delta}(a( \cdot)|\nabla\varphi(\cdot)|^{q})(x)\,, \tag{23}\]
where \(\tau\) is fixed such that for sufficiently small \(\delta>0\) we have
\[|(x-y)/\kappa_{\delta}-x|\leq\frac{|y|}{\kappa_{\delta}}+\frac{1-\kappa_{ \delta}}{\kappa_{\delta}}|x|\leq\frac{\delta}{\kappa_{\delta}}+\frac{\delta} {2R\kappa_{\delta}}(\text{diam }\Omega)\leq\tau\delta\,.\]
Observe that by (21) and by estimates (22) and (23), we have
\[M(x,|\nabla S_{\delta}\varphi(x)|)\leq 2^{q}C_{\tau}(S_{\delta}(|\nabla\varphi( \cdot)|^{p})(x)+S_{\delta}(a(\cdot)|\nabla\varphi(\cdot)|^{q})(x))=2^{q}C_{ \tau}S_{\delta}\left(M\left(\cdot,|\nabla\varphi(\cdot)|\right)\right)(x)\,. \tag{24}\]
The fact that \(\varphi\in W(\Omega)\) implies that \(M(\cdot,|\nabla\varphi(\cdot)|)\in L^{1}(\Omega)\). Therefore, Lemma 2.2 gives us that the sequence \((S_{\delta}\left(M\left(\cdot,|\nabla\varphi(\cdot)|\right)\right))_{\delta}\) converges in \(L^{1}\). By the Vitali Convergence Theorem, it means that the family \(\{S_{\delta}\left(M\left(\cdot,|\nabla\varphi(\cdot)|\right)\right)\}_{\delta}\) is uniformly integrable. Using the estimate (24), we deduce that the family \(\{M(\cdot,|\nabla(S_{\delta}\varphi)(\cdot)|)\}_{\delta}\) is uniformly integrable, which is (17). Therefore, the proof is completed for \(\Omega\) being a bounded star-shaped domain with respect to a ball centred in zero.
To prove the result for \(\Omega\) being star-shaped with respect to a ball centred in point other than zero, one may translate the problem, obtaining the set being a star-shaped domain with respect to a ball centred in zero. Then, proceeding with the proof above and reversing translation of \(\Omega\) gives the desired result.
Now we shall focus on the case of \(\Omega\) being an arbitrary bounded Lipschitz domain. By [16, Lemma 8.2], a set \(\overline{\Omega}\) can be covered by a finite family of sets \(\{U_{i}\}_{i=1}^{K}\) such that each \(\Omega_{i}:=\Omega\cap U_{i}\) is a star-shaped domain with respect to some ball. Then \(\Omega=\bigcup_{i=1}^{K}\Omega_{i}\,\). By [34, Proposition 2.3, Chapter 1], there exists the partition of unity related to the partition \(\{U_{i}\}_{i=1}^{K}\), i.e., the family \(\{\theta_{i}\}_{i=1}^{K}\) such that
\[0\leq\theta_{i}\leq 1,\quad\theta_{i}\in C_{c}^{\infty}(U_{i}),\quad\sum_{i=1}^{K} \theta_{i}(x)=1\ \ \text{for}\ \ x\in\Omega\,.\]
By the previous paragraph, for every \(i=1,2,\ldots,K\), as \(\Omega_{i}\) is a star-shaped domain with respect to some ball, and \(\theta_{i}\varphi\in W(\Omega_{i})\), there exists a sequence \((\varphi_{\delta}^{i})_{\delta}\) such that \(\varphi_{\delta}^{i}\xrightarrow{\delta\to 0}\theta_{i}\varphi\) in \(W(\Omega_{i})\). Let us now consider the sequence \((I_{\delta})_{\delta}\) defined as
\[I_{\delta}\coloneqq\sum_{i=1}^{K}\varphi_{\delta}^{i}.\]
We shall show that \(I_{\delta}\to\varphi\) in \(W(\Omega)\). As we have that \(\varphi_{\delta}^{i}\to\theta_{i}\varphi\) in \(L^{1}\) for every \(i\), we have \(I_{\delta}\to\varphi\) in \(L^{1}\). It suffices to prove that \(\nabla I_{\delta}\to\nabla\varphi\) in \(L_{M}(\Omega)\). Since the sequence \((\nabla\varphi_{\delta}^{i})_{\delta}\) converges to \(\nabla(\theta_{i}\varphi)\) in measure and \(\sum_{i=1}^{K}\nabla(\theta_{i}\varphi)=\nabla\varphi\), it holds that
\[\left(\nabla I_{\delta}\right)_{\delta}\to\nabla\varphi\text{ in measure}. \tag{25}\]
Moreover, for any \(x\in\Omega\) we have that
\[\left|\nabla I_{\delta}(x)\right|^{p}+a(x)\left|\nabla I_{\delta} (x)\right|^{q} \leq\sum_{i=1}^{K}\left(K^{p-1}|\nabla(\varphi_{\delta}^{i})(x)|^{ p}+K^{q-1}a(x)|\nabla(\varphi_{\delta}^{i})(x)|^{q}\right)\] \[\leq K^{q-1}\sum_{i=1}^{K}\left(|\nabla(\varphi_{\delta}^{i})(x)| ^{p}+a(x)|\nabla(\varphi_{\delta}^{i})(x)|^{q}\right)\,. \tag{26}\]
As for all \(i=1,2,\ldots,K\), we have that \((\varphi_{\delta}^{i})_{\delta}\) converges in \(W(\Omega_{i})\), it follows from (26) that
\[\text{the family}\quad\left\{\left|\sum_{i=1}^{K}\nabla(\varphi_{ \delta}^{i})(\cdot)\right|^{p}+a(\cdot)\left|\sum_{i=1}^{K}\nabla(\varphi_{ \delta}^{i})(\cdot)\right|^{q}\right\}_{\delta}\quad\text{ is uniformly integrable}.\]
This together with (25) and (11), as well as the fact that \(I_{\delta}\to\varphi\) in \(L^{1}\), gives us the result for an arbitrary bounded Lipschitz domain \(\Omega\).
### Absence of the Lavrentiev's phenomenon
As a direct consequence of Theorem 3 we infer the absence of the Lavrentiev's phenomenon. We start with a simple formulation for the double phase functional (1), reading as follows.
**Theorem 4** (Absence of the Lavrentiev's phenomenon for a model functional).: _Suppose \(\Omega\subset\mathbb{R}^{n}\) is a bounded Lipschitz domain, \(1<p<q<\infty\), \(\varkappa>0\), \(a:\Omega\to[0,\infty)\), \(\mathcal{F}\) is given by (1), and \(W(\Omega)\) is defined in (3). Assume that \(u_{0}\) satisfies \(\mathcal{F}[u_{0}]<\infty\) and \(a\in\mathcal{Z}^{\varkappa}(\Omega)\). Then the following assertions hold true._
1. _If_ \(\varkappa\geq q-p\)_, then_ \[\inf_{v\in u_{0}+W(\Omega)}\mathcal{F}[v]=\inf_{w\in u_{0}+C_{c}^{\infty}( \Omega)}\mathcal{F}[w]\,.\] (27)
2. _Let_ \(\gamma\in(0,1]\)_. If_ \(\varkappa\geq(q-p)(1-\gamma)\)_, then_ \[\inf_{v\in u_{0}+W(\Omega)\cap C^{0,\gamma}(\Omega)}\mathcal{F}[v]=\inf_{w\in u _{0}+C_{c}^{\infty}(\Omega)}\mathcal{F}[w]\,.\] (28)
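To illustrate the interpolation in assertion \((ii)\) with a worked computation of ours (not an additional result): if the weight merely satisfies \(a\in\mathcal{Z}^{1}(\Omega)\), i.e. it is comparable to a Lipschitz continuous function, and the competitors are a priori \(C^{0,1/2}\)-regular, then (28) holds whenever
\[q\leq p+\frac{\varkappa}{1-\gamma}=p+\frac{1}{1-\tfrac{1}{2}}=p+2\,,\]
which already doubles the admissible distance \(q-p\leq 1\) available in assertion \((i)\) without the a priori Holder regularity.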
The above theorem is a special case of the following more general result. Let us consider the following variational functional
\[\mathcal{G}[u]:=\int_{\Omega}G(x,u,\nabla u)\,dx\,, \tag{29}\]
over an open and bounded set \(\Omega\subset\mathbb{R}^{n}\), \(n\geq 1\), where \(G:\Omega\times\mathbb{R}\times\mathbb{R}^{n}\to\mathbb{R}\) is merely continuous with respect to the second and the third variable. We suppose that there exist constants \(0<\nu<1<L\) and a nonnegative \(\Lambda\in L^{1}(\Omega)\) such that
\[\nu\left(|\xi|^{p}+a(x)|\xi|^{q}\right)\leq G(x,z,\xi)\leq L\left(|\xi|^{p}+a(x )|\xi|^{q}+\Lambda(x)\right),\qquad\text{for all }\ x\in\Omega,\ z\in\mathbb{R},\ \xi\in\mathbb{R}^{n}\,. \tag{30}\]
**Theorem 5** (Absence of Lavrentiev's phenomenon for general functionals).: _Suppose \(\Omega\subset\mathbb{R}^{n}\) is a bounded Lipschitz domain, \(1<p<q<\infty\), \(\varkappa>0\), \(a:\Omega\to[0,\infty)\), \(\mathcal{G}\) is given by (29), and \(W(\Omega)\) is defined in (3). Assume that \(u_{0}\) satisfies \(\mathcal{G}[u_{0}]<\infty\) and \(a\in\mathcal{Z}^{\varkappa}(\Omega)\). Then the following assertions hold true._
* _If_ \(\varkappa\geq q-p\)_, then_ \[\inf_{v\in u_{0}+W(\Omega)}\mathcal{G}[v]=\inf_{w\in u_{0}+C^{\infty}_{c}( \Omega)}\mathcal{G}[w]\,.\] (31)
* _Let_ \(\gamma\in(0,1]\)_. If_ \(\varkappa\geq(q-p)(1-\gamma)\)_, then_ \[\inf_{v\in u_{0}+W(\Omega)\cap C^{0,\gamma}(\Omega)}\mathcal{G}[v]=\inf_{w\in u _{0}+C^{\infty}_{c}(\Omega)}\mathcal{G}[w]\,.\] (32)
Proof.: Since \(C^{\infty}_{c}(\Omega)\subset W(\Omega)\), it holds that \(\inf_{v\in u_{0}+W(\Omega)}\mathcal{G}[v]\leq\inf_{w\in u_{0}+C^{\infty}_{c}( \Omega)}\mathcal{G}[w]\,.\) Let us concentrate on showing the opposite inequality. By the direct method of the calculus of variations, there exists a minimizer, i.e., a function \(u\in W(\Omega)\) such that
\[\mathcal{G}[u_{0}+u]=\inf_{v\in u_{0}+W(\Omega)}\mathcal{G}[v]\,.\]
By assertion (_i_) of Theorem 3, there exists \((u_{k})_{k}\subset C^{\infty}_{c}(\Omega)\) such that \(u_{k}\to u\) in \(W(\Omega)\). Since \(G\) is continuous with respect to the second and the third variable, we infer that
\[G(x,u_{0}(x)+u_{k}(x),\nabla u_{0}(x)+\nabla u_{k}(x))\xrightarrow[k\to\infty ]{}G(x,u_{0}(x)+u(x),\nabla u_{0}(x)+\nabla u(x))\text{ in measure}.\]
We shall now show that
\[\text{the family }\big{\{}G(x,u_{0}(x)+u_{k}(x),\nabla u_{0}(x)+\nabla u_{k}(x ))\big{\}}_{k\geq 1}\text{ is uniformly integrable}. \tag{33}\]
By assumption (30), we notice that
\[G(x,u_{0}(x)+u_{k}(x),\nabla u_{0}(x)+ \nabla u_{k}(x))\leq L\left(|\nabla u_{0}(x)+\nabla u_{k}(x)|^{p} +a(x)|\nabla u_{0}(x)+\nabla u_{k}(x)|^{q}\right)+L\Lambda(x)\] \[\leq C\left(|\nabla u_{k}(x)|^{p}+a(x)|\nabla u_{k}(x)|^{q} \right)+C\left(|\nabla u_{0}(x)|^{p}+a(x)|\nabla u_{0}(x)|^{q}\right)+L\Lambda (x)\,,\]
where \(C\) is a positive constant, for every fixed \(k\geq 1\). Note that \(\Lambda\in L^{1}(\Omega)\) and, as \(\mathcal{G}[u_{0}]<\infty\), by (30) we have \(\int_{\Omega}|\nabla u_{0}(x)|^{p}+a(x)|\nabla u_{0}(x)|^{q}\,dx<\infty\). Moreover, since \((u_{k})_{k}\) converges in \(W(\Omega)\), we infer that
\[\text{the family }\{|\nabla u_{k}(x)|^{p}+a(x)|\nabla u_{k}(x)|^{q}\}_{k} \quad\text{is uniformly integrable}.\]
Thus, (33) is justified. In turn, by the Vitali Convergence Theorem, we have that \(\mathcal{G}[u_{0}+u_{k}]\xrightarrow[k\to\infty]{}\mathcal{G}[u_{0}+u]\). Therefore, we have that
\[\inf_{w\in u_{0}+C^{\infty}_{c}(\Omega)}\mathcal{G}[w]\leq\mathcal{G}[u_{0}+u] =\inf_{v\in u_{0}+W(\Omega)}\mathcal{G}[v]\,.\]
Consequently, (31) is proven.
By repeating the same procedure for \(u\in W(\Omega)\cap C^{0,\gamma}(\Omega)\) with the use of Theorem 3_(ii)_ instead of _(i)_, one gets (32).
## 4 Sharpness
Let us recall the energy space \(W(\Omega)\) given by (3). We state in Theorem 1 that the range for the absence of the Lavrentiev's phenomenon is sharp. By sharpness, we mean that if \(p,q\) and \(\varkappa\) are outside the proper range (5), it is possible to find a Lipschitz domain \(\Omega\), a weight \(a\in\mathcal{Z}^{\varkappa}(\Omega)\) and boundary data \(u_{0}\in W(\Omega)\) such that the Lavrentiev's phenomenon occurs. In what follows we consider the double-phase functional \(\mathcal{F}\) defined in (1), that is
\[\mathcal{F}[u]=\int_{\Omega}|\nabla u(x)|^{p}+a(x)|\nabla u(x)|^{q}\,dx\,. \tag{34}\]
With Theorem 6 our aim is to show the occurrence of the Lavrentiev's phenomenon between the spaces \(u_{0}+W(\Omega)\) and \(u_{0}+C_{c}^{\infty}(\Omega)\), whenever \(p,q\) and \(\varkappa\) are as in Theorem 1, \((ii)\). We point out that, for our example, we modify the construction from [23] based on the seminal idea of Zhikov's checkerboard [35; 36].
**Theorem 6** (Sharpness).: _Let \(\mathcal{F}\) be defined by (1) and \(p,q,\varkappa>0\) such that_
\[1<p<n<n+\varkappa<q\,.\]
_Then there exist a Lipschitz domain \(\Omega\), a function \(a\in\mathcal{Z}^{\varkappa}\) and \(u_{0}\in W(\Omega)\) satisfying \(\mathcal{F}[u_{0}]<\infty\), such that_
\[\inf_{v\in u_{0}+W(\Omega)}\mathcal{F}[v]<\inf_{w\in u_{0}+C_{c}^{\infty}( \Omega)}\mathcal{F}[w]\,. \tag{35}\]
In order to show the presence of the Lavrentiev's phenomenon we first define the Lipschitz domain \(\Omega\), the function \(a\) and the boundary data \(u_{0}\). We choose \(\Omega\) as the ball of centre \(0\) and radius \(1\), i.e.,
\[\Omega=B_{1}:=B_{1}(0)\,. \tag{36}\]
Now let us define the following set
\[V:=\left\{x\in B_{1}:\ x_{n}^{2}-\sum_{i=1}^{n-1}x_{i}^{2}>0\right\}\,. \tag{37}\]
Regarding the weight \(a\) we introduce the function \(\ell:\mathbb{R}^{n}\to\mathbb{R}\) via the following formula
\[\ell(x):=\max\left\{x_{n}^{2}-\sum_{i=1}^{n-1}x_{i}^{2},0\right\}|x|^{-1}\,, \qquad x=(x_{1},\ldots,x_{n})\,.\]
The weight is defined as
\[a:=\ell^{\varkappa}\,. \tag{38}\]
Computing the partial derivative of \(\ell\) in \(V\) we get
\[\frac{\partial\ell}{\partial x_{i}}=\begin{cases}-\frac{x_{i}}{|x|^{3}}\Big{(} \sum_{j=1}^{n-1}x_{j}^{2}+3x_{n}^{2}\Big{)}&\text{if}\quad i=1,\ldots,n-1\,,\\ \frac{x_{i}}{|x|^{3}}\Big{(}\sum_{j=1}^{n-1}3x_{j}^{2}+x_{i}^{2}\Big{)}&\text{ if}\quad i=n\,.\end{cases}\]
We can observe that \(\|\nabla\ell\|_{L^{\infty}(B_{1})}\) is finite. In turn, \(\ell\) is Lipschitz continuous and consequently \(a\in\mathcal{Z}^{\varkappa}(B_{1})\). We note that \(\operatorname{supp}a\subset V\), so the set \(V\) includes the whole \(p\)-\(q\)-phase, while the \(p\)-phase is located in \(B_{1}\setminus V\).
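A quick numerical look at this construction in the plane (our own sketch, only meant to visualize the weight (38)) confirms the two claims above: \(a\) vanishes exactly on \(B_{1}\setminus V\), and the finite-difference slopes of \(\ell\) stay bounded, in accordance with its Lipschitz continuity.

```python
import numpy as np

# Numerical look at ell and a = ell^kappa from (38) on B_1 in R^2 (kappa chosen arbitrarily).
kappa = 4.0
xs = np.linspace(-1.0, 1.0, 401)
X1, X2 = np.meshgrid(xs, xs)
r = np.sqrt(X1**2 + X2**2)
ell = np.where(r > 0, np.maximum(X2**2 - X1**2, 0.0) / np.maximum(r, 1e-12), 0.0)
a = np.where(r < 1, ell**kappa, 0.0)

# a vanishes exactly where x_2^2 <= x_1^2, i.e. on the complement of the double cone V:
print(bool(np.all(a[X2**2 <= X1**2] == 0.0)))

# Crude finite-difference check that ell has bounded slope inside B_1:
g2, g1 = np.gradient(np.where(r < 1, ell, 0.0), xs, xs)
print(float(np.max(np.sqrt(g1**2 + g2**2)[r < 0.95])))
```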
Let us state and prove a lemma that we will use in the proof of Theorem 6.
**Lemma 4.1**.: _Let \(a\) be defined by (38) and \(V\) be defined as in (37). Then_
\[r_{1}\coloneqq\int_{V}|x|^{-\frac{q(n-1)}{q-1}}a(x)^{-\frac{1}{q-1}}\,dx\,<\infty\,. \tag{39}\]
Proof.: We use the spherical coordinates. The proof is presented in two cases - for \(n=2\) and \(n>2\).
For \(n=2\) we take
\[x_{1}:=\rho\cos\theta\quad\text{and}\quad x_{2}:=\rho\sin\theta\,,\]
consequently
\[a=\rho^{\varkappa}\max(-\cos 2\theta,0)^{\varkappa},\]
where \(\theta\in[0,2\pi)\). After this change of variables \(V\) is mapped into \(S\coloneqq(0,1)\times\left[(\frac{\pi}{4},\frac{3\pi}{4})\cup(\frac{5\pi}{4}, \frac{7\pi}{4})\right]\), so (39) reads as
\[r_{1}=\int_{S}\rho^{1-\frac{q+\varkappa}{q-1}}\left|\cos\left(2\theta\right) \right|^{-\frac{\varkappa}{q-1}}\,d\rho\,d\theta\,.\]
As \(q>\varkappa+2\), we have \(1-\frac{q+\varkappa}{q-1}>-1\), which implies that \(\int_{0}^{1}\rho^{1-\frac{q+\varkappa}{q-1}}\,d\rho<\infty\). As far as \(\left|\cos\left(2\theta\right)\right|^{-\frac{\varkappa}{q-1}}\) is concerned, we observe at first that over the set that we integrate on, it holds that \(\cos(2\theta)=0\) only for \(\theta=\frac{\pi}{4},\frac{3\pi}{4},\frac{5\pi}{4},\frac{7\pi}{4}\). Therefore, it suffices to prove integrability of \(\left|\cos\left(2\theta\right)\right|^{-\frac{\varkappa}{q-1}}\) near these points. Observe that for sufficiently small \(\theta_{0}>0\) we have
\[\left|\cos\left(2(\theta_{0}+\frac{\pi}{4})\right)\right|\geq 2\theta_{0}\left(1- \frac{2\theta_{0}}{\pi}\right)\geq\theta_{0}\,,\]
which means that for \(\theta=\theta_{0}+\frac{\pi}{4}\) we get
\[\left|\cos\left(2\theta\right)\right|^{-\frac{\varkappa}{q-1}}\leq\left( \theta-\frac{\pi}{4}\right)^{-\frac{\varkappa}{q-1}}\,. \tag{40}\]
Since \(q>\varkappa+1\), we have \(-\frac{\varkappa}{q-1}>-1\), and therefore we have the integrability of \(\left|\cos\left(2\theta\right)\right|^{-\frac{\varkappa}{q-1}}\) near \(\frac{\pi}{4}\) and, by analogy, also at the points \(\frac{3\pi}{4},\frac{5\pi}{4},\frac{7\pi}{4}\). This shows that \(r_{1}\) is finite for \(n=2\).
For \(n>2\) we set
\[x_{1}:=\rho\cos\theta\prod_{k=1}^{n-2}\sin\theta_{k}\,,\quad x_{2}:=\rho\sin \theta\prod_{k=1}^{n-2}\sin\theta_{k}\,,\quad x_{i}:=\rho\cos\theta_{i-2}\prod_{k =i-1}^{n-2}\sin\theta_{k}\,,\text{ for }i\geq 3\]
and so
\[a=\rho^{\varkappa}\max(\cos 2\theta_{n-2},0)^{\varkappa}\,,\]
with \(\rho>0\) and \(\theta_{i}\in[0,\pi]\) for \(i=1,\ldots,n-2\). We observe that \(V\) is mapped to \(S=(0,1)\times(0,2\pi)\times(0,\pi)^{n-3}\times\big{(}\big{(}0,\frac{\pi}{4} \big{)}\cup\big{(}\frac{3\pi}{4},\pi\big{)}\big{)}\), that is, \(\theta_{n-2}\in\big{(}0,\frac{\pi}{4}\big{)}\cup\big{(}\frac{3\pi}{4},\pi \big{)}\), and the modulus of the Jacobian determinant of this change of variables may be estimated by \(\rho^{n-1}\). Therefore, we can estimate
\[r_{1}\leq\int_{S}\rho^{n-1+\frac{q(1-n)-\varkappa}{q-1}}\,|\cos(2\theta_{n-2}) |^{-\frac{\varkappa}{q-1}}\,\,d\rho\,d\theta_{n-2}\,.\]
As \(q>\varkappa+n\), it follows that \(n-1+\frac{q(1-n)-\varkappa}{q-1}>-1\), and therefore, \(\int_{0}^{1}\rho^{n-1+\frac{q(1-n)-\varkappa}{q-1}}\,d\rho<\infty\). Using analogous estimates as in (40), one may also prove the integrability of \((\cos(2\theta_{n-2}))^{-\frac{\varkappa}{q-1}}\) on \(\big{(}0,\frac{\pi}{4}\big{)}\cup\big{(}\frac{3\pi}{4},\pi\big{)}\), obtaining the finiteness of \(r_{1}\) in the case \(n\geq 3\).
As far as the boundary data is concerned we first define a function \(u_{*}\) and, after we establish some of its properties, we shall find \(u_{0}\) such that \(u_{*}\in(u_{0}+W(B_{1}))\), but \(u_{*}\not\in\overline{(u_{0}+C_{c}^{\infty}(B_{1}))}^{W}\). We set
\[u_{*}(x)\coloneqq\begin{cases}\sin(2\theta)&\text{if}\quad 0\leq\theta\leq \frac{\pi}{4}\,,\\ 1&\text{if}\quad\frac{\pi}{4}\leq\theta\leq\frac{3\pi}{4}\,,\\ \sin(2\theta-\pi)&\text{if}\quad\frac{3\pi}{4}\leq\theta\leq\frac{5\pi}{4}\,, \\ -1&\text{if}\quad\frac{5\pi}{4}\leq\theta\leq\frac{7\pi}{4}\,,\\ \sin(2\theta)&\text{if}\quad\frac{7\pi}{4}\leq\theta\leq 2\pi\end{cases} \tag{41}\]
for \(n=2\), and as
\[u_{*}(x)\coloneqq\begin{cases}1&\text{if}\quad 0\leq\theta_{n-2}\leq \frac{\pi}{4}\,,\\ \sin(2\theta_{n-2})&\text{if}\quad\frac{\pi}{4}\leq\theta_{n-2}\leq\frac{3 \pi}{4}\,,\\ -1&\text{if}\quad\frac{3\pi}{4}\leq\theta_{n-2}\leq\pi\,,\end{cases} \tag{42}\]
for \(n\geq 3\). The boundary data \(u_{0}\) is determined by the following expression
\[u_{0}(x):=t_{0}|x|^{2}u_{*}(x)\,, \tag{43}\]
where \(t_{0}>0\) will be chosen later, in (44). We have the following lemma.
**Lemma 4.2**.: _The function \(u_{*}\) belongs to \(u_{0}+W(B_{1})\). In particular_
\[r_{2}\coloneqq\int_{B_{1}}|\nabla u_{*}(x)|^{p}\,dx<\infty\,.\]
Proof.: We start observing that \(\operatorname{supp}a\subset V\) and \(\nabla u_{*}\equiv 0\) in \(\operatorname{supp}a\), i.e.,
\[\int_{B_{1}}|\nabla u_{*}(x)|^{p}+a(x)|\nabla u_{*}(x)|^{q}\,dx=\int_{B_{1}}| \nabla u_{*}(x)|^{p}\,dx=r_{2}\,.\]
To justify that \(r_{2}\) is finite, we notice that using spherical coordinates for \(n=2\) one gets
\[r_{2}=\int_{0}^{1}\rho^{1-p}\,d\rho\left[\int_{0}^{\frac{\pi}{4}}|2\cos(2\theta)|^{p}\, d\theta+\int_{\frac{3\pi}{4}}^{\frac{5\pi}{4}}|2\cos(2\theta-\pi)|^{p}\,d \theta+\int_{\frac{7\pi}{4}}^{2\pi}|2\cos(2\theta)|^{p}\,d\theta\right]<\infty\,,\]
whereas when \(n>2\), then
\[r_{2}=\int_{0}^{1}\int_{0}^{2\pi}\int_{0}^{\pi}|\det J|\,d\rho\,d\theta\prod_{i =1}^{n-3}\,d\theta_{i}\int_{0}^{\pi}|2\cos(2\theta_{n-2}-\pi)|^{p}\,d\theta_{n -2}<\infty\,,\]
where \(J\) is the Jacobian matrix of the spherical coordinate transformation. Now, since \(p<n\) we can apply the Sobolev embedding theorem to obtain \(u_{*}\in L^{p}(B_{1})\). Then \(u_{*}\in W^{1,1}(B_{1})\) and \(\mathcal{F}[u_{*}]<\infty\), namely \(u_{*}\in W(B_{1})\).
We take
\[t_{0}>\left[r_{2}\left(\frac{q}{r_{3}}\right)^{q}\left(\frac{r_{1}}{q-1} \right)^{q-1}\right]^{\frac{1}{q-p}}\,, \tag{44}\]
with \(r_{1}\) from Lemma 4.1, \(r_{2}\) from Lemma 4.2, and
\[r_{3}\coloneqq\mathcal{H}^{n-1}(\overline{V}\cap\partial B_{1})\,. \tag{45}\]
Now let us state the following observation made in [23]. The proof consists of calculations with the spherical coordinates in which Fubini's theorem and Jensen's inequality are used, see [23, p. 17] for details.
**Lemma 4.3**.: _For any function \(w\in u_{0}+C_{0}^{\infty}(B_{1})\) it holds_
\[t_{0}\mathcal{H}^{n-1}(\overline{V}\cap\partial B_{1})\leq\int_{V}\frac{1}{|x |^{n-1}}\left|\left\langle\frac{x}{|x|},\nabla w(x)\right\rangle\right|\,dx\,,\]
_for \(t_{0}\) as in (44) and \(u_{0}\) as in (43)._
Now we are ready to prove the theorem.
Proof of Theorem 6.: Bearing in mind the definition of \(u_{*}\) in (41)-(42) and of \(u_{0}\) in (43) we start observing that
\[\inf_{v\in u_{0}+W(B_{1})}\mathcal{F}[v]\leq\mathcal{F}[t_{0}u_{ *}] =t_{0}^{p}\int_{B_{1}}|\nabla u_{*}(x)|^{p}\,dx+t_{0}^{q}\int_{B_{1 }}a(x)|\nabla u_{*}(x)|^{q}\,dx\] \[=t_{0}^{p}\int_{B_{1}}|\nabla u_{*}(x)|^{p}\,dx=t_{0}^{p}r_{2}\,, \tag{46}\]
which is finite by Lemma 4.2. Let us fix an arbitrary \(w\in u_{0}+C_{0}^{\infty}(B_{1})\) and \(\lambda>0\). In order to estimate \(\mathcal{F}[w]\) from below, we notice that Lemma 4.3 together with Young's inequality and Lemma 4.1 leads to
\[r_{3}\lambda t_{0} \leq\int_{V}\left(\frac{\lambda}{|x|^{n-1}}\frac{1}{a(x)}\right) \left|\left\langle\frac{x}{|x|},\nabla w(x)\right\rangle\right|a(x)\,dx\] \[\leq r_{1}\lambda^{\frac{q}{q-1}}+\int_{V}a(x)|\nabla w(x)|^{q}\, dx\,.\]
Consequently,
\[r_{3}\lambda t_{0}\leq r_{1}\lambda^{\frac{q}{q-1}}+\mathcal{F}[w].\]
Then for any \(w\in u_{0}+C_{0}^{\infty}(B_{1})\) it holds
\[\mathcal{F}[w]\geq r_{1}\sup_{\lambda>0}\left(\lambda t_{0}\frac{r_{3}}{r_{1}}- \lambda^{\frac{q}{q-1}}\right)=r_{1}\sup_{\lambda\in\mathbb{R}}\left(\lambda t _{0}\frac{r_{3}}{r_{1}}-|\lambda|^{\frac{q}{q-1}}\right)=r_{1}\left(\frac{(q-1 )t_{0}r_{3}}{qr_{1}}\right)^{q}\frac{1}{q-1}.\]
Now, bearing in mind (44) and using (46) we get
\[\inf_{w\in u_{0}+C_{0}^{\infty}(B_{1})}\mathcal{F}[w]\geq\left(\frac{r_{3}}{q} \right)^{q}\left(\frac{q-1}{r_{1}}\right)^{q-1}t_{0}^{q}>r_{2}t_{0}^{p}\geq \inf_{v\in u_{0}+W(B_{1})}\mathcal{F}[v]. \tag{47}\]
Hence the occurrence of the Lavrentiev's phenomenon, that is (35), is proven.
## 5 Generalizations
In this section, we describe how results presented in Section 3 may be generalized to consider a wider class of functionals.
### Variable exponent double phase functionals
We can consider a variable exponent double phase functional, given by
\[\mathcal{E}_{1}[u]=\int_{\Omega}b(x,u)\left(|\nabla u(x)|^{p(x)}+a(x)|\nabla u (x)|^{q(x)}\right)\,dx\,, \tag{48}\]
where functions \(p,q,a\) are such that \(1\leq p<q\in L^{\infty}(\Omega)\), \(0\leq a\in L^{\infty}(\Omega)\), and \(b\) is continuous with respect to the second variable and \(0<\nu<b(\cdot,\cdot)<L\) for some constants \(\nu,L\). The natural energy space for minimizers is
\[W_{1}(\Omega):=\left\{\varphi\in W_{0}^{1,1}(\Omega):\quad\int_{\Omega}| \nabla\varphi(x)|^{p(x)}+a(x)|\nabla\varphi(x)|^{q(x)}\,dx<\infty\right\}\,.\]
A typical assumption imposed on the variable exponent is log-Hölder continuity. A function \(p\) is said to be log-Hölder continuous (denoted \(p\in\mathcal{P}^{\log}(\Omega)\)) if there exists \(c>0\) such that for \(x,y\) close enough it holds that
\[|p(x)-p(y)|\leq\frac{c}{\log\left(1/|x-y|\right)}\,.\]
The results from [8; 11] state that \(p,q\in\mathcal{P}^{\log}\) and \(a\in C^{0,\alpha}\) for \(\alpha\geq\sup_{x}(q(x)-p(x))\) guarantee the absence of the Lavrentiev's phenomenon for the functional (48). This condition is meaningful only provided that \(q(x)\leq p(x)+1\) for every \(x\in\Omega\). However, as for the double-phase functional (1), we can assume only \(a\in\mathcal{Z}^{\varkappa}\) for \(\varkappa\geq\sup_{x}(q(x)-p(x))\) instead of \(a\in C^{0,\alpha}\). That is, we have the following counterpart of Theorem 4.
**Theorem 7**.: _Let \(\Omega\subset\mathbb{R}^{n}\) be a bounded Lipschitz domain and let the functional \(\mathcal{E}_{1}\) be given by (48) for \(p,q\in\mathcal{P}^{\log}\), \(1<p<q\) and \(a:\Omega\to[0,\infty)\). Suppose that \(a\in\mathcal{Z}^{\varkappa}(\Omega)\) for \(\varkappa>0\). Let \(u_{0}\) be such that \(\mathcal{E}_{1}[u_{0}]<\infty\). The following assertions hold true._
1. _If_ \(\varkappa\geq\sup_{x}(q(x)-p(x))\)_, then_ \[\inf_{v\in u_{0}+W_{1}(\Omega)}\mathcal{E}_{1}[v]=\inf_{w\in u_{0}+C_{c}^{ \infty}(\Omega)}\mathcal{E}_{1}[w]\,.\] (49)
2. _Let_ \(\gamma\in(0,1]\)_. If_ \(\varkappa\geq\left(\sup_{x}(q(x)-p(x))\right)(1-\gamma)\)_, then_ \[\inf_{v\in u_{0}+W_{1}(\Omega)\cap C^{0,\gamma}(\Omega)}\mathcal{E}_{1}[v]=\inf_ {w\in u_{0}+C^{\infty}_{c}(\Omega)}\mathcal{E}_{1}[w]\,.\] (50)
One may also formulate corresponding counterparts of Theorem 3 and Theorem 5. Note that the proof of Theorem 3 requires only modification of (19).
### Orlicz multi phase functionals
As in [2; 4; 8] we can consider Orlicz multi phase functional, that is
\[\mathcal{E}_{2}[u]=\int_{\Omega}b(x,u)\left(\phi(|\nabla u(x)|)+\sum_{i=1}^{k} a_{i}(x)\psi_{i}(|\nabla u(x)|)\right)\,dx\,, \tag{51}\]
where \(\phi,\psi_{i}:[0,\infty)\to[0,\infty)\) are Young functions that satisfy \(\Delta_{2}\) condition, \(\lim_{t\to\infty}\frac{\psi_{i}(t)}{\phi(t)}=\infty\), \(0\leq a_{i}\in L^{\infty}(\Omega)\), for every \(i=1,2,\ldots,k\), and \(b\) is continuous with respect to the second variable and \(0<\nu<b(\cdot,\cdot)<L\) for some constants \(\nu,L\). The natural energy space for minimizers is
\[W_{2}(\Omega):=\left\{\varphi\in W_{0}^{1,1}(\Omega):\quad\int_{\Omega}\phi(| \nabla\varphi(x)|)+\sum_{i=1}^{k}a_{i}(x)\psi_{i}(|\nabla\varphi(x)|)\,dx< \infty\right\}\,.\]
To get the absence of the Lavrentiev's phenomenon in this case, we can modify our definition of the space \(\mathcal{Z}^{\varkappa}\). For an arbitrary increasing and continuous function \(\omega:[0,\infty)\to[0,+\infty)\) satisfying \(\omega(0)=0\), one can define the space \(\mathcal{Z}^{\omega}\) such that
\[a\in\mathcal{Z}^{\omega}(\Omega)\iff\exists_{C>0}\quad\forall_{x,y}\quad a( x)\leq C(a(y)+\omega(|x-y|))\,. \tag{52}\]
If \(\omega\) defines an appropriate modulus of continuity, i.e., \(\omega\) is concave and \(\omega(0)=0\), then \(a\in\mathcal{Z}^{\omega}\) is equivalent to the existence of a function \(\widetilde{a}\), comparable with \(a\) and having modulus of continuity \(\omega\), that is, such that for some \(C>0\) it holds that
\[\forall_{x,y}\quad|\widetilde{a}(x)-\widetilde{a}(y)|\leq C\omega(|x-y|)\,.\]
If \(\omega\in\Delta_{2}\) is not necessarily concave, then from the fact that \(\omega^{-1}(a)\) is comparable to some Lipschitz function, we can infer that \(a\in\mathcal{Z}^{\omega}\).
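Let us also note that condition (52) is easy to probe numerically. The following Python sketch (an illustration only; it is not used anywhere in the analysis, and all names in it are ours) estimates the smallest admissible constant \(C\) over a finite sample of points for a given weight \(a\) and modulus \(\omega\); a bounded estimate can of course only suggest, not prove, that \(a\in\mathcal{Z}^{\omega}(\Omega)\).

```python
# Numerical probe of condition (52): estimate the smallest C such that
# a(x) <= C * (a(y) + omega(|x - y|)) over all sampled pairs (x, y).
import numpy as np

def estimate_z_omega_constant(a, omega, points):
    A = np.array([a(p) for p in points])
    D = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    denom = A[None, :] + omega(D)      # a(y) + omega(|x - y|), rows indexed by x
    np.fill_diagonal(denom, np.inf)    # the pair x = y is trivial, so skip it
    return float(np.max(A[:, None] / denom))

# Example: a(x) = |x|^2 with omega(t) = t^2 on a sample of the square (-1, 1)^2.
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, (400, 2))
C_hat = estimate_z_omega_constant(lambda x: float(np.dot(x, x)), lambda t: t**2, pts)
print(C_hat)  # stays below 2, since |x|^2 <= 2(|y|^2 + |x - y|^2) for all x, y
```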
Using the definition (52), we can obtain the counterpart of Theorem 4 for the functional of type (51).
**Theorem 8**.: _Let \(\Omega\subset\mathbb{R}^{n}\) be a bounded Lipschitz domain and let the functional \(\mathcal{E}_{2}\) be given by (51) for \(\phi,\psi_{i}\in\Delta_{2}\), and \(a_{i}:\Omega\to[0,\infty)\), for every \(i=1,2,\ldots,k\). Suppose that \(a_{i}\in\mathcal{Z}^{\omega_{i}}(\Omega)\), where \(\omega_{i}:[0,\infty)\to[0,\infty)\) is increasing and such that \(\omega_{i}(0)=0\) for every \(i\). Let \(u_{0}\) be such that \(\mathcal{E}_{2}[u_{0}]<\infty\). The following assertions hold true._
1. _If_ \(\omega_{i}(t)\leq\frac{\phi(t^{-1})}{\psi_{i}(t^{-1})}\) _for every_ \(i\)_, then_ \[\inf_{v\in u_{0}+W_{2}(\Omega)}\mathcal{E}_{2}[v]=\inf_{w\in u_{0}+C^{\infty} _{c}(\Omega)}\mathcal{E}_{2}[w]\,.\] (53)
2. _Let_ \(\gamma\in(0,1]\)_. If_ \(\omega_{i}(t)\leq\frac{\phi(t^{\gamma-1})}{\psi_{i}(t^{\gamma-1})}\) _for every_ \(i\)_, then_ \[\inf_{v\in u_{0}+W_{2}(\Omega)\cap C^{0,\gamma}(\Omega)}\mathcal{E}_{2}[v]=\inf_ {w\in u_{0}+C^{\infty}_{c}(\Omega)}\mathcal{E}_{2}[w]\,.\] (54)
One may also formulate counterparts of Theorem 3 and Theorem 5. Note that, similarly as in the previous section, the proof of Theorem 3 requires only modification of (19). Note that Theorem 8 improves the result of [2, Theorem 3.1] by the use of the scale \(\mathcal{Z}^{\omega}\) instead of \(C^{\omega}\). In particular, this allows one to take into account Orlicz multi phase problems with \(\limsup_{t\to\infty}t^{\sigma}\phi(t)/\psi_{i}(t)>0\) for arbitrary \(\sigma>0\), which for \(\sigma>1\) is excluded from the framework of [2].
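To illustrate the scope of Theorem 8, consider for instance \(k=1\), \(\phi(t)=t^{p}\), and \(\psi_{1}(t)=t^{q}\) with \(1<p<q\), together with \(\omega_{1}(t)=t^{q-p}\). Then condition _(i)_ reads
\[\omega_{1}(t)=t^{q-p}\leq\frac{\phi(t^{-1})}{\psi_{1}(t^{-1})}=\frac{t^{-p}}{t^{-q}}=t^{q-p}\,,\]
so it is satisfied with equality. Since on a bounded domain \(\mathcal{Z}^{\varkappa}(\Omega)\subset\mathcal{Z}^{q-p}(\Omega)\) whenever \(\varkappa\geq q-p\), every weight admissible in Theorem 4 falls within the scope of Theorem 8; the same computation with \(t^{\gamma-1}\) in place of \(t^{-1}\) and \(\omega_{1}(t)=t^{(q-p)(1-\gamma)}\) covers assertion _(ii)_.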
### Orthotropic case
Our results may also be generalized to cover some types of orthotropic functionals. In particular, let us consider the orthotropic double phase functional given by
\[\mathcal{E}_{3}[u]=\sum_{i=1}^{n}\int_{\Omega}b_{i}(x,u)\left(|\partial_{i}u( x)|^{p_{i}}+a_{i}(x)|\partial_{i}u(x)|^{q_{i}}\right)\,dx\,, \tag{55}\]
where for every \(i\) it holds that \(1<p_{i}<q_{i}\) and \(0\leq a_{i}\in L^{\infty}(\Omega)\), and \(b_{i}\) is continuous with respect to the second variable and \(0<\nu<b(\cdot,\cdot)<L\) for some constants \(\nu,L\), for every \(i\). The natural energy space for minimizers is
\[W_{3}(\Omega):=\left\{\varphi\in W_{0}^{1,1}(\Omega):\quad\sum_{i=1}^{n}\int_{\Omega}| \partial_{i}\varphi(x)|^{p_{i}}+a_{i}(x)|\partial_{i}\varphi(x)|^{q_{i}}\,dx< \infty\right\}\,.\]
We point out that for functionals satisfying such a decomposition, it is sufficient for the absence of the Lavrentiev's gap to look at each coordinate separately. In particular, we have the following theorem.
**Theorem 9**.: _Let \(\Omega\subset\mathbb{R}^{n}\) be a bounded Lipschitz domain and let the functional \(\mathcal{E}_{3}\) be given by (55), where for every \(i\) we have \(1<p_{i}<q_{i}<\infty\) and \(a_{i}:\Omega\to[0,\infty)\) that is allowed to vanish. Suppose that for every \(i\) it holds that \(a_{i}\in\mathcal{Z}^{\varkappa_{i}}(\Omega)\) and \(\varkappa_{i}>0\). Moreover, let \(u_{0}\) be such that \(\mathcal{E}_{3}[u_{0}]<\infty\). The following assertions hold true._
1. _If_ \(\varkappa_{i}\geq q_{i}-p_{i}\) _for every_ \(i\)_, then_ \[\inf_{v\in u_{0}+W_{3}(\Omega)}\mathcal{E}_{3}[v]=\inf_{w\in u_{0}+C^{\infty} _{c}(\Omega)}\mathcal{E}_{3}[w]\,.\] (56)
2. _Let_ \(\gamma\in(0,1]\)_. If_ \(\varkappa_{i}\geq(q_{i}-p_{i})(1-\gamma)\) _for every_ \(i\)_, then_ \[\inf_{v\in u_{0}+W_{3}(\Omega)\cap C^{0,\gamma}(\Omega)}\mathcal{E}_{3}[v]=\inf _{w\in u_{0}+C^{\infty}_{c}(\Omega)}\mathcal{E}_{3}[w]\,.\] (57)
Corresponding counterparts of Theorem 3 and Theorem 5 may be formulated as well, together with counterparts for orthotropic versions of functionals (48) and (51).
## Appendix
Proof of Proposition 1.3.: We concentrate on _(i)_. Our reasoning is inspired by the proof of Glaeser-type inequality, see [21]. Suppose by contradiction that \(a\not\in\mathcal{Z}^{1+\alpha}\). This implies that there exist sequences \((x_{k}),(y_{k})\subset\Omega\) and \(C_{k}\in\mathbb{R}\) with \(\lim_{k\to\infty}C_{k}=\infty\) such that
\[a(x_{k})\geq C_{k}(a(y_{k})+|x_{k}-y_{k}|^{1+\alpha})\,. \tag{58}\]
As \(\overline{\Omega}\) is compact, by taking subsequences if necessary, we may assume that \(x_{k}\to\bar{x},y_{k}\to\bar{y}\), where \(\bar{x},\bar{y}\in\overline{\Omega}\). Observe that taking limits in (58), we obtain that for every \(C>0\) we have \(a(\bar{x})>C\cdot(a(\bar{y})+|\bar{x}-\bar{y}|^{1+\alpha})\). As \(a\) is bounded, we have that \(a(\bar{y})+|\bar{x}-\bar{y}|^{1+\alpha}=0\). That is, we have \(\bar{x}=\bar{y}\) and \(a(\bar{x})=0\). We shall denote \(x_{0}:=\bar{x}=\bar{y}\). As \(a(x_{0})=0\), by assumption, we have \(x_{0}\in\Omega\) and there exist \(R>0\) such that \(B(x_{0},R)\subseteq\Omega\).
Let us fix any \(\nu\in\mathbb{R}^{n}\) such that \(|\nu|=1\). By Lagrange Mean Value Theorem, for arbitrary \(z\in B(x_{0},R)\) and \(h\in\mathbb{R}\) such that \(z+h\nu\in B(x_{0},R)\), we have
\[a(z+h\nu)=a(z)+h\frac{\partial a}{\partial\nu}(z+\varsigma\nu), \tag{59}\]
where \(\varsigma\in[-|h|,|h|]\). Using that \(a\in C^{1,\alpha}(\Omega)\), we get that for some constant \(C\), independent of \(\nu\), we have
\[\left|\frac{\partial a}{\partial\nu}(z+\varsigma\nu)-\frac{\partial a}{ \partial\nu}(z)\right|\leq C|\varsigma|^{\alpha}\leq C|h|^{\alpha}\,,\]
and, consequently,
\[\frac{\partial a}{\partial\nu}(z)-C|h|^{\alpha}\leq\frac{\partial a}{ \partial\nu}(z+\varsigma\nu)\leq\frac{\partial a}{\partial\nu}(z)+C|h|^{ \alpha}\,.\]
Thus, for \(h\geq 0\) it holds that
\[h\frac{\partial a}{\partial\nu}(z+\varsigma\nu)\leq h\frac{\partial a}{ \partial\nu}(z)+Ch|h|^{\alpha}=h\frac{\partial a}{\partial\nu}(z)+C|h|^{ \alpha+1}\,,\]
while for \(h<0\) we have
\[h\frac{\partial a}{\partial\nu}(z+\varsigma\nu)\leq h\frac{\partial a}{ \partial\nu}(z)-Ch|h|^{\alpha}=h\frac{\partial a}{\partial\nu}(z)+C|h|^{ \alpha+1}\,.\]
By (59) and the last two displays, it means that
\[a(z+h\nu)\leq a(z)+h\frac{\partial a}{\partial\nu}(z)+C|h|^{1+\alpha}\,.\]
As \(a\geq 0\), we have
\[0\leq a(z)+h\frac{\partial a}{\partial\nu}(z)+C|h|^{1+\alpha}\,, \tag{60}\]
as long as \(h\in\mathbb{R}\) and \(z,z+h\nu\in B(x_{0},R)\).
For any \(z\in B(x_{0},R)\), let us now denote
\[h_{z}\coloneqq-c\left|\frac{\partial a}{\partial\nu}(z)\right|^{1/\alpha} \mathrm{sgn}\left(\frac{\partial a}{\partial\nu}(z)\right)\ \ \mathrm{with}\ \ c=(2C)^{-1/\alpha}\,.\]
Note that as \(a(x_{0})=0\), we also have \(\nabla a(x_{0})=0\), as \(x_{0}\in\Omega\) is a minimum of \(a\). Since \(a\in C^{1,\alpha}(\overline{\Omega})\), for any \(z\in B(x_{0},R)\) we have \(\big{|}\frac{\partial a}{\partial\nu}(z)\big{|}\leq C|z-x_{0}|^{\alpha}\), which gives us
\[|z+h_{z}\nu-x_{0}|\leq|z-x_{0}|+|h_{z}|=|z-x_{0}|+c\left|\frac{\partial a}{ \partial\nu}(z)\right|^{1/\alpha}\leq(1+cC^{1/\alpha})|z-x_{0}|\,.\]
Therefore, if we take \(r\coloneqq\frac{R}{1+cC^{1/\alpha}}\), for any \(z\in B(x_{0},r)\) we have \(z+h_{z}\nu\in B(x_{0},R)\). Hence, by (60) we obtain
\[0\leq a(z)+h_{z}\frac{\partial a}{\partial\nu}(z)+C|h_{z}|^{1+\alpha}=a(z)- \tfrac{1}{2}c\left|\frac{\partial a}{\partial\nu}(z)\right|^{1+1/\alpha}\,,\]
which means that for some constant \(C_{a}>0\) it holds that
\[\left|\frac{\partial a}{\partial\nu}(z)\right|\leq C_{a}a(z)^{\frac{\alpha}{1 +\alpha}}\,. \tag{61}\]
Note that, since \(\nu\) was arbitrary, estimate (61) holds for every \(\nu\in\mathbb{R}^{n}\) such that \(|\nu|=1\).
Let us take any \(x,y\in B(x_{0},r)\). Note that we can always find \(\tilde{y}\in[y,x]\) such that \(a(\tilde{y})\leq a(y)\) and \(a>0\) on the segment \((\tilde{y},x)\). Indeed, if \(a>0\) on \((y,x)\), then we can take \(\tilde{y}=y\). Otherwise, we may define
\[\tilde{t}\coloneqq\sup\{t\in[0,1]:a(y+t(x-y))=0\}\]
and set \(\tilde{y}\coloneqq y+\tilde{t}(x-y)\). We see by the definition that \(a(\tilde{y})=0\leq a(y)\) and \(a\) is positive on \((\tilde{y},x)\). Therefore, if we set \(\nu=\frac{x-\tilde{y}}{|x-\tilde{y}|}\), the function \(t\mapsto a(\tilde{y}+t\nu)^{\frac{1}{1+\alpha}}\) is differentiable for \(t\in(0,|x-\tilde{y}|)\), with derivative equal to \(\frac{1}{1+\alpha}\left(\frac{\partial a}{\partial\nu}(\tilde{y}+t\nu)\right)(a(\tilde{y}+t\nu))^{-\frac{\alpha}{1+\alpha}}\). By the definition of \(\tilde{y}\) and (61), we have
\[a(x)^{\frac{1}{1+\alpha}}-a(y)^{\frac{1}{1+\alpha}}\leq a(x)^{\frac{1}{1+ \alpha}}-a(\tilde{y})^{\frac{1}{1+\alpha}}=\frac{1}{1+\alpha}\int_{0}^{|x-\tilde{y}|}\left( \frac{\partial a}{\partial\nu}(\tilde{y}+t\nu)\right)(a(\tilde{y}+t\nu))^{- \frac{\alpha}{1+\alpha}}\,dt\leq C_{a}|x-\tilde{y}|\leq C_{a}|x-y|\,,\]
which by symmetry means that \(a^{\frac{1}{1+\alpha}}\) is Lipschitz on \(B(x_{0},r)\). By Remark 1.2, we have that \(a\in\mathcal{Z}^{1+\alpha}(B(x_{0},r))\), which contradicts (58), as \((x_{k})_{k}\) and \((y_{k})_{k}\) converge to \(x_{0}\). Hence, \(a\in\mathcal{Z}^{1+\alpha}(\Omega)\).
For _(ii)_ it is enough to consider \(x_{0}\in\Omega\subset\mathbb{R}^{n}\) and \(a(x)=|x-x_{0}|^{2}\), which is smooth, but only in \(\mathcal{Z}^{2}\).
## Acknowledgement
We would like to express our gratitude to Pierre Bousquet for fruitful discussions that in particular drew our attention to the issues solved in Proposition 1.3.
The project started with discussions of all the authors during the Thematic Research Programme _Anisotropic and Inhomogeneous Phenomena_ at the University of Warsaw in 2022.
|
2306.06578 | Long-Term Autonomous Ocean Monitoring with Streaming Samples | In the autonomous ocean monitoring task, the sampling robot moves in the
environment and accumulates data continuously. The widely adopted spatial
modeling method - standard Gaussian process (GP) regression - becomes
inadequate in processing the growing sensing data of a large size. To overcome
the computational challenge, this paper presents an environmental modeling
framework using a sparse variant of GP called streaming sparse GP (SSGP). The
SSGP is able to handle streaming data in an online and incremental manner, and
is therefore suitable for long-term autonomous environmental monitoring. The
SSGP summarizes the collected data using a small set of pseudo data points that
best represent the whole dataset, and updates the hyperparameters and pseudo
point locations in a streaming fashion, leading to high-quality approximation
of the underlying environmental model with significantly reduced computational
cost and memory demand. | Weizhe Chen, Lantao Liu | 2023-06-11T03:59:26Z | http://arxiv.org/abs/2306.06578v1 | # Long-Term Autonomous Ocean Monitoring with Streaming Samples
###### Abstract
In the autonomous ocean monitoring task, the sampling robot moves in the environment and accumulates data continuously. The widely adopted spatial modeling method -- standard Gaussian process (GP) regression -- becomes inadequate in processing the growing sensing data of a large size. To overcome the computational challenge, this paper presents an environmental modeling framework using a sparse variant of GP called streaming sparse GP (SSGP). The SSGP is able to handle streaming data in an online and incremental manner, and is therefore suitable for long-term autonomous environmental monitoring. The SSGP summarizes the collected data using a small set of pseudo data points that best represent the whole dataset, and updates the hyperparameters and pseudo point locations in a streaming fashion, leading to high-quality approximation of the underlying environmental model with significantly reduced computational cost and memory demand.
## I Introduction and Related Work
To autonomously monitor our aquatic environments such as the oceans, intelligent robotic platforms such as the autonomous underwater vehicles (AUVs) and unmanned surface vessels (USVs) have been increasingly utilized in scientific information gathering missions due to the attractive mobility, flexibility, and adaptivity of these platforms [12]. Fig. 1 illustrates the interaction loop between a sampling robot and the environment. Typically, the robot needs to first compute and plan a sampling path following which environmental samples can be collected. The path is computed based on some learned environmental model, which is usually a probabilistic model of targeted environmental attributes such as the non-uniform salinity of the ocean. Then, the robot samples the environment and uses the newly obtained samples to update its estimated model, which in turn influences the sampling path computation in the next round.
The sampling path computation for environmental sensing and monitoring is usually formulated as the _informative path planning_, the objective of which is to maximize the "informativeness" of collected samples. Such informative path navigates the robot to collect samples from locations of greater importance for better estimating the underlying environmental model. Oftentimes, the informativeness of a new observation (sampling) is quantified by its entropy, variance reduction of prediction, or mutual information between the sampled and un-sampled spaces. Representative informative planning methods include, e.g., recursive-greedy algorithms that exploit the submodularity property of mutual information [15, 18, 26, 32], dynamic programming algorithms that select a set of waypoints with maximum information [8, 21, 22, 27], sampling-based motion planning algorithms [16, 19], Bayesian optimization [2, 20, 24], evolutionary methods [29], temporal difference reinforcement learning [10], and Monte Carlo tree search [1, 3, 9, 28].
Most existing informative planners have been built upon probabilistic regression models. This also requires the environmental modeling component to be efficient and accurate. Gaussian process (GP) regression has become the de-facto standard for spatial modeling due to its well-calibrated uncertainty estimate, modeling flexibility, and robustness to overfitting. Nevertheless, there are still many challenges to be overcome. For instance, one of the big challenges is the limited scalability to large datasets. The time complexity and the storage complexity of a vanilla GP regression with \(N\) collected data samples are \(\mathcal{O}(N^{3})\) and \(\mathcal{O}(N^{2})\), respectively. Another challenge is the lack of a principled method to deal with streaming data. As the robot collects data sequentially, we would like to update the predictions and (hyper)parameters of our model in a real-time and incremental fashion. However, most existing work tackled the problem by initializing the parameters through a pilot survey or prior data, and keeping them fixed during the sampling process [4, 13, 14, 17].
Fig. 1: A typical loop between the sampling robot and the environment. The example here is to estimate the bathymetry of the ocean floor.
Unfortunately, fixed parameters will undoubtedly limit the adaptivity of the sampling robot.
This paper presents a sparse variant of GP which is able to handle streaming data in an online and incremental manner, and is therefore suitable for long-term autonomous environmental monitoring. Specifically, the predictive distribution, hyperparameters, and the locations of pseudo points (a small set of points summarizing the training data) are updated in real-time, leading to an accurate estimation of the environmental state with reasonable computational cost and memory demand. This work relates to sparse online GP (SOGP) [11, 21] and sparse pseudo-inputs GP (SPGP) [27, 33]. The key difference between this work and our former SOGP-based environmental sampling work [21, 23] is that, instead of employing crafted heuristics for optimizing and selecting a set of representative sparse samples, we present a principled framework for learning hyperparameters and optimizing the representative pseudo inputs. Also, the SPGP framework treats the pseudo inputs as additional kernel hyperparameters, which might lead to overfitting when we jointly optimize hyperparameters and pseudo inputs [34]. Moreover, in SPGP, there is no discrepancy measure between the approximate model and the exact model. In contrast, the SSGP [5] models the sparse pseudo points as variational parameters which are optimized by minimizing the Kullback-Leibler (KL) divergence between the approximate GP and the exact posterior GP (see Section III). Such a mechanism prevents the learning algorithm from overfitting and leads to high-quality approximation.
## II Background
In this section, we briefly review GP regression and a variational framework for sparse GP regression.
### _Gaussian Process Regression_
Assume we have \(N\) training inputs (i.e. sampling locations) and the corresponding real-valued outputs (namely, observations) \(\{\mathbf{x}_{n},y_{n}\}_{n=1}^{N}\). In the vanilla GP regression, we assume \(y_{n}=f(\mathbf{x}_{n})+\epsilon_{n}\), where \(f\) is an unknown function and the observation noise is drawn from a Gaussian distribution \(\epsilon\sim\mathcal{N}(0,\sigma_{y}^{2})\). For notational simplicity, \(f\) is typically assumed to be drawn from a zero-mean GP prior \(f\sim\mathcal{GP}(\mathbf{0},k(\cdot,\cdot|\mathbf{\theta}))\), where \(k(\cdot,\cdot|\mathbf{\theta})\) is the covariance function with a set of hyperparameters \(\mathbf{\theta}\). After taking the observed data into account, our prior GP can be updated to a posterior GP which is specified by a posterior mean function and a posterior covariance function: 1
Footnote 1: We have aggregated the observations into a vector \(\mathbf{y}=\{y_{n}\}_{n=1}^{N}\).
\[m_{\mathbf{y}}(\mathbf{x}) =K_{\mathbf{x}N}(K_{NN}+\sigma_{y}^{2}I)^{-1}\mathbf{y}\] \[k_{\mathbf{y}}(\mathbf{x},\mathbf{x}^{\prime}) =k(\mathbf{x},\mathbf{x}^{\prime})-K_{\mathbf{x}N}(K_{NN}+\sigma_{y}^{2}I)^{- 1}K_{N\mathbf{x}^{\prime}}.\]
Here, \(K_{\mathbf{x}N}\) is a \(1\times N\) covariance vector between the test input \(\mathbf{x}\) and the \(N\) training inputs, and \(K_{N\mathbf{x}}=K_{\mathbf{x}N}^{\top}\). \(K_{NN}\) is an \(N\times N\) covariance matrix on the training inputs, and \(I\) is an \(N\times N\) identity matrix. The posterior GP depends on the values of the hyperparameters which can be optimized by maximizing the log marginal likelihood given by
\[\log p(\mathbf{y})=-\frac{1}{2}\mathbf{y}^{\top}K^{-1}\mathbf{y}-\frac{1}{2}\log|K|-\frac {N}{2}\log(2\pi).\]
Here we use the shorthand \(K=K_{NN}+\sigma_{y}^{2}I\).
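For concreteness, the following NumPy sketch (illustrative only; the isotropic SE kernel, noise level, and toy data are placeholders, not the settings used later in the experiments) evaluates the posterior mean, posterior covariance, and log marginal likelihood above with a Cholesky factorization, which makes the cubic cost explicit.

```python
# Minimal NumPy sketch of exact GP regression: posterior mean/covariance and
# log marginal likelihood, using a Cholesky factorization of K = K_NN + sigma_y^2 I.
import numpy as np

def se_kernel(X1, X2, sigma_f=1.0, ell=0.5):
    # Isotropic squared exponential kernel k(x, x') = sigma_f^2 exp(-|x - x'|^2 / (2 ell^2)).
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return sigma_f**2 * np.exp(-0.5 * d2 / ell**2)

def gp_regression(X, y, Xs, sigma_y=0.1):
    N = X.shape[0]
    K = se_kernel(X, X) + sigma_y**2 * np.eye(N)
    L = np.linalg.cholesky(K)                              # O(N^3) time, O(N^2) memory
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))    # alpha = K^{-1} y
    Ks = se_kernel(Xs, X)                                  # K_{xN}
    mean = Ks @ alpha
    V = np.linalg.solve(L, Ks.T)
    cov = se_kernel(Xs, Xs) - V.T @ V                      # K_{xx'} - K_{xN} K^{-1} K_{Nx'}
    log_ml = -0.5 * y @ alpha - np.log(np.diag(L)).sum() - 0.5 * N * np.log(2 * np.pi)
    return mean, cov, log_ml

# Toy usage: noisy 1D samples of a smooth function.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (50, 1))
y = np.sin(6.0 * X[:, 0]) + 0.1 * rng.standard_normal(50)
Xs = np.linspace(0.0, 1.0, 100)[:, None]
mean, cov, log_ml = gp_regression(X, y, Xs)
```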
However, the standard GP suffers from poor scalability because it requires matrix inverse operations with time complexity of \(\mathcal{O}(N^{3})\) and memory (storage space) complexity of \(\mathcal{O}(N^{2})\). This computational challenge has led to many sparse approximation paradigms [6, 30]. Next, we will introduce a variational framework for the sparse GP.
### _Variational Sparse Gaussian Process_
Variational sparse GP (VSGP) approximates the intractable posterior \(p(f|\mathbf{y},\mathbf{\theta})\) through an approximate posterior \(q(f)\). We measure the discrepancy between the approximate posterior and the true posterior using Kullback-Leibler (KL) divergence. Then we can minimize this quantity so that the approximate posterior is as "close" as possible to the exact posterior. Since \(p(f|\mathbf{y},\mathbf{\theta})\) is unknown, computing the KL divergence \(\mathbb{KL}[q(f)||p(f|\mathbf{y},\mathbf{\theta})]\) is infeasible. Fortunately, using Bayes' theorem, we have the following important equation:
\[\mathbb{KL}[q(f)||p(f|\mathbf{y},\mathbf{\theta})]=\log p(\mathbf{y}|\mathbf{\theta})-\mathbb{ ELBO}[q(f),\mathbf{\theta}], \tag{1}\]
where \(\mathbb{ELBO}[q(f),\mathbf{\theta}]=\int q(f)\log\frac{p(\mathbf{y},f|\mathbf{\theta})}{q(f )}\,\mathrm{d}f\) is the Evidence Lower BOund (ELBO). Since \(\log p(\mathbf{y}|\mathbf{\theta})\) does not depend on \(q(f)\), maximizing the ELBO w.r.t. \(q(f)\) is equivalent to minimizing the KL divergence, which implies that the approximate posterior gets _closer_ to the true posterior. Furthermore, given the fact that the KL divergence is non-negative, ELBO lower bounds the marginal likelihood \(\log p(\mathbf{y}|\mathbf{\theta})\) so that it can be used for learning the hyperparameters \(\mathbf{\theta}\).
We assume that the approximate posterior has the form \(q(f)=p(f_{\neq\mathbf{u}}|\mathbf{u},\mathbf{\theta})q(\mathbf{u})\), where \(\mathbf{u}\) - the pseudo points - is a small subset of \(f\), \(q(\mathbf{u})\) is a variational distribution over \(\mathbf{u}\), and \(p(f_{\neq\mathbf{u}}|\mathbf{u},\mathbf{\theta})\) is the conditional prior of \(f_{\neq\mathbf{u}}\). This assumption induces cancellation of the uncountably infinite parts in the equation and provides a computationally tractable lower bound:
\[\mathbb{ELBO}=\int q(f)\log\frac{p(\mathbf{y}|f,\mathbf{\theta})\,p(\mathbf{u}|\mathbf{\theta})\,p(f_{\neq\mathbf{u}}|\mathbf{u},\mathbf{\theta})}{p(f_{\neq\mathbf{u}}|\mathbf{u},\mathbf{\theta})\,q(\mathbf{u})}\,\mathrm{d}f=\int q(f)\log\frac{p(\mathbf{y}|f,\mathbf{\theta})\,p(\mathbf{u}|\mathbf{\theta})}{q(\mathbf{u})}\,\mathrm{d}f\,.\]
The closed-form expression for the optimal variational distribution can be obtained by maximizing \(\mathbb{ELBO}[q(\mathbf{u}),\mathbf{\theta}]\) w.r.t. \(q(\mathbf{u})\). Hyperparameters can also be computed by maximizing \(\mathbb{ELBO}[q(\mathbf{u}),\mathbf{\theta}]\) w.r.t. \(\mathbf{\theta}\).
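Although the text keeps \(q(\mathbf{u})\) explicit, it is worth noting that plugging the optimal \(q(\mathbf{u})\) back in yields a well-known collapsed form of the bound, \(\log\mathcal{N}(\mathbf{y}\,|\,\mathbf{0},Q_{NN}+\sigma_{y}^{2}I)-\frac{1}{2\sigma_{y}^{2}}\operatorname{tr}(K_{NN}-Q_{NN})\) with \(Q_{NN}=K_{NM}K_{MM}^{-1}K_{MN}\), where \(M\) denotes the number of pseudo points (see the sparse GP literature). The sketch below is a naive NumPy illustration of this collapsed bound for given pseudo-inputs; an efficient implementation would instead exploit the low-rank structure of \(Q_{NN}\) to reach \(\mathcal{O}(NM^{2})\) cost.

```python
# Collapsed form of the sparse variational bound for given pseudo-inputs Z
# (naive O(N^3) evaluation for clarity; the low-rank structure of Qnn allows O(N M^2)).
import numpy as np

def se_kernel(X1, X2, sigma_f=1.0, ell=0.5):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return sigma_f**2 * np.exp(-0.5 * d2 / ell**2)

def collapsed_elbo(X, y, Z, sigma_y=0.1):
    N, M = X.shape[0], Z.shape[0]
    Kmm = se_kernel(Z, Z) + 1e-8 * np.eye(M)      # small jitter for numerical stability
    Knm = se_kernel(X, Z)
    Qnn = Knm @ np.linalg.solve(Kmm, Knm.T)       # Knm Kmm^{-1} Kmn
    S = Qnn + sigma_y**2 * np.eye(N)
    L = np.linalg.cholesky(S)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    log_gauss = -0.5 * y @ alpha - np.log(np.diag(L)).sum() - 0.5 * N * np.log(2 * np.pi)
    trace_term = (np.trace(se_kernel(X, X)) - np.trace(Qnn)) / (2 * sigma_y**2)
    return log_gauss - trace_term                 # maximized w.r.t. Z and hyperparameters
```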
This framework assumes that the whole dataset is available at training time. When the data instead arrives in batches, as in our streaming setting, we would need to append newly acquired data to a continuously-growing dataset and then re-train the model with the combined dataset. In the following section, we shall discuss how to update the approximate posterior and hyperparameters by integrating the information from the old approximation and the new data in a mathematically sound way.
## III Streaming Sparse Gaussian Process
Our goal is to derive the new approximate posterior and marginal likelihood using the old approximation and the new data. Formally, let \(q_{\text{old}}(f)\) be the approximate posterior obtained at the previous step. According to Bayes' rule, we have
\[q_{\text{old}}(f) \approx p(f|\mathbf{y}_{\text{old}})=\frac{p(f|\mathbf{\theta}_{\text{old}})p( \mathbf{y}_{\text{old}}|f)}{Z_{1}(\mathbf{\theta}_{\text{old}})} \tag{2}\] \[p(f|\mathbf{y}_{\text{old}},\mathbf{y}_{\text{new}}) =\frac{p(f|\mathbf{\theta}_{\text{new}})p(\mathbf{y}_{\text{old}}|f)p(\bm {y}_{\text{new}}|f)}{Z_{2}(\mathbf{\theta}_{\text{new}})}, \tag{3}\]
where \(Z_{1}(\mathbf{\theta}_{\text{old}}),Z_{2}(\mathbf{\theta}_{\text{new}})\) are the normalizing constants. Since the new posterior should only rely on the new data \(\mathbf{y}_{\text{new}}\) and the old approximation, we rearrange Eq. (2) and substitute \(p(\mathbf{y}_{\text{old}}|f)\) into Eq. (3), which yields
\[\hat{p}(f|\mathbf{y}_{\text{old}},\mathbf{y}_{\text{new}})=\frac{Z_{1}(\mathbf{\theta}_{ \text{old}})}{Z_{2}(\mathbf{\theta}_{\text{new}})}p(f|\mathbf{\theta}_{\text{new}})p( \mathbf{y}_{\text{new}}|f)\frac{q_{\text{old}}(f)}{p(f|\mathbf{\theta}_{\text{old}})}.\]
This new posterior fuses the old approximation \(q_{\text{old}}(f)\), the new likelihood \(p(\mathbf{y}_{\text{new}}|f)\), and our priors. However, we cannot use this as the new approximate posterior, \(q_{\text{new}}(f)=\hat{p}(f|\mathbf{y}_{\text{old}},\mathbf{y}_{\text{new}})\), because this recovers exact GP regression and it is intractable [5]. Therefore, we consider approximation by minimizing the KL divergence between \(q_{\text{new}}(f)\) and \(\hat{p}(f|\mathbf{y}_{\text{old}},\mathbf{y}_{\text{new}})\).
Let \(\mathbf{a}=f(\mathbf{z}_{\text{old}})\) and \(\mathbf{b}=f(\mathbf{z}_{\text{new}})\) be the function values at the pseudo-inputs before and after seeing new data. Note that the numbers of pseudo points \(M_{\mathbf{a}}=|\mathbf{a}|\) and \(M_{\mathbf{b}}=|\mathbf{b}|\) are not necessarily the same, and the new pseudo inputs might be different from the old ones. This is required when new regions of input space are gradually explored in the environmental monitoring scenario. The forms of the approximate posteriors are assumed to be \(q_{\text{old}}(f)=p(f_{\neq\mathbf{a}}|\mathbf{a},\mathbf{\theta}_{\text{old}})q_{\text{old }}(\mathbf{a})\) and \(q_{\text{new}}(f)=p(f_{\neq\mathbf{b}}|\mathbf{b},\mathbf{\theta}_{\text{new}})q_{\text{new }}(\mathbf{b})\), as in VSGP.
\[\underbrace{\text{KL}\Big{[}q_{\text{new}}(f)\Big{\|}\hat{p}(f|\mathbf{y}_{\text{old}},\mathbf{y}_{\text{new}})\Big{]}}_{\text{non-negative}}=\underbrace{\log\frac{Z_{2}(\mathbf{\theta}_{\text{new}})}{Z_{1}(\mathbf{\theta}_{\text{old}})}}_{\text{constant}}-\underbrace{\int q_{\text{new}}(f)\log\frac{p(\mathbf{b}|\mathbf{\theta}_{\text{new}})\,q_{\text{old}}(\mathbf{a})\,p(\mathbf{y}_{\text{new}}|f)}{p(\mathbf{a}|\mathbf{\theta}_{\text{old}})\,q_{\text{new}}(\mathbf{b})}\,\mathrm{d}f}_{\mathbb{ELBO}(q(\mathbf{b}),\mathbf{\theta}_{\text{new}})}.\]
Since the constant term does not depend on \(q(\mathbf{b})\), maximizing \(\mathbb{ELBO}(q(\mathbf{b}),\mathbf{\theta}_{\text{new}})\) w.r.t. \(q(\mathbf{b})\) guarantees that \(q_{\text{new}}(f)\) gets _closer_ to \(\hat{p}(f|\mathbf{y}_{\text{old}},\mathbf{y}_{\text{new}})\)2. Setting the derivative of \(\mathbb{ELBO}(q(\mathbf{b}),\mathbf{\theta}_{\text{new}})\) w.r.t. \(q(\mathbf{b})\) equal to \(0\) gives us the optimal new approximate distribution \(q(\mathbf{b})\)\({}^{*}\). Also, \(\mathbb{ELBO}(q(\mathbf{b}),\mathbf{\theta}_{\text{new}})\) lower bounds the online log marginal likelihood3 since the KL divergence is non-negative. This lower bound can be used to learn the hyperparameters and optimize pseudo inputs. This gives us a principled framework for deploying sparse GP in the streaming setting, providing online hyperparameter learning and pseudo-input optimization. See [5] for more details.
Footnote 2: We omitted the subscript of \(q_{\text{new}}(\mathbf{b})\)
Footnote 3: \(Z_{2}/Z_{1}\approx p(\mathbf{y}_{\text{new}}|\mathbf{y}_{\text{old}})\)
## IV Experiments
Our experimental evaluations aim at answering the following questions about SSGP:
1. Is SSGP able to achieve competitive accuracy with improved computational complexity and memory usage?
2. Can it effectively characterize the environment by learning hyperparameters?
3. How does the number of pseudo points influence the accuracy and efficiency?
### _Experiment Setup_
**(Setting)** We have conducted extensive evaluations with both synthetic and real-world data, and compared the SSGP with the following baseline methods:
1. standard GP regression (GPR) with the whole collected dataset [31],
2. standard GP regression with the most recent \(500^{4}\) (GPR500) training data points,
3. variational sparse GP (VSGP) with the whole dataset [34],
4. sparse pseudo-inputs GP (SPGP) with the whole dataset [27, 33].
The covariance function used throughout the experiments is the squared exponential (SE) kernel with learned lengthscales \(\ell_{d}\) (as a hyperparameter):
\[k(\mathbf{x},\mathbf{x}^{\prime})=\sigma_{f}^{2}\exp\left[-\frac{1}{2}\sum_{d=1}^{D} \left(\frac{x_{d}-x_{d}^{\prime}}{\ell_{d}}\right)^{2}\right],\]
where the number of input dimensions \(D\) is \(2\) in our case. The hyperparameters and pseudo inputs are initialized with the same values, and then optimized using L-BFGS-B with the same stopping criteria. All the methods were implemented on GPflow [25] and run on a standard desktop with a 3.6GHz Intel i7 processor and 16GB of RAM.
**(Data)** Both the synthetic data and the real-world data are simulated on a \(100\text{m}\times 100\text{m}\) grid map (discrete scalar field) with the grid resolution of 1m in each dimension, thus the test set contains \(10000\) data points. The synthetic environment is drawn from a two-dimensional GP with hyperparameters \(\{\sigma_{y}^{2}:0.01,\ \sigma_{f}^{2}:1,\ \ell_{1}:0.3,\ \ell_{2}:0.7\}\). We use the real field Sea Surface Temperature (SST) data provided by National Oceanic and Atmospheric Administration (NOAA). We adopt a naive lawnmower sampling method to best demonstrate the idea and reduce the impact of the sampling mechanism on the modeling and learning results. The sampling path allows us to collect \(4312\) training data points which are observed sequentially along the path. Specifically, the sampling robot follows the planned lawnmower path and gathers a small batch of data before updating the GP model. In this way we can split the entire lawnmower path into \(98\) small batches, each of which contains \(44\) samples. All the GP models of different batches will update their predictions, hyperparameters, and the pseudo inputs after receiving each
batch of data. The optimized hyperparameters and pseudo inputs estimated from the previous step will be used as initialization for the next round.
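To make the data flow concrete, the sketch below mimics this streaming protocol: the samples arrive in 98 batches of 44 points and the model state is carried over between batches as a warm start. The `sliding_window_update` stand-in simply refits an exact GP on the most recent samples, in the spirit of the GPR500 baseline; it is only a placeholder for the update interface, not the SSGP update, and the data, lengthscales, and function names here are illustrative placeholders.

```python
# Schematic streaming loop: samples arrive in 98 batches of 44 points, and the model
# state (here just a sliding window of recent data and its Cholesky factor) is carried
# over and updated after every batch. SSGP would plug into the same loop, additionally
# carrying over its pseudo points and hyperparameters as a warm start.
import numpy as np

def ard_se_kernel(X1, X2, sigma_f=1.0, ells=(10.0, 10.0)):
    # SE kernel with one lengthscale per input dimension (placeholder values).
    d2 = (((X1[:, None, :] - X2[None, :, :]) / np.asarray(ells)) ** 2).sum(-1)
    return sigma_f**2 * np.exp(-0.5 * d2)

def sliding_window_update(state, X_batch, y_batch, window=500, sigma_y=0.1):
    # Naive stand-in (GPR500-style): keep the most recent `window` samples, refit exactly.
    X = np.vstack([state["X"], X_batch])[-window:]
    y = np.concatenate([state["y"], y_batch])[-window:]
    K = ard_se_kernel(X, X) + sigma_y**2 * np.eye(len(y))
    return {"X": X, "y": y, "chol": np.linalg.cholesky(K)}

rng = np.random.default_rng(0)
X_all = rng.uniform(0.0, 100.0, (98 * 44, 2))        # placeholder sampling locations
y_all = np.sin(X_all[:, 0] / 10.0) + 0.1 * rng.standard_normal(len(X_all))
state = {"X": np.empty((0, 2)), "y": np.empty(0), "chol": None}
for X_b, y_b in zip(np.split(X_all, 98), np.split(y_all, 98)):
    state = sliding_window_update(state, X_b, y_b)   # one update per incoming batch
```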
**(Metrics)** To measure the learning performance, we use the Root Mean Squared Error (RMSE) and the average Negative Log Predictive Density (NLPD) on the test data:
\[\text{RMSE}=\sqrt{\frac{1}{T}\sum_{t=1}^{T}(f_{t}-m_{t})^{2}}\,,\qquad\text{NLPD}=\frac{1}{T}\sum_{t=1}^{T}-\log p(y_{t}|\mathbf{x}_{t})\,,\]
where \(T\) is the number of test data points, \(f_{t}\) and \(y_{t}\) are the underlying function value and its corresponding noisy observation, respectively; \(m_{t}\) is the posterior mean value, and \(p(y_{t}|\mathbf{x}_{t})\) is the probability density of the predictive distribution. RMSE only takes the point prediction into account, while NLPD penalizes both over-confident and under-confident predictions. We also report the number of on-board data points, including training data points and pseudo points, to compare the memory usage, and we report the training runtime, prediction runtime, and overall runtime to assess the computational efficiency of the SSGP method.
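Both metrics are straightforward to compute once the predictive distribution at each test point is Gaussian; a minimal sketch is given below (note that, when scoring noisy observations, the predictive variance should include the observation noise \(\sigma_{y}^{2}\)).

```python
# RMSE and average NLPD for Gaussian predictive distributions N(m_t, v_t).
import numpy as np

def rmse(f_true, m):
    return float(np.sqrt(np.mean((np.asarray(f_true) - np.asarray(m)) ** 2)))

def nlpd(y_true, m, v):
    y_true, m, v = map(np.asarray, (y_true, m, v))
    return float(np.mean(0.5 * np.log(2.0 * np.pi * v) + 0.5 * (y_true - m) ** 2 / v))
```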
### _Results_
We first evaluate the algorithms' accuracy, practical runtimes, and memory usage (**Q1**). Fig. 2 shows the results of performance evaluation. Although GPR500 has the fastest runtime, it throws away useful historical information, leading to low accuracy in terms of RMSE and NLPD. The SSGP, GPR, VSGP, and SPGP have similar RMSEs5, but the computational cost of SSGP is much less than the other three. The overall runtime of SSGP is slightly more than that of GPR500. (Note, the runtimes of GPR are plotted in log scale.)
Footnote 5: We moved RMSE and NLPD of SPGP down a bit for better visualization.
The number of on-board data is a reflection of the memory usage. The SSGP only needs to store the current batch of data and a small set of pseudo points, hence it entails the lowest memory consumption. This feature is extremely important for long-term autonomous environmental monitoring because the robot has limited computation and memory resources while the data size continuously grows.
As noted by [5], there is a discrepancy between the NLPD of SSGP and that of the other three as more and more data have been collected. This phenomenon is more obvious on the real-world data as shown in Fig. 3. In the next section, we will show that using more pseudo points can mitigate this issue without increasing much computational cost and memory consumption.
To investigate the effectiveness of environmental characterization by learning hyperparameters (**Q2**), we visualize the learned hyperparameters of all the algorithms together with the ground truth (Fig. 4). The SSGP is able to capture the underlying amplitude (\(\sigma_{f}^{2}\)) and lengthscale along x-axis, though it underestimates the lengthscale along y-axis. Through the lengthscales of SSGP, we can also infer that the synthetic data varies rapidly along the x direction. Some snapshots of the modeling process of both datasets are shown in Fig. 6 and Fig. 7.
We then investigate the relationship between the number of pseudo points and the accuracy of the SSGP (**Q3**). We summarize the training data using \(M\) pseudo points, reducing the computational cost to \(\mathcal{O}(NM^{2})\). However, as seen in Fig. 2 and Fig. 3, the quality of approximation will be limited if the number of pseudo points is not sufficient. (See the increased performance discrepancy between the SSGP and other methods as the number of data points grows while the number of pseudo points remains unchanged.) This implies that the number of pseudo points \(M\) implicitly depends on the number of training data points \(N\). For example, if such dependence is linear, i.e., \(M\) scales linearly with \(N\), then the computational complexity is still \(\mathcal{O}(N^{3})\). Fortunately, a recent work [7] shows that \(M=\mathcal{O}(\log^{D}N)\) suffices for regression with normally distributed inputs in \(D\)-dimensions with the SE kernel. To better demonstrate the trends, we ran SSGP with \(M=\lceil\alpha\log^{2}N+1\rceil\) pseudo inputs and a set of differing \(\alpha\) values to control the number \(M\), i.e., \(\alpha=0.5,1,2,4\). We then plotted the RMSE and NLPD of GPR for comparison. As shown in Fig. 5, when \(\alpha\) is \(0.5\) or \(1\), there are significant gaps between the RMSE and NLPD of SSGP and those of GPR. When \(\alpha=2\), the performance of SSGP matches that of GPR whilst the computational times grow only by a small margin. When \(\alpha=4\), it brings a slight improvement in precision at the cost of more runtime.
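As a quick illustration (assuming the natural logarithm in the rule \(M=\lceil\alpha\log^{2}N+1\rceil\)), the pseudo-point budget grows very slowly with the dataset size:

```python
# Pseudo-point budget M = ceil(alpha * log(N)^2 + 1) for a few dataset sizes.
import numpy as np

def num_pseudo_points(N, alpha):
    return int(np.ceil(alpha * np.log(N) ** 2 + 1))

for N in (500, 1000, 2000, 4312):
    print(N, [num_pseudo_points(N, a) for a in (0.5, 1, 2, 4)])
```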
## V Conclusions
This paper presents an environmental model learning framework using the streaming sparse Gaussian process (SSGP). The SSGP updates the predictive distribution, the hyperparameters, and the pseudo points which summarize the historical data, in a streaming manner. We have evaluated the performance of SSGP by comparing it against other baseline methods. Our evaluations show that the SSGP produces competitive prediction accuracy with dramatically reduced computational cost and memory demand, making it suitable for long-term autonomous environmental monitoring. We have also empirically investigated how the number of pseudo points influences the learning accuracy and efficiency. Our experimental results reveal that \(\lceil 2\log^{2}N+1\rceil\) pseudo points are sufficient for regression with \(N\) 2D inputs and the SE ARD kernel.
|
2307.09562 | Rethinking Intersection Over Union for Small Object Detection in
Few-Shot Regime | In Few-Shot Object Detection (FSOD), detecting small objects is extremely
difficult. The limited supervision cripples the localization capabilities of
the models and a few pixels shift can dramatically reduce the Intersection over
Union (IoU) between the ground truth and predicted boxes for small objects. To
this end, we propose Scale-adaptive Intersection over Union (SIoU), a novel box
similarity measure. SIoU changes with the objects' size, it is more lenient
with small object shifts. We conducted a user study and SIoU better aligns than
IoU with human judgment. Employing SIoU as an evaluation criterion helps to
build more user-oriented models. SIoU can also be used as a loss function to
prioritize small objects during training, outperforming existing loss
functions. SIoU improves small object detection in the non-few-shot regime, but
this setting is unrealistic in the industry as annotated detection datasets are
often too expensive to acquire. Hence, our experiments mainly focus on the
few-shot regime to demonstrate the superiority and versatility of SIoU loss.
SIoU improves significantly FSOD performance on small objects in both natural
(Pascal VOC and COCO datasets) and aerial images (DOTA and DIOR). In aerial
imagery, small objects are critical and SIoU loss achieves new state-of-the-art
FSOD on DOTA and DIOR. | Pierre Le Jeune, Anissa Mokraoui | 2023-07-17T07:26:58Z | http://arxiv.org/abs/2307.09562v1 | # Rethinking Intersection Over Union for Small Object Detection in Few-Shot Regime
###### Abstract
In Few-Shot Object Detection (FSOD), detecting small objects is extremely difficult. The limited supervision cripples the localization capabilities of the models and a few pixels shift can dramatically reduce the Intersection over Union (IoU) between the ground truth and predicted boxes for small objects. To this end, we propose Scale-adaptive Intersection over Union (SIoU), a novel box similarity measure. SIoU changes with the objects' size, it is more lenient with small object shifts. We conducted a user study and SIoU better aligns than IoU with human judgment. Employing SIoU as an evaluation criterion helps to build more user-oriented models. SIoU can also be used as a loss function to prioritize small objects during training, outperforming existing loss functions. SIoU improves small object detection in the non-few-shot regime, but this setting is unrealistic in the industry as annotated detection datasets are often too expensive to acquire. Hence, our experiments mainly focus on the few-shot regime to demonstrate the superiority and versatility of SIoU loss. SIoU improves significantly FSOD performance on small objects in both natural (Pascal VOC and COCO datasets) and aerial images (DOTA and DIOR). In aerial imagery, small objects are critical and SIoU loss achieves new state-of-the-art FSOD on DOTA and DIOR.
## 1 Introduction
Object detection is a fundamental task in industry and has applications in many domains such as medical imaging, agriculture, and autonomous driving. However, it is often impracticable or too expensive to build sufficiently large annotated datasets to train detection models. It is therefore crucial to improve data-efficient approaches and particularly Few-Shot Object Detection (FSOD) methods. Yet, the limited number of examples provides poor supervision and prevents the model from learning accurate localization, which is especially problematic for small objects. Besides, the difficulty of detecting small objects was already reported in many object detectors [4, 8, 23, 26, 27, 34, 18]. Numerous attempts partially solved this issue by proposing various improvements such as pyramidal features [21, 23, 46] or multiscale training [30, 31]. However, this difficulty greatly intensifies in the few-shot regime as shown by [24]. One of the reasons for the poor FSOD performance on small objects is the extensive use of Intersection over Union (IoU). Most detection (and thus FSOD) pipelines employ IoU as a regression loss [34, 43]; for example selection [23, 26, 27]; or as an evaluation criterion, but IoU is not an optimal choice when dealing with small objects.
IoU has a remarkable property: scale invariance. It means that scaling all coordinates of two bounding boxes by the same amount will not change their IoU. At first glance, this seems a desirable property: all objects will be treated identically no matter their size. In practice, however, it has a fundamental drawback: small boxes are prone to large IoU changes from only small position or size modifications. To clarify, let us consider a simple example. Two square boxes
Figure 1: **(Left)** Evolution of IoU, NWD [40], the proposed SIoU and \(\alpha\)-IoU [12] when a box is shifted from the ground truth box by \(\varepsilon_{\text{loc}}\) pixels for various box sizes \(\omega\in\{4,16,64,128\}\). **(Right)** Ratio between pixel localization error \(\varepsilon_{\text{loc}}\) and object size \(\omega\) for a trained detection model on DOTA dataset. Each point represents the localization error of one object in DOTA test set.
of width \(\omega\) are shifted diagonally by \(\varepsilon_{\text{loc}}\) pixels. In this setup, a 1-pixel shift leads to a larger decrease in IoU when boxes are small. This comes from the scale invariance property: IoU stays constant as long as the ratio \(\frac{\varepsilon_{\text{loc}}}{\omega}\) remains fixed. However, this ratio is not constant for trained detection models: it increases as objects get smaller (see Fig. 1 right), leading to lower IoU values for smaller objects. Hence, small objects are much more likely to fall under the IoU thresholds which decide if a box is a true or false detection, even though they may be satisfactory from a human perspective (see the user study in Sec. 4.3). Secs. 4.1 and 4.2 explore the resilience of various criteria to localization inaccuracies and confirm that IoU is not an optimal box similarity measure.
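As a quick numerical illustration of this example (here we read "shifted diagonally by \(\varepsilon_{\text{loc}}\)" as a shift of \(\varepsilon_{\text{loc}}\) pixels along each axis; the exact parameterization of Fig. 1 may differ), the IoU of the shifted square admits a closed form that depends only on \(\varepsilon_{\text{loc}}/\omega\):

```python
# IoU of two axis-aligned squares of width w, one shifted by e pixels along each axis:
# intersection = (w - e)^2, union = 2 w^2 - (w - e)^2.
def shifted_square_iou(w, e):
    inter = max(w - e, 0.0) ** 2
    return inter / (2 * w**2 - inter)

for w in (4, 16, 64, 128):
    print(w, round(shifted_square_iou(w, 1), 3))
# A 1-pixel shift drops the IoU to ~0.39 for w = 4, but leaves it at ~0.97 for w = 128.
```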
Only a handful of works question the adequation of IoU for object detection. Among those, [28] proposed a generalization of IoU when boxes do not overlap, [40] introduced a novel loss function to target small objects and [32] showed that human perception and IoU are not fully aligned. This lack of interest in new criterion design is explained by the great detection performance in the regular setting (_i.e_. natural images with sufficient annotations). In the few-shot regime, and when targets are small, the flaws of IoU become critical. Therefore, we revisit IoU to improve FSOD methods and focus on aerial images which mostly contain small objects. We propose Scale-adaptive Intersection over Union (SIoU), a novel criterion that can replace IoU for training and evaluating detection models. To demonstrate the superiority of the proposed SIoU, Sec. 4 compares it with various existing criteria. This section analyzes the criteria's distributions when exposed to randomly shifted boxes. To our knowledge, this is the first attempt to study empirically and theoretically the distributions of these criteria. The conclusions of this analysis are then compared with human perception through a user study which shows that SIoU aligns better with human appraisal than IoU (see Sec. 4.3). The comparison of the criteria also highlights that SIoU as a loss function can guide training towards small objects better than other criteria and in a more controlled fashion. SIoU loss can be tuned to improve the detection of small objects just as it can be tuned to align with human perception. Finally, these analyses are confirmed by extensive experiments on both aerial images (DOTA [38] and DIOR [19] datasets) and natural images (Pascal VOC [9] and COCO [22] datasets).
The main contributions of this paper are as follows:
* A novel scale-adaptive criterion called SIoU that can be tuned to detect objects of various sizes.
* An empirical and theoretical analysis of existing criteria that help to understand the required properties for designing regression loss functions and evaluation criteria.
* A user study that demonstrates the misalignment between IoU and human perception for the detection task.
* Extensive experiments to support the superiority of SIoU for detecting small objects in the few-shot regime.
## 2 Related Works
### Intersection over Union and its Variants
To begin, let us review the definition of existing criteria for set similarity. First, the IoU is defined as the intersection area of two sets divided by the area of their union:
\[\text{IoU}(A,B)=\frac{|A\cap B|}{|A\cup B|}, \tag{1}\]
where \(A\) and \(B\) are two sets. When \(A\) and \(B\) are rectangular boxes, IoU can be computed easily with simple operations on box coordinates. This explains why IoU is such a widespread criterion for object detection. It is used as a loss function (\(\mathcal{L}_{\text{reg}}=1-\text{IoU}\)) by several well established detection frameworks (_e.g_. [43, 34]). IoU is also involved in the process of example selection during training of most detection methods, _i.e_. all the ones inspired either by Faster R-CNN [27] or YOLO [26]. In these frameworks, regression loss is computed from the coordinates of proposed boxes and ground truth. Not all pairs of proposals and ground truth are kept for the computation. Only proposals with a sufficient IoU with a ground truth box are selected. Finally, IoU is also used at the heart of the evaluation process. A proposed box is considered a positive detection if it meets two conditions: 1) an IoU greater than a given threshold with a ground truth box, and 2) the same label as this ground truth.
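The "simple operations on box coordinates" mentioned above amount to only a few lines; a minimal sketch for boxes in \([x,y,w,h]\) format (with \((x,y)\) the box center, matching the convention of Sec. 3) is given below.

```python
# IoU of two axis-aligned boxes b = [x, y, w, h] with (x, y) the box center.
def iou(b1, b2):
    (x1, y1, w1, h1), (x2, y2, w2, h2) = b1, b2
    # Overlap of the projections on each axis, e.g. [x - w/2, x + w/2] along x.
    iw = max(0.0, min(x1 + w1 / 2, x2 + w2 / 2) - max(x1 - w1 / 2, x2 - w2 / 2))
    ih = max(0.0, min(y1 + h1 / 2, y2 + h2 / 2) - max(y1 - h1 / 2, y2 - h2 / 2))
    inter = iw * ih
    union = w1 * h1 + w2 * h2 - inter
    return inter / union if union > 0 else 0.0

print(iou([10, 10, 8, 8], [12, 12, 8, 8]))  # ~0.39 for two overlapping 8x8 boxes
```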
Several attempts were made to improve IoU but existing works mostly focus on the regression loss part, disregarding the other IoU uses in the detection task. First, [28] proposed a generalized version of IoU which yields negative values when boxes do not overlap:
\[\text{GIoU}(A,B)=\text{IoU}(A,B)-\frac{|C\backslash(A\cup B)|}{|C|}, \tag{2}\]
where \(C\) is the convex hull around \(A\) and \(B\). This criterion is employed as a loss function by several detection frameworks [3, 45, 3]. It is sometimes also combined with other regression loss as in [4, 20], which both combine it with an L1 regression on box coordinates. Combining IoU loss with other regression terms was also proposed by [47]. They introduce two losses Distance-IoU (DIoU) and Complete-IoU which respectively add an L2 regression term and an aspect ratio penalty to the IoU loss. Recently, \(\alpha\)-IoU [12] extends DIoU [47] by proposing a family of losses following the same structure as DIoU with the IoU term raised to the power \(\alpha\). Alternatively, Bounded IoU [35] computes an IoU upper bound between a proposal and a ground truth.
All previous IoU improvements were made to tackle the regression part of the models. However, IoU is involved in other parts of the framework including example selection,
Non-Maximal Suppression, and evaluation. A recent user study [32] indicates that IoU does not completely align with human perception: humans have strong positional and size preferences based on the conceptual information contained in the boxes. This suggests that IoU is not an optimal choice either for example selection or for evaluation, as it will lead to detections that do not satisfy human users.
### Object Detection
Object Detection is a problem that has been studied for decades. It witnessed rapid progress with the rise of deep learning methods [23, 26, 27]. Recent methods achieve very satisfactory results when provided with sufficient data. However, some challenges remain: most object detectors still struggle with small objects and when data is scarce.
**Small Object Detection** is a challenging task. There have been plenty of attempts to improve it based on pyramidal features [21, 23, 46], multiscale training [30, 31], data-augmentation [17] or super-resolution [1, 7, 10, 25, 29]. But only a few works tackle this problem by changing the loss function. Normalized Wasserstein Distance [40] (NWD) proposes an alternative to the IoU loss specifically designed for detecting small objects. It consists of computing the Wasserstein distance between two Gaussian distributions fitted on the two compared bounding boxes. Moreover, NWD is also used as an example selection criterion.
**Few-Shot Object Detection (FSOD)** is the task of detecting objects when provided with only a handful of examples per class. Many approaches were proposed in the literature to address this problem: metric learning [13, 16, 33, 42], simple fine-tuning [36, 37, 41, 5] and attention-based methods [6, 11, 15, 39, 44]. What all these methods have in common is that they learn generic knowledge from a set of _base classes_ with plenty of annotations and adapt to _novel classes_ from the few available examples. Recently, it has been shown [24] that FSOD is even more sensitive to small objects than regular object detection. Extracting information from small objects is hard and produces spurious features that do not condition the detection well. Some solutions are proposed to overcome this issue with augmentation and careful example cropping [24] or with dedicated attention mechanisms [14]. Nevertheless, this is not enough to solve the issue of small objects in FSOD.
## 3 Novel Scale-Adaptive Intersection over Union
Before introducing the proposed criterion, let us define two bounding boxes \(b_{1}=[x_{1},y_{1},w_{1},h_{1}]^{T}\) and \(b_{2}=[x_{2},y_{2},w_{2},h_{2}]^{T}\) (the prediction box and ground truth respectively), where \(x_{i}\) and \(y_{i}\) are the center coordinates of the box \(b_{i}\), while \(w_{i}\) and \(h_{i}\) denote its width and height respectively. In the following section, the adjectives small, medium, and large will be used extensively. They have a precise meaning for object detection, defined in COCO dataset [22]. The box \(b_{i}\) is _small_ if \(\sqrt{w_{i}h_{i}}\leq 32\) pixels, _medium_ if \(32<\sqrt{w_{i}h_{i}}\leq 96\), and _large_ if \(\sqrt{w_{i}h_{i}}>96\).
IoU is scale-invariant: if \(\text{IoU}(b_{1},b_{2})=u\), scaling all coordinates of both boxes by the same factor \(k\) will produce the same IoU, \(\text{IoU}(b_{1},b_{2})=\text{IoU}(kb_{1},kb_{2})=u\). However, detection models are not scale-invariant: they do not localize small and large objects equally well. Fig. 1 (right) clearly shows that the ratio between the localization error (\(\varepsilon_{\text{loc}}=\|b_{1}-b_{2}\|_{1}\)) and the object size (\(\omega=\sqrt{w_{2}h_{2}}\)) increases as the objects become smaller. This figure is made with a model trained on DOTA with all annotations. Each point represents the ratio \(\frac{\varepsilon_{\text{loc}}}{\omega}\) for one object in the test set. Hence, because of the scale-invariance property, IoU scores are lower for small objects. A way to alleviate this issue is to relax the invariance property of IoU so that it favors small objects more without penalizing large ones. To this end, we propose a novel criterion called Scale-adaptive Intersection over Union (SIoU):
\[\text{SIoU}(b_{1},b_{2}) =\text{IoU}(b_{1},b_{2})^{p} \tag{3}\] \[\text{with}\qquad p =1-\gamma\exp\left(-\frac{\sqrt{w_{1}h_{1}+w_{2}h_{2}}}{\sqrt{2} \kappa}\right),\]
\(p\) is a function of the object sizes; thus, the scores are rescaled according to the objects' size. \(\gamma\in]-\infty,1]\) and \(\kappa>0\) are two parameters that control how the rescaling occurs (hence, \(p\geq 0\)). \(\gamma\) governs the scaling for small objects, while \(\kappa\) controls how fast the behavior of regular IoU is recovered for large objects. Fig. 4 (left) in App. A shows the evolution of \(p\) with object size for various \(\gamma\) and \(\kappa\).
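A direct transcription of Eq. (3) is sketched below; the IoU helper is repeated so the snippet is self-contained, and the default \(\gamma=0.5\), \(\kappa=64\) mirror the values used in the shift analysis of Sec. 4, not a prescribed setting.

```python
import math

# Minimal sketch of Eq. (3): Scale-adaptive IoU for boxes in [x, y, w, h] center format.
def iou(b1, b2):
    x1a, y1a, x1b, y1b = b1[0] - b1[2]/2, b1[1] - b1[3]/2, b1[0] + b1[2]/2, b1[1] + b1[3]/2
    x2a, y2a, x2b, y2b = b2[0] - b2[2]/2, b2[1] - b2[3]/2, b2[0] + b2[2]/2, b2[1] + b2[3]/2
    iw = max(0.0, min(x1b, x2b) - max(x1a, x2a))
    ih = max(0.0, min(y1b, y2b) - max(y1a, y2a))
    inter = iw * ih
    union = b1[2] * b1[3] + b2[2] * b2[3] - inter
    return inter / union if union > 0 else 0.0

def siou(b1, b2, gamma=0.5, kappa=64.0):
    # p -> 1 for large boxes (recovering plain IoU) and deviates from 1 for small ones.
    p = 1.0 - gamma * math.exp(-math.sqrt(b1[2]*b1[3] + b2[2]*b2[3]) / (math.sqrt(2) * kappa))
    return iou(b1, b2) ** p

if __name__ == "__main__":
    small_gt, small_pred = [0, 0, 16, 16], [4, 0, 16, 16]
    large_gt, large_pred = [0, 0, 160, 160], [40, 0, 160, 160]
    # Same relative shift: with gamma > 0, SIoU is more lenient with the small pair.
    print(iou(small_pred, small_gt), siou(small_pred, small_gt))
    print(iou(large_pred, large_gt), siou(large_pred, large_gt))
```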
This new criterion follows the same structure as \(\alpha\)-IoU [12], but differs greatly as it sets different powers for different object sizes. SIoU provides a solution for small object detection while \(\alpha\)-IoU only aims to improve general detection. However, SIoU inherits a few properties from \(\alpha\)-IoU.
**Property 1 (SIoU Relaxation)**
_Let \(b_{1}\) and \(b_{2}\) be two bounding boxes and introduce \(\tau=\frac{w_{1}h_{1}+w_{2}h_{2}}{2}\) their average area. SIoU preserves the behavior of IoU in certain cases such as:_
* \(\text{IoU}(b_{1},b_{2})=0\Rightarrow\text{SIoU}(b_{1},b_{2})=\text{IoU}(b_{1},b _{2})=0\)__
* \(\text{IoU}(b_{1},b_{2})=1\Rightarrow\text{SIoU}(b_{1},b_{2})=\text{IoU}(b_{1},b _{2})=1\)__
* \(\lim\limits_{\tau\rightarrow+\infty}\text{SIoU}(b_{1},b_{2})=\text{IoU}(b_{1},b _{2})\)__
* \(\lim\limits_{\kappa\to 0}\text{SIoU}(b_{1},b_{2})=\text{IoU}(b_{1},b _{2})\)__
Property 1 shows that SIoU is sound: it equals IoU when boxes have no intersection and when they perfectly overlap. Therefore, the associated loss function (see Property 2) will take maximal values for boxes that do not overlap and minimum ones for identical boxes. In addition, SIoU behaves
similarly to IoU when dealing with large objects (_i.e_. when \(\tau\rightarrow\infty\)). When boxes are large, the power \(p\) that rescales the IoU is close to 1. Hence, this change of criterion only impacts small objects. However, when discussing the properties of SIoU, the limit between small/medium/large objects is relative to the choice of \(\kappa\). If \(\kappa\gg\sqrt{wh}\), even large objects will be rescaled. On the contrary, when \(\kappa\to 0\), all objects are treated as large and are not rescaled. In practice, \(\kappa\) and \(\gamma\) are chosen empirically, but Sec. 4 provides useful insights for the choice of these parameters.
**Property 2 (Loss and gradients reweighting)**
_Let \(\mathcal{L}_{\text{IoU}}(b_{1},b_{2})=1-\text{IoU}(b_{1},b_{2})\) and \(\mathcal{L}_{\text{SIoU}}(b_{1},b_{2})=1-\text{SIoU}(b_{1},b_{2})\) be the loss functions associated respectively with IoU and SIoU. Let us denote the ratio between the SIoU and IoU losses by \(\mathcal{W}_{\mathcal{L}}(b_{1},b_{2})=\frac{\mathcal{L}_{\text{SIoU}}(b_{1}, b_{2})}{\mathcal{L}_{\text{IoU}}(b_{1},b_{2})}\). Similarly, \(\mathcal{W}_{\nabla}(b_{1},b_{2})=\frac{|\nabla\mathcal{L}_{\text{SIoU}}(b_{1 },b_{2})|}{|\nabla\mathcal{L}_{\text{IoU}}(b_{1},b_{2})|}\) denotes the ratio of gradients generated from the SIoU and IoU losses:_
\[\mathcal{W}_{\mathcal{L}}(b_{1},b_{2}) =\frac{1-\text{IoU}(b_{1},b_{2})^{p}}{1-\text{IoU}(b_{1},b_{2})}, \tag{4}\] \[\mathcal{W}_{\nabla}(b_{1},b_{2}) =p\text{IoU}(b_{1},b_{2})^{p-1}, \tag{5}\]
\(\mathcal{W}_{\mathcal{L}}\) _and \(\mathcal{W}_{\nabla}\) are increasing (resp. decreasing) functions of IoU when \(p\geq 1\) (resp. \(p<1\)) which is satisfied when \(\gamma\leq 0\) (resp. \(\gamma>0\)). As the IoU goes to 1, \(\mathcal{W}_{\mathcal{L}}\) and \(\mathcal{W}_{\nabla}\) approaches \(p\):_
\[\lim_{\text{IoU}(b_{1},b_{2})\to 1}\mathcal{W}_{ \mathcal{L}}(b_{1},b_{2})=p, \tag{6}\] \[\lim_{\text{IoU}(b_{1},b_{2})\to 1}\mathcal{W}_{ \nabla}(b_{1},b_{2})=p. \tag{7}\]
We employ the same tools as in [12] to analyze how SIoU affects the losses and associated gradients. We show in Property 2 that their results hold for a non-constant power \(p\) as well. From this, it can be observed that when IoU is close to 1, losses and gradients are both rescaled by \(p\). Hence, the gradients coming from objects of different sizes will be rescaled differently. The setting of \(\gamma\) and \(\kappa\) allows focusing the training on specific object sizes. Experimental results are provided in Sec. 5 to support these findings. Proofs for Properties 1 and 2 are available in App. B.
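The reweighting factors of Property 2 can be illustrated numerically; the short sketch below evaluates Eqs. (4) and (5) for a small and a large box at the same IoU, using the \(\gamma=-3\), \(\kappa=16\) training setting reported later for DOTA.

```python
import math

# Numerical illustration of Property 2: loss and gradient reweighting factors.
def power(area_sum, gamma=-3.0, kappa=16.0):
    # p from Eq. (3); area_sum = w1*h1 + w2*h2.
    return 1.0 - gamma * math.exp(-math.sqrt(area_sum) / (math.sqrt(2) * kappa))

def weights(iou_value, p):
    w_loss = (1.0 - iou_value ** p) / (1.0 - iou_value)   # Eq. (4)
    w_grad = p * iou_value ** (p - 1.0)                    # Eq. (5)
    return w_loss, w_grad

if __name__ == "__main__":
    iou_value = 0.7
    for label, side in [("small (16 px)", 16), ("large (128 px)", 128)]:
        p = power(2 * side * side)
        print(label, "p=%.2f" % p, "W_L=%.2f, W_grad=%.2f" % weights(iou_value, p))
    # In this example, gamma < 0 gives p > 1 and both weights grow as boxes get smaller,
    # so small objects receive larger losses and gradients.
```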
However, _order preservation_ is not guaranteed when the power value changes with the size of the objects. This property ensures that the ordering given by IoU is preserved by the novel criterion, _e.g_. IoU\((b_{1},b_{2})<\text{IoU}(b_{1},b_{3})\Rightarrow\alpha\text{-IoU}(b_{1},b_{2})< \alpha\text{-IoU}(b_{1},b_{3})\). \(\alpha\text{-IoU}\) preserves the order of IoU, but SIoU does not. We show in App. B that, even though this property is not always satisfied, a large proportion of boxes meet the conditions for the order to hold.
#### Extensions and generalization
Finally, SIoU can be extended just as IoU was with GIoU or DIoU. We provide here an extension following GIoU, as it appears especially well-suited for small object detection. When detecting small targets, it is easier for a model to completely miss the object, producing an IoU of 0 no matter how far away the predicted box is. On the contrary, GIoU yields negative values for non-intersecting boxes. This provides more relevant guidance during the early phase of training, when the model outputs poorly located boxes. Therefore, we extend SIoU by raising GIoU to the same power \(p\) as in Eq. (3):
\[\text{GSIoU}(b_{1},b_{2})=\begin{cases}\text{g}(b_{1},b_{2})^{p}&\text{if }\text{g}(b_{1},b_{2})\geq 0\\ -|\text{g}(b_{1},b_{2})|^{p}&\text{if }\text{g}(b_{1},b_{2})<0\end{cases}, \tag{8}\]
where \(g(b_{1},b_{2})=\text{GIoU}(b_{1},b_{2})\).
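A sketch of Eq. (8) follows, with GIoU computed from the smallest enclosing box as before; the helpers are repeated so the snippet runs on its own and the parameter values are only examples.

```python
import math

# Minimal sketch of Eq. (8): GSIoU, i.e. GIoU raised to the scale-adaptive power p,
# with the sign preserved for non-overlapping boxes.
def _corners(b):
    return b[0] - b[2]/2, b[1] - b[3]/2, b[0] + b[2]/2, b[1] + b[3]/2

def giou(b1, b2):
    x1a, y1a, x1b, y1b = _corners(b1)
    x2a, y2a, x2b, y2b = _corners(b2)
    iw = max(0.0, min(x1b, x2b) - max(x1a, x2a))
    ih = max(0.0, min(y1b, y2b) - max(y1a, y2a))
    inter = iw * ih
    union = b1[2]*b1[3] + b2[2]*b2[3] - inter
    c_area = (max(x1b, x2b) - min(x1a, x2a)) * (max(y1b, y2b) - min(y1a, y2a))
    return inter / union - (c_area - union) / c_area

def gsiou(b1, b2, gamma=-3.0, kappa=16.0):
    p = 1.0 - gamma * math.exp(-math.sqrt(b1[2]*b1[3] + b2[2]*b2[3]) / (math.sqrt(2) * kappa))
    g = giou(b1, b2)
    return g ** p if g >= 0 else -abs(g) ** p

if __name__ == "__main__":
    # The associated loss is 1 - gsiou, used as a drop-in replacement for the GIoU loss.
    print(gsiou([0, 0, 16, 16], [4, 0, 16, 16]))   # overlapping small boxes
    print(gsiou([0, 0, 16, 16], [40, 0, 16, 16]))  # disjoint boxes -> negative value
```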
## 4 Scale-Adaptive Criteria Analysis
This section analyzes both empirically and theoretically the behaviors of IoU, GIoU [28], \(\alpha\text{-IoU}\)[12], NWD [40], SIoU and GSIoU. We investigate the desirable properties of such criteria for model training and performance evaluation.
### Response Analysis to Box Shifting
As mentioned in Sec. 3, IoU drops dramatically when the localization error increases for small objects. Shifting a box a few pixels off the ground truth can result in a large decrease in IoU, without diminishing the quality of the detection from a human perspective. This is depicted in Fig. 1 (left), where plain lines represent the evolution of IoU for various object sizes. These curves are generated by diagonally shifting a box away from the ground truth. Boxes are squares, but similar curves would be observed otherwise. In this plot, boxes have the same size; therefore, when there is no shift between them (\(\varepsilon_{\text{loc}}=0\)), IoU equals 1. However, if the sizes of the boxes differ by a ratio \(r\), IoU would peak at \(1/r^{2}\). Other line types represent other criteria. SIoU decreases more slowly than IoU when \(\varepsilon_{\text{loc}}\) increases; this is especially true when boxes are small. This holds because \(\gamma=0.5\); if it were negative, SIoU would adopt the opposite behavior. In addition, the gap between IoU and SIoU is even larger when objects are small. Only NWD shares this property, but it only appears when boxes have different sizes (all lines coincide for NWD). Hence, SIoU is the only criterion that allows controlling its decreasing rate, _i.e_. how much SIoU is lost for a 1-pixel shift. As GIoU and GSIoU values range in \([-1,1]\), they were not included in Fig. 1, but the same analysis holds for them as well (see App. C).
### Resilience Analysis to Detector Inaccuracy
Knowing how a criterion responds to shifts and size variations is important to understand what makes a sensible box similarity measure. Pushing beyond the shift analysis, we
study empirically and theoretically the criteria's distributions when exposed to detector inaccuracies, _i.e_. randomly shifted boxes. This setting mimics the inaccuracy of the model either during training or at test time.
#### 4.2.1 Empirical Protocol
To simplify, let us suppose that all boxes are squares of the same size \(\omega\) and can be shifted only horizontally. Similar results are observed by relaxing these constraints, see App. C. A box is then entirely defined by its position \(x\) and its width \(\omega\). If a detector is not perfect, it will produce bounding boxes slightly shifted horizontally from the ground truth. To model the detector's inaccuracy, we suppose that the box position is randomly sampled from a centered Gaussian distribution: \(X\sim\mathcal{N}(0,\sigma^{2})\) where \(\sigma\) controls how inaccurate the model is. We are interested in the distribution of \(\mathcal{C}\in\{\text{IoU},\text{GIoU},\text{SIoU},\text{GSIoU},\alpha\text{ -IoU},\text{NWD}\}\) and how it changes with \(\omega\). To this end, let \(Z=\mathcal{C}(X)\). More precisely, we are interested in the probability density function (pdf) of \(Z\) and its two first moments (which exist because \(\mathcal{C}\) is continuous and bounded).
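This protocol can be reproduced with a short Monte Carlo simulation; the sketch below assumes the same same-size, horizontally shifted squares and reports the empirical mean and standard deviation of IoU and SIoU for a few box sizes (sample count and parameter values are arbitrary).

```python
import math
import numpy as np

# Monte Carlo sketch of Sec. 4.2: distribution of IoU and SIoU under Gaussian shifts.
def iou_shifted(shift, omega):
    # IoU of two omega x omega squares shifted horizontally by |shift|.
    overlap = max(0.0, omega - abs(shift))
    return (overlap * omega) / (2 * omega * omega - overlap * omega)

def siou_shifted(shift, omega, gamma=0.5, kappa=64.0):
    p = 1.0 - gamma * math.exp(-math.sqrt(2 * omega * omega) / (math.sqrt(2) * kappa))
    return iou_shifted(shift, omega) ** p

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sigma, n_samples = 16.0, 100_000
    shifts = rng.normal(0.0, sigma, n_samples)
    for omega in (16, 32, 64, 128):
        iou_vals = np.array([iou_shifted(s, omega) for s in shifts])
        siou_vals = np.array([siou_shifted(s, omega) for s in shifts])
        print(f"omega={omega:4d}  IoU: {iou_vals.mean():.3f}+/-{iou_vals.std():.3f}"
              f"  SIoU: {siou_vals.mean():.3f}+/-{siou_vals.std():.3f}")
```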
Fig. 2 gathers the results of this analysis. It shows the pdf of each criterion for various box sizes (left) along with the evolution of the expectation and standard deviation of \(Z\) against \(\omega\) (middle and right). From this, it can be noted that the size of the boxes has a large influence on the distributions of all criteria. The expected values of all criteria are monotonically increasing with object size. In particular, small objects have lower expected IoU values than larger ones. This is consistent with the initial assessment from Fig. 1 (right) and it validates the choice of a constant \(\sigma\) for this study (although App. C discusses this assumption).
When building detection models, we hope to detect objects of all sizes equally well; this means having a constant expected IoU, no matter the objects' size. This would require the localization error to be an affine function of \(\omega\). Of course, the localization error of the detector is likely to depend on \(\omega\). However, it cannot be an affine function; otherwise, small objects would be perfectly detected, which is not observed (see Fig. 1, right). As SIoU has larger expected values than IoU for small objects, it can compensate for their larger localization errors. The setting of \(\gamma\) and \(\kappa\) allows controlling how much small objects are favored (see Fig. 4 in App. A). NWD is not included in these plots as its expected value and variance are constant when dealing with same-size boxes.
**Influence Analysis on the Performance Evaluation**
If the expected value of a criterion is too small, it is likely that the boxes will be considered negative detections during evaluation, which reduces the measured performance. Therefore, a criterion with larger expected values for small objects would better reflect the true performance of a detector. One might think that this would be equivalent to using scale-adaptive IoU thresholds during evaluation, but this is not completely true, as the variances of the criteria also differ.
Figure 2: Analysis of the distribution of IoU, SIoU, GIoU, GSIoU and \(\alpha\)-IoU when computed on inaccurately positioned boxes. This is done by observing the probability density functions for various \(\omega\) values **(left)**, the expectation **(middle)** and standard deviation **(right)** for all criteria. For SIoU and GSIoU, we fixed \(\gamma=0.5\) and \(\kappa=64\); for \(\alpha\)-IoU, \(\alpha=3\) (as recommended in the original paper [12]). The inaccuracy of the detector is set to \(\sigma=16\). Note that the empirical pdfs were smoothed using a Kernel Density Estimator; this particularly affects IoU, SIoU, and \(\alpha\)-IoU, for which the actual pdf is defined only on \([0,1]\). For the sake of visualization, GIoU and GSIoU were rescaled between 0 and 1 for the expectation and standard deviation plots.
Having an accurate criterion (_i.e_. with low variance) is crucial for evaluation. Let us take a detector that produces well-localized boxes on average, _i.e_. on average the criterion computed between the boxes and their corresponding ground truths is above a certain threshold. As the detector is not perfect, it will randomly produce boxes slightly better or slightly worse than the average. If the criterion has a high variance, it is more likely that poor boxes get scores below the criterion threshold and will therefore be considered negative detections. This reduces the performance of the detector even though, on average, it meets the localization requirements. In addition, a criterion with a higher variance is less reliable and produces more inconsistent evaluations of a model. The fact that the IoU variance is high for small objects partly explains why detectors have much lower performance on these objects. Hence, SIoU seems better suited for evaluation. Of course, using this criterion for evaluation will assign higher scores to less precise localizations of small objects. However, this aligns better with human perception, as demonstrated in Sec. 4.3. Employing SIoU as a metric also allows tailoring the metric to the needs of a specific application.
#### Influence Analysis on Training
All criteria discussed above are employed as regression losses in the literature. The loss associated with each criterion \(\mathcal{C}\) is \(\mathcal{L}_{\mathcal{C}}(b_{1},b_{2})=1-\mathcal{C}(b_{1},b_{2})\). Therefore, the expected value of the criterion determines the expected value of the loss and thus the magnitude of the gradients. Large values of the criterion give low values of the loss. Now, as the expected values of the criteria change with the objects' size, the expected values of the losses also change. Small objects generate greater loss values than larger ones on average. However, this is not enough, as performance on small objects remains poor when training with IoU. To achieve better detection, training must focus even more on small objects. One way to do this is to set larger loss values for small objects. That way, the equilibrium is shifted toward smaller objects and gradients will point to regions where the loss of small objects is lower. As shown in Fig. 5 (App. A), with the right parameters, SIoU can do that. It assigns lower criterion values (and thus larger losses) to small objects while keeping similar values for large ones. The contrast between small and large objects is accentuated and optimization naturally focuses on smaller objects. SIoU's parameters control which object size gets more emphasis. This is closely linked to Property 2, which states that employing SIoU (compared to IoU) reweights the loss and the gradient by \(p\). If \(\gamma<0\), \(p\) decreases with the size of the objects and thus the optimization focuses on small objects. This also explains why generalizations of existing criteria (_i.e_. with negative values for non-overlapping boxes) often outperform their vanilla versions. Taking IoU and GIoU as an example, the gap between their expected values for small and large objects is greater with GIoU. This nudges the optimization towards small objects.
#### 4.2.2 Theoretical study of GIoU
Criteria pdfs and first moments can also be derived theoretically. We provide such results for GIoU in Proposition 1.
**Proposition 1** (GIoU's distribution): _Let \(b_{1}=(0,y_{1},w_{1},h_{1})\) be a bounding box horizontally centered and \(b_{2}=(X,y_{2},w_{2},h_{2})\) another bounding box randomly positioned, with \(X\sim\mathcal{N}(0,\sigma^{2})\) and \(\sigma\in\mathbb{R}^{*}_{+}\). Let's suppose that the boxes are identical squares, shifted only horizontally (i.e. \(w_{1}=w_{2}=h_{1}=h_{2}\) and \(y_{1}=y_{2}\)). Let \(Z=\mathcal{C}(X)\), where \(\mathcal{C}\) is the generalized intersection over union. The probability density function of \(Z\) is given by:_
\[d_{Z}(z)=\frac{2\omega}{(1+z)^{2}\sqrt{2\pi}\sigma}\exp\left(- \frac{1}{2}\left[\frac{\omega(1-z)}{\sigma(1+z)}\right]^{2}\right). \tag{9}\]
_The two first moments of \(Z\) exist and are given by:_
\[\mathbb{E}[Z] =\frac{2}{\pi^{3/2}}G_{3,2}^{2,3}\left(2a^{2}\begin{vmatrix}0& \frac{1}{2}&\frac{1}{2}\\ \frac{1}{2}&0\end{vmatrix}\right), \tag{10}\] \[\mathbb{E}[Z^{2}] =1-\frac{8a}{\sqrt{2\pi}}+\frac{16a^{2}}{\pi^{3/2}}G_{3,2}^{2,3} \left(2a^{2}\begin{vmatrix}-1&\frac{1}{2}&-\frac{1}{2}\\ \frac{1}{2}&0\end{vmatrix}\right), \tag{11}\]
_where \(G\) is the Meijer G-function [2] and \(a=\frac{\sigma}{\omega}\)._
The proof of this proposition and derivations for other criteria are available in App. C. The theoretical expressions completely agree with empirical results, which confirms the soundness of our simulations.
### SIoU Alignment with Human Perception
As discussed in Sec. 4.2.1, having an accurate criterion, _i.e_. one with low variance, is crucial for evaluation. However, such a criterion must also align with human perception. Most image processing models are intended to assist human users. Therefore, to maximize the usefulness of such models, the evaluation process should align as closely as possible with human perception. To assess the agreement between the criteria and human perception, we conducted a user study in which participants had to rate on a 1 to 5 scale (_i.e_. from _very poor_ to _very good_) how well a bounding box detects an object. Specifically, an object is designated by a green ground truth box and a red box is randomly sampled around the object (_i.e_. with random IoU with the ground truth). Then, the participants rate how well the red box localizes the object within the green one. The study gathered 75 different participants and more than 3000 individual answers. We present here the main conclusion of this study. Detailed results and protocol are available in App. D.
Human perception does not fully align with IoU. People tend to be more lenient than IoU towards small objects. Specifically, comparing a small and a large box with the same IoU with respect to their own ground truth, people
will rate the small one better. This suggests that IoU is too strict for small objects in comparison with human perception. From a human perspective, precise localization seems less important for small objects. Fig. 3 represents the relative gap of IoU (left) and SIoU (right) values for each object size and rating. The relative differences \(c_{s,r}\) are computed against the average IoU (or SIoU) value per rating:
\[c_{s,r}=\frac{\mathcal{C}_{s,r}-\overline{\mathcal{C}}_{r}}{\overline{\mathcal{C}}_{r}}, \tag{12}\]
where \(\mathcal{C}_{s,r}\) is the average criterion value (\(\mathcal{C}\in\{\text{IoU},\text{SIoU}\}\)) for objects of size \(s\) and rating \(r\), and \(\overline{\mathcal{C}}_{r}\) is the average of \(\mathcal{C}_{s,r}\) over object sizes for rating \(r\). IoU values for small objects (in orange) are lower than for large objects (in red) for all ratings \(r\). In other words, to receive a given rating \(r\) from a human, a small box needs less overlap with the ground truth (in terms of IoU) than a large one. SIoU compensates for this trend (see Fig. 3 (right)): SIoU differences between objects of different sizes but the same rating are smaller than for IoU. This means that SIoU processes objects independently of their size. Similar charts with absolute IoU and SIoU values can be found in App. D (Fig. 10), which also includes charts for \(\alpha\)-IoU, NWD and other SIoU parameters. Here, \(\gamma=0.2\) and \(\kappa=64\) for SIoU. Choosing a higher \(\gamma\) value would reverse the trend and produce a criterion even more lenient than humans for small objects; it would also further decrease SIoU's variance. However, this setting has been chosen to maximize the alignment with human perception. SIoU correlates better with human ratings than the other criteria. As the rating is an ordered categorical variable, we choose the Kendall rank correlation to make the comparison. The correlation between the human rating \(r\) and each criterion can be found in Tab. 1 (correlations between criteria are available in Tab. 9 in App. D). SIoU with \(\gamma=0.2\) and \(\kappa=64\) aligns best with human perception and has a low variance. This showcases the superiority of SIoU over existing criteria. It should be preferred over IoU to assess the performance of models on all visual tasks that commonly employ IoU within their evaluation process. This supports recent findings that show a misalignment between IoU and human preference [32].
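For reference, the rank correlation reported in Tab. 1 can be obtained with `scipy.stats.kendalltau`; the arrays below are synthetic placeholders standing in for the study's per-answer ratings and criterion scores.

```python
import numpy as np
from scipy.stats import kendalltau

# Sketch: Kendall's tau between human ratings (ordinal, 1-5) and a box criterion.
# The data below is synthetic and only illustrates the computation, not the study's results.
rng = np.random.default_rng(0)
n = 200
ratings = rng.integers(1, 6, size=n)                      # placeholder human ratings
criterion = 0.2 * ratings + rng.normal(0, 0.15, size=n)   # placeholder IoU-like scores
criterion = np.clip(criterion, 0.0, 1.0)

tau, p_value = kendalltau(ratings, criterion)
print(f"Kendall's tau = {tau:.3f} (p = {p_value:.1e})")
```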
## 5 Experimental Results
To support our analysis from Sec. 4, we conduct various experiments, mainly on aerial images with the DOTA [38] and DIOR [19] datasets. To showcase the versatility of SIoU, we also experiment with natural images on Pascal VOC [9] and COCO [22]. Detecting small objects in the few-shot regime is extremely challenging and could have many more applications than regular detection pipelines. Therefore, most of our experiments focus on the few-shot setting. However, we also report results in regular object detection to display the potential of SIoU. For the few-shot experiments, we choose a recently proposed FSOD method: Cross-Scale Query Support Alignment (XQSA) [14]. A comparison with other methods is available in App. E.2. Since the FSOD training is relatively complex, we defer the implementation details to App. E.1. In the few-shot literature, it is common to evaluate models separately on base and novel classes; however, novel-class performance is what matters the most, as it assesses the generalization capabilities of the models. Performance is computed using mean Average Precision (mAP) with a 0.5 IoU threshold.
**Comparison with Existing Criteria**
To begin, we compare the few-shot performance on DOTA with various loss functions designed from the criteria discussed in Sec. 4. The results of these experiments are available in Tab. 2. The criteria are divided into two groups: generalized criteria (which include NWD) and vanilla criteria. As discussed in Sec. 4.2.1, the generalized versions of the criteria outperform their original counterparts and therefore should be compared separately. Scale-adaptive criteria (SIoU and GSIoU) largely outperform the other criteria on novel classes and especially on small objects. For SIoU and GSIoU, we choose \(\gamma=-3\) and \(\kappa=16\) according to a series of experiments conducted on DOTA to determine their optimal values (see App. A). It is important to point out the relatively good performance of NWD despite not satisfying all the desirable properties highlighted in Sec. 4.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & **IoU** & **SIoU** & \(\alpha\)**-IoU** & **NWD** \\ \hline \(r\) & 0.674 & **0.701** & 0.674 & 0.550 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Kendall’s \(\tau\) correlation between various criteria and human rating \(r\). For SIoU, \(\gamma=0.2\) and \(\kappa=64\), for \(\alpha\)-IoU, \(\alpha=3\).
Figure 3: Average IoU (left) and SIoU (right) scores for different object sizes and human ratings \(r\in\{1,2,3,4,5\}\). Values are reported as the relative gap with the average value per rating.
**FSOD on Aerial and Natural Images**
As the previous set of experiments was only carried out on DOTA, we showcase the versatility of GSIoU on three other datasets: DIOR, Pascal VOC and COCO. As it is clear that generalized criteria outperform other methods, the comparison here is only done between GIoU and GSIoU. For DOTA and DIOR, current state-of-the-art is achieved by XQSA [14], which employs GIoU as regression loss. Therefore, we replace it with GSIoU and achieve significantly better performance on these two datasets (see Tab. 3).
For Pascal VOC and COCO, similar gains are observed, but this requires a different tuning of SIoU. \(\gamma=-3\) and \(\kappa=16\) produce mixed results on these datasets, and \(\gamma=-1\) and \(\kappa=64\) is a more sensible choice. This was predictable, as the objects in Pascal VOC and COCO are substantially larger than in DOTA and DIOR. This can also explain the slightly smaller gains on DIOR compared to DOTA. Finding optimal values of \(\gamma\) and \(\kappa\) could yield slightly better performance on DIOR. The tuning of SIoU is quite straightforward, as lower values of \(\gamma\) and \(\kappa\) skew the training towards smaller objects. The right balance depends on the proportion of small, medium and large objects in the datasets. With natural images, which contain fewer small objects, the training balance does not need to be shifted as much as for aerial images. In addition to these results, we also conducted several experiments with various FSOD methods to demonstrate the plug-and-play nature of GSIoU. These results are available in App. E and show consistent improvements when replacing GIoU with GSIoU.
**Regular Object Detection on DOTA and DIOR**
Of course, GSIoU is not only beneficial for FSOD; it also improves the performance of regular object detection methods. Tab. 4 compares the performance of FCOS [34] trained on DOTA and DIOR with GIoU and GSIoU. The same pattern is visible: we get better performance with GSIoU. However, the gain for small objects is not as large as for FSOD. Nevertheless, it suggests that other tasks relying on IoU could also benefit from GSIoU.
**Discussions and Limitations**
As mentioned in Sec. 4.2.1, SIoU is a better choice for performance analysis. However, as IoU is almost the only evaluation choice in the literature, we must use it as well for a fair comparison with existing works. Nonetheless, we provide results from the previous tables using SIoU as the evaluation criterion in App. F. They agree with the IoU evaluation and strengthen the conclusions of our experiments. While these results are promising, we must emphasize a few limitations of SIoU. First, SIoU requires slight tuning to get the best performance, but that tuning is quite straightforward and mostly depends on the size distribution in the target images. SIoU allows being more lenient with small objects for evaluation (\(\gamma\geq 0\)), and stricter for training (\(\gamma\leq 0\)) to prioritize the detection of small targets. Although they are not always part of the detection pipeline, it would also be relevant to investigate replacing IoU with SIoU for example selection and Non-Maximal Suppression (in our case, the example selection of FCOS does not rely on IoU). Finally, even though SIoU aligns better than IoU with human perception, it does not match it completely: IoU and SIoU do not account for object content, whereas humans heavily do, as highlighted by [32].
## 6 Conclusion
SIoU is a more suitable alternative than IoU for both object detection model evaluation and training, especially in the few-shot regime. It aligns better with human perception, and our theoretical analysis confirms sounder properties for evaluation. To our knowledge, this is the first statistical analysis of bounding-box similarity measures, and we hope it will lead to more reflection on object detection criteria. As a loss function, SIoU can incline the training toward small objects and therefore greatly improve FSOD performance. Its flexibility makes it easy to focus detection on specific target sizes and to adapt to various tasks. Extensive experiments on aerial and natural images demonstrate the superiority of SIoU on small objects, without performance loss on medium and large objects. On aerial images, which contain a lot of small objects, SIoU even achieves state-of-the-art FSOD performance.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{6}{c}{**DOTA**} & \multicolumn{6}{c}{**DIOR**} \\
**FCOS** & **All** & **S** & **M** & **L** & **All** & **S** & **M** & **L** \\ \hline
**w/ GIoU** & \(34.9\) & \(17.4\) & \(36.6\) & \(43.3\) & \(48.1\) & \(10.1\) & \(40.3\) & \(63.2\) \\
**w/ GSIoU** & **36.8** & **17.5** & **40.4** & **45.2** & **49.2** & **11.0** & **41.2** & **66.1** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Regular Object Detection performance on DOTA and DIOR datasets with GIoU and GSIoU (\(\gamma=-3\) and \(\kappa=16\)) losses. mAP is computed with several IoU thresholds (0.5 to 0.95) as it is commonly done in regular detection.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{6}{c}{**Base classes**} & \multicolumn{6}{c}{**Novel Classes**} \\
**Loss** & **All** & **S** & **M** & **L** & **All** & **S** & **M** & **L** \\ \hline
**IoU** & 50.67 & **25.83** & 57.49 & 68.24 & 32.41 & 10.06 & 47.87 & 67.09 \\ \(\alpha\)-IoU & 46.72 & 13.24 & 55.21 & **69.94** & 33.95 & 12.58 & 46.58 & **74.50** \\
**SIoU** & **53.62** & 24.07 & **61.91** & 67.34 & **39.05** & **16.59** & **54.42** & **74.49** \\ \hline
**NWD** & 50.79 & 19.19 & 58.90 & 67.90 & 41.65 & 28.26 & 50.16 & 65.06 \\
**GIoU** & 52.41 & **26.94** & 61.17 & 63.00 & 41.03 & 24.01 & **52.13** & 69.78 \\
**GSIoU** & **52.91** & 22.14 & **61.19** & **66.02** & **45.88** & **34.83** & 51.26 & **70.78** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Few-shot performance comparison between several criteria: IoU, \(\alpha\)-IoU, SIoU, NWD, GIoU, and GSIoU trained on DOTA. mAP is reported with a 0.5 IoU threshold for small (S), medium (M), large (L), and all objects.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{6}{c}{**Base classes**} & \multicolumn{6}{c}{**Novel Classes**} \\
**XQSA** & **All** & **S** & **M** & **L** & **All** & **S** & **M** & **L** \\ \hline
**DOTA** & w/ GIoU & 52.41 & **26.94** & 61.17 & 63.00 & 41.03 & 24.01 & **52.13** & 69.78 \\ & w/ GSIoU & **52.91** & 22.14 & **61.19** & **66.02** & **45.88** & **34.83** & 51.26 & **70.78** \\ \hline
**DIOR** & w/ GIoU & 58.90 & 10.38 & 40.76 & 80.44 & 47.93 & 9.85 & 47.61 & 68.40 \\ & w/ GSIoU & **60.29** & **11.28** & **43.24** & **81.63** & **52.85** & **13.78** & **53.73** & **71.22** \\ \hline
**Pascal VOC** & w/ GIoU & 51.09 & **13.93** & **40.26** & 62.01 & 48.42 & 18.44 & 36.06 & 59.99 \\ & w/ GSIoU & **54.47** & 13.88 & 40.13 & **66.62** & **55.16** & **22.94** & **30.24** & **67.40** \\ \hline
**COCO** & w/ GIoU & 19.15 & **8.72** & 22.50 & 30.59 & 26.25 & 11.96 & 23.95 & 38.60 \\ & w/ GSIoU & **19.57** & 8.41 & **23.02** & **31.07** & **27.11** & **12.81** & **26.02** & **39.20** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Few-shot performance on four datasets: DOTA, DIOR, Pascal VOC and COCO. GIoU and GSIoU losses are compared. mAP is reported with a 0.5 IoU threshold for all object sizes.
2307.10033 | Non-Parametric Self-Identification and Model Predictive Control of
Dexterous In-Hand Manipulation | Building hand-object models for dexterous in-hand manipulation remains a
crucial and open problem. Major challenges include the difficulty of obtaining
the geometric and dynamical models of the hand, object, and time-varying
contacts, as well as the inevitable physical and perception uncertainties.
Instead of building accurate models to map between the actuation inputs and the
object motions, this work proposes to enable the hand-object systems to
continuously approximate their local models via a self-identification process
where an underlying manipulation model is estimated through a small number of
exploratory actions and non-parametric learning. With a very small number of
data points, as opposed to most data-driven methods, our system self-identifies
the underlying manipulation models online through exploratory actions and
non-parametric learning. By integrating the self-identified hand-object model
into a model predictive control framework, the proposed system closes the
control loop to provide high accuracy in-hand manipulation. Furthermore, the
proposed self-identification is able to adaptively trigger online updates
through additional exploratory actions, as soon as the self-identified local
models render large discrepancies against the observed manipulation outcomes.
We implemented the proposed approach on a sensorless underactuated Yale Model O
hand with a single external camera to observe the object's motion. With
extensive experiments, we show that the proposed self-identification approach
can enable accurate and robust dexterous manipulation without requiring an
accurate system model nor a large amount of data for offline training. | Podshara Chanrungmaneekul, Kejia Ren, Joshua T. Grace, Aaron M. Dollar, Kaiyu Hang | 2023-07-14T02:19:07Z | http://arxiv.org/abs/2307.10033v1 | # Non-Parametric Self-Identification and Model Predictive Control of Dexterous In-Hand Manipulation
###### Abstract
Building hand-object models for dexterous in-hand manipulation remains a crucial and open problem. Major challenges include the difficulty of obtaining the geometric and dynamical models of the hand, object, and time-varying contacts, as well as the inevitable physical and perception uncertainties. Instead of building accurate models to map between the actuation inputs and the object motions, this work proposes to enable the hand-object systems to continuously approximate their local models via a self-identification process where an underlying manipulation model is estimated through a small number of exploratory actions and non-parametric learning. With a very small number of data points, as opposed to most data-driven methods, our system self-identifies the underlying manipulation models online through exploratory actions and non-parametric learning. By integrating the self-identified hand-object model into a model predictive control framework, the proposed system closes the control loop to provide high accuracy in-hand manipulation. Furthermore, the proposed self-identification is able to adaptively trigger online updates through additional exploratory actions, as soon as the self-identified local models render large discrepancies against the observed manipulation outcomes. We implemented the proposed approach on a sensorless underactuated Yale Model O hand with a single external camera to observe the object's motion. With extensive experiments, we show that the proposed self-identification approach can enable accurate and robust dexterous manipulation without requiring an accurate system model nor a large amount of data for offline training.
## I Introduction
Dexterous in-hand manipulation is a system-level problem consisting of an array of sub-problems, ranging from the modeling of contacts and hand-object dynamics [1, 2] to the perception, planning, and control of task-oriented hand-object coordination [3, 4, 5]. At the core of all these problems, almost all existing approaches are challenged by the gap between the prior knowledge and online feedback they require and the limited information actually available to the system [6]. For example, a very common assumption in contact-based manipulation systems is that the object model is perfectly known. In practice, however, this is rarely possible even if many sensing modalities are available. As such, in-hand manipulation systems are often limited either in their capability of handling complex dynamics or in their generalizability across similar variations of task setups. Although learning-based approaches have been extensively investigated and have shown the capability of acquiring complex manipulation skills [7, 8], the data, which is the enabling factor for such systems, is also a major limitation in more general and dynamic tasks.
To bridge the aforementioned gaps while not shifting more burden to the sensing or data collection sides, we previously proposed the idea of _self-identification_[9]. For hand-object systems modeled with a number of known and missing parameters, the missing ones were iteratively self-identified by the hand-object system through exploratory actions without adding any additional sensors. The self-identified system then showed great performance in precise dexterous manipulation while tracking the real-time changes of the missing parameters. This approach was inspired by human manipulation, where we do not have accurate models of everything _a priori_. Rather, humans often use a strategy that shifts the system paradigm from "sense, plan, and act" to "act, sense, and plan". However, [9] still modeled the hand-object system analytically and required a number of parameters to be self-identified and tracked in real-time, rendering the approach not easily generalizable and computationally very expensive.
To this end, this work proposes to replace the analytic parameter-based models with a non-parametric model to be self-identified. We consider a challenging setup with an encoderless underactuated robot hand, as shown in Fig. 1.
Fig. 1: An encoderless underactuated Yale Model O hand is tasked to manipulate an unknown object to trace a reference path (green trajectory). Enabled by the proposed non-parametric self-identification, and with no prior knowledge assumed, the hand is able to self-identify a system model using only \(15\) data points and then accurately complete the task (blue trajectory).
Given an unknown grasp on an unknown object, the proposed system first collects a small number of manipulation data points through exploratory actions, for which the grasp stability is passively secured by the hand's compliance. The system then learns a non-parametric model to map from the hand control directly to the object motion, yielding a self-identified local model of the hand-object system. In this work, the non-parametric model was learned by a Gaussian Process Regressor. By integrating the self-identified model into a Model Predictive Control (MPC) framework, we show that in-hand manipulation can be precisely achieved, while the self-identified model can be updated through additional exploratory actions as needed. A system diagram is illustrated in Fig. 2.
## II Related work
_Hand-Object Models:_ Traditional models of in-hand manipulation systems often assume that precise geometric and physical models of the hand, object, contacts, etc., are available [2]. Since such approaches are very sophisticated in modeling every detailed aspect of the system, they are limited in scalability and normally focus on specific sub-problems of hand-object systems, including contact modeling [10], force control [11], and stability maintenance [3]. More importantly, model-based methods are inherently limited as the assumptions of model availability often do not hold in reality. On the other hand, simplified models such as action primitives have been designed to model the manipulation mappings [12, 13]. However, as the primitives are handcrafted, they are neither generalizable nor scalable. This work proposes self-identifying a non-parametric model of the hand-object system, aiming to avoid sophisticated modeling, model-availability assumptions, and unnecessary model simplification.
_Data-Driven Approaches:_ With sufficient data and training, learning-based methods have shown unprecedented performance in acquiring complex manipulation skills [8]. In an end-to-end manner, data has filled in the gap traditionally formed by the lack of _a priori_ system information and perception uncertainties [7, 14]. However, as such methods are sensitive to the amount and diversity of the training data, they are often not generalizable, even to similar task variations. Unlike those data-demanding approaches, the non-parametric Gaussian Process model employed in this work is lightweight and known to work with a minimal amount of data [15]. As such, it enables the self-identification of hand-object models online through only a few exploratory actions.
_Interactive Perception:_ Leveraging proactive manipulation actions to unveil hidden system information can greatly improve the robot perception under limited sensing [16]. Particularly for hand-object systems, interactive perception can enable grasping and in-hand manipulation under large uncertainties [17, 18]. While most interactive perception methods focus on estimating specific parameters of a system, our non-parametric self-identification aims to directly build a mapping, approximated locally, from the hand's actuation input to the object motions to enable the MPC control of precise dexterous manipulation.
## III Hand-object systems and problem formulation
In this work, we aim to address the in-hand dexterous manipulation problem, where an object grasped by an underactuated robot hand needs to be reconfigured to certain poses. We consider the hand and the object as a whole discrete-time dynamical system. The underlying state of the hand-object system at time \(t\) is \(s_{t}\in\mathbb{R}^{N}\), where \(N\) is the number of all the physical properties necessary for uniquely identifying a system state, such as hand joint configurations and hand-object contact locations. The control of the system, \(u_{t}\in\mathbb{R}^{C}\), is the actuation input to the hand at time \(t\). For an underactuated hand, the dimension of controls, \(C\), is less than the degrees of freedom of the hand. The dynamics of the system can be represented by a transition function \(g:\mathbb{R}^{N}\times\mathbb{R}^{C}\mapsto\mathbb{R}^{N}\) such that
\[s_{t+1}=g(s_{t},u_{t}) \tag{1}\]
Additionally, we select a fixed point on the object's surface and use this point's motion to represent the object's motion. We term this point as the Point of Manipulation (POM), whose position at time \(t\) is denoted by \(z_{t}\in\mathbb{R}^{3}\). As such, the object's motion at time \(t\) can be represented by the finite difference of POM's positions, i.e., \(\delta z_{t}=z_{t+1}-z_{t}\).
Fig. 2: System diagram of the proposed non-parametric self-identification for dexterous in-hand manipulation.
However, building an analytical model for such hand-object systems is impossible due to the following challenges: 1) the system state \(s_{t}\) is not fully observable as it contains parameters not obtainable due to the limited sensing capability of the system, such as the hand-object contact locations and the joint angles of the underactuated hand; and 2) the system dynamics \(g\) requires accurate geometric models of the hand and the object, which are in general unavailable. Moreover, even if \(g\) is solvable, it is hard to generalize across different object shapes or different types of contacts. Instead, assuming a negligible change of the hand-object system state after applying a small enough control, we can locally approximate the system model without exactly knowing the current system state \(s_{t}\). For this, we use another function \(\Gamma:\mathbb{R}^{C}\rightarrow\mathbb{R}^{3}\) to represent the locally approximated system transitions, which maps from the control input to the object's motion:
\[\Gamma(u_{t})=\delta z_{t} \tag{2}\]
In addition, the inverse of the approximated system model is defined by \(\Gamma^{-1}:\mathbb{R}^{3}\rightarrow\mathbb{R}^{C}\). We name \(\Gamma\) and its inverse \(\Gamma^{-1}\) as _local manipulation models_. To precisely manipulate the object without prior knowledge of the system dynamics, the system needs to self-identify the local manipulation models \(\Gamma\) and \(\Gamma^{-1}\) and adapt them to different hand-object configurations when necessary.
In this work, the hand-object system is tasked to find and execute a sequence of control inputs to gradually move the object, such that the POM will trace through a reference trajectory represented by a sequence of \(T\) desired positions of POM: \(X=\{x_{1},\cdots,x_{T}\}\), where \(x_{1},\cdots,x_{T}\in\mathbb{R}^{3}\) are called _keypoints_ of the trajectory. We formulate such dexterous in-hand manipulation as a self-identification and control problem, and approach the problem through non-parametric learning and Model Predictive Control (MPC), as illustrated in Fig. 2 and summarized in Alg. 1. The details of the approach will be described in Sec. IV and Sec. V.
Starting with the POM positioned at \(z_{0}\in\mathbb{R}^{3}\), the system identifies the models \(\Gamma\) and \(\Gamma^{-1}\) through a small number of initial exploratory actions consisting of \(d\) randomly sampled and \(a\) calculated controls, as will be detailed in Sec. IV-A and Alg. 2. Then, the self-identified models will be integrated into an MPC framework to generate real-time control \(u_{t}\) to move POM toward targeted keypoints of the reference trajectory. Meanwhile, the system observes the outcome position of POM \(z_{t+1}\) after each generated control \(u_{t}\) has been executed. If a large deviation from the desired reference trajectory has been detected, as will be described in Sec. IV-B, the system will update the models \(\Gamma\) and \(\Gamma^{-1}\) by performing \(b\) more exploratory actions. As such, being self-identified and updated in real-time, the models are used to generate controls to precisely move the POM on the object to reach each keypoint of the reference trajectory sequentially.
```
0: Reference trajectory \(X\), a distance threshold \(\alpha\), number of initial exploratory actions \(d\) and \(a\), number of adapting actions for model update \(b\)
1:\(t\gets 0\), \(z_{0}\leftarrow\textsc{ObservePOM}()\)
2:\(x_{0}\leftarrow z_{0}\)
3:\(\Gamma,\Gamma^{-1}\leftarrow\textsc{SelfIdentification}(z_{0},d,a)\)\(\triangleright\) Alg. 2
4:for\(x_{i}\in X\), \(i=1,\cdots,n\)do\(\triangleright\) Waypoints in \(X\)
5:while\(\|x_{i}-z_{t}\|>\alpha\)do
6:\(u_{t}\leftarrow\textsc{MPC}(z_{t},x_{i-1},x_{i})\)\(\triangleright\) Alg. 3
7:\(z_{t+1}\leftarrow\textsc{Execute}(u_{t})\)\(\triangleright\) Observe POM
8:if\(\epsilon_{t}>\gamma\)then\(\triangleright\) Sec. IV-B
9:\(\Gamma,\Gamma^{-1}\leftarrow\textsc{SelfIdentification}(z_{t+1},0,b)\)
10:endif
11:\(t\gets t+1\)
12:endwhile
13:endfor
```
**Algorithm 1** Dexterous Manipulation via Self-Id and MPC
## IV Non-Parametric Self-Identification
In this section, we present a non-parametric approach based on Gaussian Process Regression to facilitate the self-identification of the local manipulation models \(\Gamma\) and \(\Gamma^{-1}\). Such a non-parametric learning approach does not require a parametric form of the models, which is challenging to specify and difficult to generalize. Moreover, as an efficient nonlinear function approximator that works well with a small amount of data, Gaussian Process Regression alleviates the burden of heavy online data collection, which is time-consuming for a real-world system. Specifically, as described in Sec. IV-A, with a set of data points collected by the system through online exploratory actions, the manipulation models \(\Gamma\) and \(\Gamma^{-1}\) can be learned efficiently to find the inherent relation between the control inputs and the object's motion.
### _Exploratory Actions and Self-Identification_
To self-identify the manipulation models with data collected online, we dynamically maintain a training dataset, \(\mathcal{D}=\{(\hat{u}_{i},\delta\hat{z}_{i})\}_{i=1}^{P}\), consisting of \(P\) data points the system has observed. Each data point is a pair \((\hat{u}_{i},\delta\hat{z}_{i})\), where \(\hat{u}_{i}\in\mathbb{R}^{C}\) is a control the system has executed and \(\delta\hat{z}_{i}\in\mathbb{R}^{3}\) is the object's motion observed after executing \(\hat{u}_{i}\). The dataset \(\mathcal{D}\) is initially empty but will be updated to have more data points as the system keeps executing to manipulate the object. We use _exploratory actions_ to name the data points in the training dataset \(\mathcal{D}\), as such actions are performed for exploring the system models. We illustrate how such exploratory actions are generated and used for self-identification in Alg. 2.
Without prior knowledge about the hand-object configuration, the system begins by randomly generating \(d\) controls. Each control is randomly sampled from a \(C\)-dimensional uniform distribution within the range \([-l,l]\), where \(l\) is chosen to be small while not being overwhelmed by the system's physical uncertainties. The system will execute each of these \(d\) controls, observe the object's motion after each control execution, and add them to the training dataset \(\mathcal{D}\).
However, certain patterns of the object's motions might not be present in \(\mathcal{D}\) since the size \(P\) of the dataset is kept small in practice. Therefore, to have more representative data to effectively learn the manipulation models, we intend to increase the local density of the dataset \(\mathcal{D}\) by selecting additional \(a\) controls to explore. For that, we define the local density of the dataset \(\mathcal{D}\) at its \(i\)-th data point to be the reciprocal of the distance between \(\delta\hat{z}_{i}\) and its nearest neighbor in \(\mathcal{D}\):
\[\rho_{\mathcal{D}}(i)=\frac{1}{\min_{j\neq i}\lVert\delta\hat{z}_{i}-\delta \hat{z}_{j}\rVert} \tag{3}\]
The data point with the lowest local density will be used to calculate a new control \(\hat{u}_{s}\), to be added into the training dataset \(\mathcal{D}\) with its corresponding observation of the object's motion \(\delta\hat{z}_{s}\). This new control \(\hat{u}_{s}\) is determined by the average
of this data point and its nearest neighbor:
\[\begin{split}\hat{u}_{s}&=\frac{\hat{u}_{p}+\hat{u}_{p^{ \prime}}}{2}\\ p&=\operatorname*{arg\,min}_{j\in\{1,\cdots,|\mathcal{D} |\}}\rho_{\mathcal{D}}(j)\\ p^{\prime}&=\operatorname*{arg\,min}_{j\in\{1, \cdots,|\mathcal{D}|\}\setminus\{p\}}\lVert\delta\hat{z}_{p}-\delta\hat{z}_{j }\rVert\end{split} \tag{4}\]
Using this method, the distribution of observed motions would approach a uniform distribution as the number of exploratory actions increases. With the dataset \(\mathcal{D}\) generated by exploratory actions, the manipulation models \(\Gamma\) and \(\Gamma^{-1}\) can be efficiently self-identified by Gaussian Process Regression (Alg. 2). It is worth noting that both models \(\Gamma\) and \(\Gamma^{-1}\) are regressed on the same dataset \(\mathcal{D}\), but with the domain and codomain of the data swapped. As \(\Gamma\) and \(\Gamma^{-1}\) are independently learned, we cannot guarantee a closed loop between them. In other words, for the self-identified models, \(\Gamma^{-1}(\Gamma(u))\neq u\) in general.
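A minimal sketch of this self-identification step is given below using scikit-learn's `GaussianProcessRegressor`; the `execute_and_observe` function is a hypothetical linear stand-in for the real hand (the true response is unknown in practice), and the density-guided selection of Eqs. (3)-(4) is implemented directly on the observed motions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
C = 3   # control dimension (e.g., three actuators)

def execute_and_observe(u):
    # HYPOTHETICAL stand-in for the real hand: an unknown linear response plus noise.
    A = np.array([[2.0, 0.3, 0.0], [0.1, 1.5, 0.2], [0.0, 0.4, 1.0]])
    return A @ u + rng.normal(0, 0.01, 3)

def pick_adapting_control(U, dZ):
    # Eqs. (3)-(4): find the motion with the lowest local density and average its control
    # with that of its nearest neighbor (in motion space).
    dist = np.linalg.norm(dZ[:, None, :] - dZ[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)
    nn_dist, nn_idx = dist.min(axis=1), dist.argmin(axis=1)
    p = nn_dist.argmax()   # lowest density = largest nearest-neighbor distance
    return 0.5 * (U[p] + U[nn_idx[p]])

# d random exploratory actions, then a density-guided ones.
d, a, l = 10, 5, 0.05
U = list(rng.uniform(-l, l, size=(d, C)))
dZ = [execute_and_observe(u) for u in U]
for _ in range(a):
    u_s = pick_adapting_control(np.array(U), np.array(dZ))
    U.append(u_s)
    dZ.append(execute_and_observe(u_s))

kernel = RBF(length_scale=0.05) + WhiteKernel(noise_level=1e-4)
gamma_fwd = GaussianProcessRegressor(kernel=kernel).fit(np.array(U), np.array(dZ))   # Gamma
gamma_inv = GaussianProcessRegressor(kernel=kernel).fit(np.array(dZ), np.array(U))   # Gamma^-1
print(gamma_fwd.predict(np.array([[0.02, -0.01, 0.03]])))  # predicted POM motion
```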
### _Model Update_
While manipulating the object, the underlying system dynamics can vary over time due to changes in hand configuration and contacts. This can cause the self-identified models to fail, as they are only local approximations. Therefore, we introduce a mechanism in our framework that adaptively updates the model online when needed. As demonstrated in Fig. 3, if a large discrepancy between the observed system state and the prediction of the self-identified models has been detected, the model will be updated with \(b\) additional actions calculated by Eq. (4). We name such exploratory actions _adapting actions_, as they are used to adapt the model to a new locality.
For that, we need to define an indicator, to determine when the model update should be triggered. Consider the in-hand manipulation task defined in Sec. III. Suppose that the POM has already reached the first \(i-1\) keypoints of the reference trajectory \(X\) through in-hand manipulation. In other words, the system currently targets the next keypoint \(x_{i}\in X\). We linearly interpolate from \(x_{i-1}\) to \(x_{i}\) to create an intermediate trajectory \(W_{i}=\{w_{i}^{1},w_{i}^{2},\cdots,w_{i}^{M}\}\) of \(M\) waypoints, where \(w_{i}^{1}=x_{i-1}\) and \(w_{i}^{M}=x_{i}\). Given POM's position \(z_{t}\in\mathbb{R}^{3}\) at the current time step, the nearest waypoint \(w_{i}^{j}\) in the intermediate trajectory \(W_{i}\) is found by
\[j^{*}=\operatorname*{arg\,min}_{j\in\{1,\cdots,M\}}\lVert z_{t}-w_{i}^{j}\rVert \tag{5}\]
With this, we define the manipulation error \(\epsilon_{t}\) at time \(t\) to be the distance between POM's position and its nearest waypoint in the intermediate trajectory:
\[\epsilon_{t}=\lVert z_{t}-w_{i}^{j^{*}}\rVert \tag{6}\]
The manipulation error \(\epsilon_{t}\) measures how much the POM deviates from its desired trajectory. If \(\epsilon_{t}\) is greater than a threshold \(\gamma\), the framework will update the model. This mechanism can be found in lines 8 and 9 in Alg. 1.
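The trigger of Eqs. (5)-(6) reduces to a nearest-waypoint lookup on the interpolated segment, as in the short sketch below; the number of waypoints \(M\), the threshold \(\gamma\), and the poses are arbitrary example values.

```python
import numpy as np

def manipulation_error(z_t, x_prev, x_next, M=50):
    # Linearly interpolate M waypoints between consecutive keypoints, then return
    # the distance from the current POM position to the nearest waypoint (Eqs. 5-6).
    W = np.linspace(x_prev, x_next, M)          # shape (M, 3)
    return np.linalg.norm(W - z_t, axis=1).min()

if __name__ == "__main__":
    x_prev, x_next = np.array([0.0, 0.0, 0.0]), np.array([0.05, 0.02, 0.0])
    z_t = np.array([0.02, 0.015, 0.002])        # current POM position (example)
    gamma_threshold = 0.005                     # example value of the threshold gamma
    eps = manipulation_error(z_t, x_prev, x_next)
    if eps > gamma_threshold:
        print(f"eps={eps:.4f} > gamma: trigger model update with b adapting actions")
    else:
        print(f"eps={eps:.4f} <= gamma: keep current models")
```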
### _Model Transfer_
The underlying system dynamics become different when the object's geometry or grasp configuration has changed. Intuitively, however, the patterns of manipulation can exhibit some similarities across such geometric and physical variations. In other words, the self-identified manipulation models should generalize to different objects or different contact locations, facilitating model transfer between different hand-object setups.
As such, when manipulating a new object, our framework has the option to initialize the manipulation models \(\Gamma\) and
Fig. 3: An example demonstration of model update with the setup in Fig. 1, where the system is tasked to trace the reference trajectory (green triangle): (A) At the beginning, the system performs \(d+a\) initial exploratory actions (red) to learn the non-parametric models. (B) The model update request is triggered during manipulation, according to the sensory feedback. (C) The manipulation error \(\epsilon_{t}\) (yellow) exceeds the threshold \(\gamma\), triggering the model update. (D) Additional \(b\) data points obtained from adapting actions (red) are used to update the models. Then, the system continues with the trajectory tracing tasks (magenta).
\(\Gamma^{-1}\) by the models that have been learned previously with a different object. By such model transfer, the system can skip the data-gathering step for initial exploratory actions in line 3 of Alg. 1, to speed up the manipulation of a new object. We will show the benefits of using model transfer in our framework through the experiments in Sec. VI-D.
## V Model Predictive Control
As the manipulation models \(\Gamma\) and \(\Gamma^{-1}\) can be efficiently self-identified through exploratory actions (Sec. IV), we can use them as predictive models in a control scheme that generates controls based on the desired motion of the object. To this end, we integrate the models \(\Gamma\) and \(\Gamma^{-1}\) into a Model Predictive Control (MPC) framework to iteratively generate controls at each time step. Our MPC-based control scheme is presented in Alg. 3 and detailed below. Benefiting from the efficient inference of \(\Gamma\) and \(\Gamma^{-1}\), the MPC meets the requirement of real-time execution.
As some definitions from Sec. IV-B are useful for MPC, we briefly recall them here: between the last reached keypoint \(x_{i-1}\) and the next one \(x_{i}\) in the reference trajectory \(X\), we create an intermediate trajectory \(W_{i}=\{w_{i}^{1},w_{i}^{2},\cdots,w_{i}^{M}\}\) of \(M\) waypoints by linear interpolation. In this intermediate trajectory \(W_{i}\), we find the waypoint \(w_{i}^{j^{*}}\), with index \(j^{*}\), nearest to POM's position \(z_{t}\) at the current time step.
Then, we use the intermediate trajectory \(W_{i}\) as a local reference to guide the MPC in searching for the optimal control. Concretely, MPC uses the self-identified models \(\Gamma\) and \(\Gamma^{-1}\) to predict the behavior of the controlled hand-object system up to a prediction horizon \(L\). By adding stochasticity to the predicted controls through a random perturbation \(\xi\), it simulates \(Q\) independent trajectories, as illustrated in Fig. 4. Each simulated trajectory \(U^{q}=\{(\tilde{u}_{t}^{q},\tilde{z}_{t}^{q}),\cdots,(\tilde{u}_{t+L}^{q}, \tilde{z}_{t+L}^{q})\}\), where \(q=1,\cdots,Q\), is generated by the following iterative process starting with \(k=0\):
\[\begin{split}\tilde{u}_{t+k}^{q}&=\Gamma^{-1}(w_{i }^{j^{*}+k}-\tilde{z}_{t+k}^{q})+\xi\\ \tilde{z}_{t+k+1}^{q}&=\Gamma(\tilde{u}_{t+k}^{q})+ \tilde{z}_{t+k}^{q}\end{split} \tag{7}\]
where \(\tilde{u}_{t+k}^{q}\in\mathbb{R}^{C}\) and \(\tilde{z}_{t+k}^{q}\in\mathbb{R}^{3}\) are the predicted control and state (i.e., POM's pose) at time \(t+k\) in the \(q\)-th simulated trajectory, and \(\xi\sim\mathcal{N}(0,\sigma\mathbb{I}_{C})\) is a multivariate Gaussian random variable. The scale \(\sigma\) of this Gaussian random variable is named the _MPC optimization scale_.
Over the \(Q\) simulated trajectories, MPC searches for the optimal trajectory \(U^{q^{*}}\) (the blue one in Fig. 4) such that the accumulated distance between it and the intermediate trajectory \(W_{i}\) is minimized:
\[q^{*}=\operatorname*{arg\,min}_{q\in\{1,\cdots,Q\}}\left(\sum_{k=0}^{L}\| \tilde{z}_{t+k}^{q}-w_{i}^{j^{*}+k}\|\right) \tag{8}\]
where \(L=\min\{K,M-j^{*}\}\) is the prediction horizon (i.e., the length of the simulated trajectories), capped by a hyperparameter \(K\). The first control \(\tilde{u}_{t}^{q^{*}}\) in the optimal trajectory \(U^{q^{*}}\) is then sent to the hand actuators for execution. As the entire MPC procedure is repeated at each time step, the system precisely controls the object's motion, guided by the self-identified manipulation models.
```
Input: Observed POM \(z_{t}\), last reached keypoint \(x_{i-1}\), targeted keypoint \(x_{i}\)
Output: Optimized control for execution \(u_{t}\)
1: \(\{w_{i}^{1},\cdots,w_{i}^{M}\}\leftarrow\textsc{LinearInterpolate}(x_{i-1},x_{i})\)
2: \(j^{*}\leftarrow\operatorname*{arg\,min}_{j}\|z_{t}-w_{i}^{j}\|\)  \(\triangleright\) Nearest waypoint by Eq. (5)
3: \(L\leftarrow\min\{K,M-j^{*}\}\)  \(\triangleright\) Prediction horizon
4: for \(q=1,\cdots,Q\) do
5:     \(\tilde{z}_{t}^{q}\gets z_{t}\)
6:     for \(k=0,\cdots,L-1\) do
7:         \(\xi\leftarrow\mathcal{N}(0,\sigma\mathbb{I}_{C})\)
8:         \(\tilde{u}_{t+k}^{q}\leftarrow\Gamma^{-1}(w_{i}^{j^{*}+k}-\tilde{z}_{t+k}^{q})+\xi\)  \(\triangleright\) Predicted control, Eq. (7)
9:         \(\tilde{z}_{t+k+1}^{q}\leftarrow\Gamma(\tilde{u}_{t+k}^{q})+\tilde{z}_{t+k}^{q}\)  \(\triangleright\) Predicted state, Eq. (7)
10:     end for
11: end for
12: \(q^{*}\leftarrow\operatorname*{arg\,min}_{q}\left(\sum_{k=0}^{L}\|\tilde{z}_{t+k}^{q}-w_{i}^{j^{*}+k}\|\right)\)  \(\triangleright\) Eq. (8)
13: return \(\tilde{u}_{t}^{q^{*}}\)
```
**Algorithm 3** Model Predictive Control (MPC)
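For concreteness, the following is a compact Python sketch of the random-shooting MPC in Alg. 3, assuming 0-based indices and that `predict_dz` and `predict_u` wrap the self-identified models (e.g., the `.predict` methods of the Gaussian-process regressors above); it is an illustrative approximation, not the authors' implementation.

```python
import numpy as np


def mpc_control(z_t, W, j_star, predict_dz, predict_u, K=5, Q=50, sigma=0.1,
                rng=None):
    """Random-shooting MPC following Alg. 3 and Eqs. (7)-(8).  predict_dz and
    predict_u stand for Gamma and Gamma^{-1}; W is the intermediate trajectory
    of waypoints, j_star the index of the waypoint nearest to z_t."""
    rng = np.random.default_rng() if rng is None else rng
    W = np.asarray(W, dtype=float)
    L = min(K, len(W) - 1 - j_star)          # prediction horizon
    best_cost, best_u0 = np.inf, None
    for _ in range(Q):
        z = np.asarray(z_t, dtype=float)
        cost, u0 = 0.0, None
        for k in range(L):
            w = W[j_star + k]                # local reference waypoint
            cost += np.linalg.norm(z - w)    # term k of Eq. (8)
            u = predict_u((w - z)[None, :])[0]
            u = u + rng.normal(0.0, sigma, size=u.shape)   # perturbation xi
            if u0 is None:
                u0 = u                       # candidate control to execute
            z = predict_dz(u[None, :])[0] + z              # Eq. (7) rollout
        cost += np.linalg.norm(z - W[j_star + L])          # final term of Eq. (8)
        if cost < best_cost:
            best_cost, best_u0 = cost, u0
    return best_u0
```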
## VI Experimental Evaluation
In this section, we evaluate and study the performance of the proposed framework in a real-world setting that requires precise in-hand manipulation. As shown in Fig. 5, we deployed our proposed framework on a Yale Model O underactuated hand [19]. This hand has three identical fingers, each of which has one motor that actuates two spring-loaded joints through a tendon. When the tendon is pulled by the motor, the joint configuration of the finger changes accordingly. The spring in each finger joint provides compliance that facilitates stable contact between the hand and the grasped object. In our experimental setup, we restricted two fingers to be parallel to each other and to always take the same actuation input, while the third finger was configured on the opposite side. As such, the object's motion was physically constrained to a horizontal plane.
The POM was selected to be on the top of the object and was tracked via AprilTag [20] by a camera mounted above the object. The camera has a resolution of
Fig. 4: Simulated trajectories in MPC, by the self-identified models \(\Gamma\) and \(\Gamma^{-1}\). In this figure, \(Q=50\) trajectories (gray) are simulated and each one has a horizon of \(L=5\). The optimal trajectory (blue), closest to the reference trajectory (black), is selected to extract the optimal control.
\(1024\times 512\) and the tracker runs at \(30\) fps. Note that the tag-based POM tracker can be replaced by other vision-based frameworks. At the beginning of each experimental trial, the experimenter handed one of the objects in Fig. 6 to the underactuated hand in a stable grasp.
### _Experiment Design_
As defined in Sec. III, we tasked our framework to trace a reference trajectory of POM through in-hand manipulation. The reference trajectories we used in experiments are shown by the green lines in Fig. 7, including a triangle, a square, a \(\pi\) letter, and a spiral line. Each trajectory is represented by a sequence of desired positions of POM (i.e., keypoints, shown by the red dots in Fig. 7) in the camera's frame, therefore enabling our system to work without the necessity of hand-eye calibration. To trace a reference trajectory, the POM on the object must reach each keypoint in the correct order, with a tolerance of \(\alpha=1mm\) in Alg. 1. The blue lines in Fig. 7 showcase POM's actual trajectories while the system executes controls generated by MPC.
Besides the requirement for precise motion control, such an in-hand manipulation task challenges our framework in three further respects: 1) Lacking sensing of the joint configuration and contact information, our framework needs to effectively approximate the actual system transitions through self-identification, which is crucial for precise manipulation. 2) Real-world data collection through online manipulation is time-consuming, demanding high data efficiency from the model self-identification. 3) The contacts between the hand and the object change during manipulation due to unpredictable sliding or rolling, which requires our framework to adapt to such changes. As will be shown by the experiments in Sec. VI-B, VI-C, and VI-D, our framework effectively addresses these challenges and is able to precisely control the object's motion with self-identified models under various real-world settings.
To quantitatively evaluate the performance of our framework on in-hand manipulation, we selected two metrics:
1. _Manipulation error_, as defined in Sec. IV-B, averaged over the entire trajectory. This reflects how far the actual execution deviates from the desired reference trajectory. A small manipulation error is a direct indication of accurate motion control of the object, which is highly affected by the quality of the self-identified models and the robustness of our MPC control policy.
2. _The accumulated number of adapting actions_. As described in Sec. IV-B, whenever the manipulation error exceeds a threshold \(\gamma=2mm\), our system will perform more exploratory actions to update the self-identified models. A small value of this metric means fewer times of model updates, thus reflecting the high data efficiency and good adaptability of our framework.
Note that the object was constrained by our setup to move in a horizontal plane. The reference trajectories were always given in this plane at a fixed height, and we evaluated the manipulation errors only in this plane as well.
### _Analysis on Initial Exploratory Actions_
In this experiment, we study how many initial exploratory actions are needed for decent model self-identification. With the experimental results, we intend to show that the self-identified models, even when learned with only a small amount of training data, can enable precise in-hand manipulation.
For this, we varied the number of initial exploratory actions (i.e., \(d+a\) in Alg. 1) to be \(10,15,20,25\), and \(30\), and tasked the framework to manipulate all the objects in Fig. 6 and trace all four trajectories given in Fig. 7. For each setting, we repeated the experiment \(5\) times. To ensure that the manipulation performance is not dominated by a bad control policy, we set the MPC optimization scale to a moderate value of \(\sigma=0.1\). The results are summarized in Fig. 8. From the results, we find that fewer initial exploratory actions lead to lower-quality self-identified system models, reflected by higher manipulation errors and a demand for more adapting actions. This is because the underlying system transitions can hardly be approximated via self-identification without sufficient exploration. However, by slightly increasing the number of initial exploratory actions, higher manipulation precision was achieved with lower manipulation errors; and
Fig. 5: The experimental setup: The Yale Model O underactuated hand is tasked to manipulate an object (an orange cube (obj #4)), whose POM is tracked by a top camera with AprilTag.
Fig. 6: The objects used in the experiments: 1) a pill bottle, 2) a mustard bottle (YCB dataset #006), 3) an apple (YCB dataset #013), 4) an orange cube, and 5) a toy airplane (YCB dataset #072a) [21].
the non-parametric models were better approximated via self-identification, as indicated by a smaller number of adapting actions for the model update. After \(20\) initial exploratory actions, the manipulation performance of our framework roughly converged, and an average manipulation error of less than \(0.8mm\) could be achieved for all the objects and reference trajectories. Importantly, this demonstrates the high data efficiency of our framework, which in general needs fewer than \(30\) data points to achieve dexterous in-hand manipulation with high precision.
### _MPC Performance Analysis_
The models \(\Gamma\) and \(\Gamma^{-1}\) self-identified by our framework are locally approximated and can therefore never be perfect, due to the lack of prior knowledge about the system and the limited sensing capability. However, for precise manipulation, we require the controls generated by MPC to be robust against imperfect model self-identification. In this experiment, we evaluate the robustness of the self-identified models when the control loop is closed by MPC, and analyze how it is affected by the optimization scale \(\sigma\).
Intuitively, a small \(\sigma\) forces the MPC policy to be more confident in the self-identified models, making it more greedy but potentially less robust to imperfect self-identification, whereas a large \(\sigma\) increases the search space of MPC for the optimal control.
In this specific experiment, we only used the orange cube (obj #4) in Fig. 6 as the object for manipulation. For MPC, the maximum prediction horizon was set to \(K=5\), and the number of simulated trajectories for optimization was set to \(Q=50\). For each \(\sigma\), we repeated the manipulation \(5\) times for each of the four reference trajectories in Fig. 7. The results are summarized in Fig. 9. From the results, we observed a large number of adapting actions for small \(\sigma\) below \(0.02\). This is expected, since with a small \(\sigma\) the MPC is too greedy in using the self-identified models, resulting in fewer optimal executions and more model update requests. By slightly increasing \(\sigma\) to introduce more stochasticity into the predictions, MPC could search more extensively for the optimal control, and the average number of adapting actions was immediately reduced to fewer than \(5\), a significant performance improvement. In general, the experiments show that the self-identified models are sufficiently reliable for control prediction with an appropriate \(\sigma\); furthermore, our MPC-based control policy was able to achieve precise manipulation by closing the control loop with the approximated non-parametric models.
### _Model Generalizability_
In real-world applications, the object being manipulated and the grasp configuration are likely to differ every time. Therefore, good generalizability of the self-identified models to such variations is desirable, as it saves the time and cost of retraining the models.
In this experiment, we challenged the generalizability of the self-identified models in our framework by manipulating a new object (_target object_) with models learned from manipulating a different object (_source object_). Specifically, we saved the models \(\Gamma\) and \(\Gamma^{-1}\) learned from manipulating the orange cube (obj #4) with \(25\) initial exploratory actions. Following Sec. IV-C, we directly used these saved models to initialize the manipulation of a new object, without any initial exploratory actions on the new object. For each object, we
Fig. 8: Self-identification performance in terms of the number of initial exploratory actions. For different numbers of initial actions, the result is averaged over all the objects and reference trajectories.
Fig. 7: Real-world trajectory tracing tasks by in-hand manipulation: a triangle, a square, a \(\pi\) letter, and a spiral line. Green: reference trajectories. Blue: real trajectories executed by our system. Red dots: keypoints that POM needs to sequentially go through.
Fig. 9: MPC performance evaluation in terms of its optimization scale \(\sigma\) and the number of adapting actions. |
2305.06028 | Statistical Plasmode Simulations -- Potentials, Challenges and
Recommendations | Statistical data simulation is essential in the development of statistical
models and methods as well as in their performance evaluation. To capture
complex data structures, in particular for high-dimensional data, a variety of
simulation approaches have been introduced including parametric and the
so-called plasmode simulations. While there are concerns about the realism of
parametrically simulated data, it is widely claimed that plasmodes come very
close to reality with some aspects of the "truth'' known. However, there are no
explicit guidelines or state-of-the-art on how to perform plasmode data
simulations. In the present paper, we first review existing literature and
introduce the concept of statistical plasmode simulation. We then discuss
advantages and challenges of statistical plasmodes and provide a step-wise
procedure for their generation, including key steps to their implementation and
reporting. Finally, we illustrate the concept of statistical plasmodes as well
as the proposed plasmode generation procedure by means of a public real RNA
dataset on breast carcinoma patients. | Nicholas Schreck, Alla Slynko, Maral Saadati, Axel Benner | 2023-05-10T10:27:42Z | http://arxiv.org/abs/2305.06028v1 | # Statistical Plasmode Simulations - Potentials, Challenges and Recommendations
###### Abstract
Statistical data simulation is essential in the development of statistical models and methods as well as in their performance evaluation. To capture complex data structures, in particular for high-dimensional data, a variety of simulation approaches have been introduced including parametric and the so-called plasmode simulations. While there are concerns about the realism of parametrically simulated data, it is widely claimed that plasmodes come very close to reality with some aspects of the "truth" known. However, there are no explicit guidelines or state-of-the-art on how to perform plasmode data simulations. In the present paper, we first review existing literature and introduce the concept of statistical plasmode simulation. We then discuss advantages and challenges of statistical plasmodes and provide a step-wise procedure for their generation, including key steps to their implementation and reporting. Finally, we illustrate the concept of statistical plasmodes as well as the proposed plasmode generation procedure by means of a public real RNA dataset on breast carcinoma patients.
This is a preprint and has not been peer reviewed yet.
## 1 Introduction
Data availability is a crucial issue that arises in the context of statistical model development and validation, derivation of inference, introduction of statistical concepts, and many other tasks [1, 2, 3]. In some cases, especially for high-dimensional data (HDD), where the number of features is substantially larger than the number of observations, the number of data samples and data sets is not large enough to properly and reliably perform all required tasks. To overcome this deficiency, alternative data generation approaches such as the generation of artificial data are required. The generated data should match the real-life data underlying the research question of interest as closely as possible, in particular with respect to its probabilistic structure as well as possible dependencies.
A common approach for data generation is statistical data simulation. In this paper, we interpret data simulation as a data generation procedure that follows a data-generating process (DGP), with marginal distributions and a dependence structure as its basic components. For explanatory and prediction models, which are the focus of the present paper, data generation procedures necessarily include steps for generating the outcome variable(s). To this end, outcome-generating models (OGMs) are usually utilized to generate the outcomes from the available covariate information. Parameters of the OGM are estimated from real data, taken from the literature, or set by the investigator's choice. Examples of outcome data generation can be found in Reeb and Steibel [4], Franklin et al. [5], Schulz et al. [6], Atiquzzaman et al. [7], and many others. Shmueli [8] depicts the application of OGMs for both explanatory and prediction purposes.
For simulated data, the "truth" is assumed to be known a priori, at least to some extent [9]. That "truth" can be represented by simulation parameters or prespecified effect sizes, and is used to reliably evaluate the obtained estimates or predictions.
The most detailed practical introduction to simulation studies, including structured approaches for their planning and reporting as well as a discussion of appropriate performance measures, is presented in Morris et al. [2]. In particular, the authors introduce coherent terminology for simulation studies and data-generating mechanisms, and provide guidance on coding simulation studies. Most prominently, they introduce the **A**ims, **D**ata-generating mechanisms, **E**stimands, **M**ethods, and **P**erformance (ADEMP) criteria as guidance for the planning and performing of simulation studies.
For those researchers who use the results of simulation studies without being familiar with the entire simulation process, the discussion presented in Boulesteix et al. [10] can be of great assistance. The paper not only contains many useful examples and applications, but also describes basic principles of simulation studies, gives insights into sampling variability and data generating processes, and demonstrates the role of statistical simulations in health research.
Two common approaches for data generation are parametric and plasmode simulation, with the former being the most extensively studied and widely used. Parametric simulations assume that the parametric stochastic model used to generate data is realistic and representative, with the parameters of interest estimated from real data, derived from the literature, or even set by the user in order to model specific scenarios [1, 2]. Plasmode data generation usually begins by resampling covariate information from the original real data [11, 12]. External (parametric) "truth" such as effect sizes or model parameter values in explanatory or prediction models can then be added to the covariable data sets to define the relationship between the covariables and the outcome. With the OGM being part of the plasmode data generation procedure, the resulting plasmode simulation can be viewed as a semi-parametric data generation procedure. Of note, parametrically simulated data may often be considered purely artificial, whereas plasmode data is claimed to reflect reality most closely [9, 11, 13].
In the present paper, we provide an extensive literature overview of parametric and plasmode simulations. In particular, we discuss the difference between biological and _statistical plasmodes_. We address advantages and challenges of parametric and statistical plasmode simulation approaches in various contexts and provide step-by-step recommendations for the generation of statistical plasmodes.
At this point, we do not intend to demonstrate the superiority of plasmode simulation over parametric simulation or vice versa. We aim to analyze the advantages and challenges of both data generation methods, and to illustrate the usefulness of plasmode simulations as a complement to and possible extension of parametric simulation studies.
The paper is organized as follows: Section 2 discusses parametric and plasmode simulations, compares their characteristics and provides an extensive literature review on both data generation methods. Section 3 analyzes statistical plasmodes in more detail by discussing their challenges. Section 4 provides recommendations for planning, performing and reporting of statistical plasmode simulations. In Section 5, we present a numerical example to illustrate the application of such a data generation approach. Section 6 concludes with a discussion.
## 2 From Parametric to Plasmode Simulation Studies
This section provides a comparative introduction to parametric and plasmode simulations. In particular, we start with a description of the main properties of parametric simulations as one of the most well-established types of data generation. Then we move on to plasmode simulations which are often claimed to be a more close-to-reality approach for data generation.
### Parametric Data Simulation
In cases where an underlying DGP is defined in closed form and represented by a parametric stochastic model, we speak of parametric simulations.
The main asset of parametric simulations is their flexibility in terms of the chosen DGP. That is, one can easily generate a variety of independent data sets by varying the assumptions imposed on the DGP and its crucial parameters. As a result, the simulated data sets can cover many complex but relevant scenarios as well as extreme situations that do not reflect reality. This feature becomes particularly important whenever we aim to analyze the behaviour and performance of different statistical methods. In addition, the knowledge of the DGP makes the corresponding parametric simulation more transparent and plausible.
Parametric simulations also mitigate sample size issues, as an unlimited amount of data can be generated from a given DGP. For instance, when simulating continuous random variables, an "infinite" number of distinct data points can be generated without much effort.
The corresponding DGP represents a cornerstone of any parametric simulation. However, the existence of an appropriate model for the DGP that best fits some underlying real data cannot be taken for granted. On the other hand, even with a DGP available, the conclusions based on parametric simulations might be limited or even biased by the parameters of the chosen DGP model.
Obviously, the quality of a parametric simulation depends on the level of our comprehension of the underlying processes, distributions and possible dependencies. For instance, Vaughan et al. [13] state that simulations might not fully reflect the complexity of the biological data that originates from nonrandom mating, recombination, hot spots, and other genetic mechanisms. Boulesteix et al. [10] provides a similar statement and claims that many simulation studies are too simplified to describe the complexity of the real life data and thus may lead to inaccurate or even misleading findings.
When generating new data, we strive to preserve not only the marginal distributions but also the underlying dependence structure. In this context, the question of specifying appropriate dependence metrics, such as correlation, emerges. One option for such a specification is estimating the dependence structure from the data at hand. Computationally, such an estimation can be very expensive and time-consuming, especially for large data sets. For parametric simulations, this computational issue is one possible reason for making independence assumptions on certain variables, which then leads to a block-diagonal structure for the corresponding correlation matrix [11]. Furthermore, a multivariate normal distribution provides the simplest model for multivariate covariate data with a pre-specified mean and correlation structure [1]. For generating non-normal correlated data, diverse copulas or the extended Fleischman power method can be utilized [14]. Obviously, certain concerns about the accuracy and realism of the underlying modelling assumptions arise for all such approaches.
One of the assumptions imposed in the context of parametric simulations is that the "truth" must be known a priori. Such an assumption may not hold in low- as well as high-dimensional situations; high-dimensionality may, however, exacerbate this issue, making parametric simulations inapplicable. For instance, the "truth" about the set of biological markers truly associated with a given outcome may be unknown [15].
Within the framework of a parametric simulation, the underlying dependence structure has to be completely specified in advance. The specification of such a dependence structure for high-dimensional data may become a challenge. Possible reasons include not only computational efficiency issues, but also spurious correlations [16], sparsity of the data, nonlinear or even hidden dependencies, and other issues related to large covariance matrices; for more discussion and examples see Fan and Li [17], Johnstone and Titterington [18], and the references therein. Pitfalls in the specification of the dependence structure may then lead to false research discoveries and incorrect statistical inferences.
Dimensionality reduction and feature extraction play pivotal roles and are often fundamental in many high-dimensional settings [17]. However, it is not obvious how a parametric simulation may impact the findings of those procedures considering possible non-representativeness of parametrically generated covariate data sets.
Altogether, in many cases parametric simulations may turn out to be infeasible for high-dimensional data generation as it is not obvious how such simulations would cope with features of high-dimensionality.
A number of papers share our concerns about the applicability of parametric simulations for the generation of high-dimensional data sets. For instance, Gadbury et al. [12] question the applicability of standard simulations in a high-dimensional experiment where hundreds of hypotheses are to be tested. Franklin et al. [5] also see the application of ordinary simulation methods as an issue when comparing high-dimensional variable selection strategies. In particular, the authors point out that the performance of those strategies depends "[...] on the information richness and complexity of the underlying empirical data source", and it is doubtful whether a parametric simulation is able to capture this richness and complexity of information.
To overcome, at least partially, the limitations of parametric simulations outlined above, plasmode simulations have been introduced [11, 13, 5].
### Plasmode Data Simulation
The term "plasmode" was first introduced in Cattell and Jaspers [19], with a plasmode data set defined as "[...] a set of numerical values fitting a mathematico-theoretical model". In their seminal work, the authors emphasized that the certainty that the produced plasmode data set fits the model comes either from a real-life experiment producing data of that kind or from the data being produced mathematically to fit the functions. Two different approaches for plasmode generation, performed either in a lab experiment or by resampling, were also mentioned in Mehta et al. [11]. In the present paper, we refine the discussions provided by these authors, introduce the concept of statistical plasmodes, and analyze their properties. From our perspective, the classification of plasmodes into biological and statistical depends on the procedure used for their generation.
Biological plasmodes are those generated " [...] by natural biological processes, under experimental conditions that allow some aspects of the truth to be known" [13]. Such plasmodes may be created, e.g., in a wet lab by manipulating biological samples as in case of a "spike in" experiment [11, 13]. The latter paper provides a very illustrative introduction to biological plasmodes. In their detailed definition, the authors state that " [...] a plasmode can be defined as a collection of data that (i) is the result of a real biological process and not merely the result of a computer simulation; (ii) has been constructed so that at least some aspect of the "truth" of the DGP is known".
A number of research papers, such as Mehta et al. [9] and Vaughan et al. [13], deal with "spike in" experiments in microarray expression analysis as an example of a biological plasmode data set. As part of such an experiment, real cases from one population are randomly assigned to two groups. Then, a known amount of transcript is added to serve as a positive control. As a result, distributions and correlations in the generated data are viewed as most realistic, since they are taken directly from real data. Among others, Mehta et al. [11] discuss the application of plasmode data sets in high-dimensional biology. Vaughan et al. [13] use plasmodes for the estimation of admixture, the proportion of an individual's genome that originated from different founding populations, and thus illustrate the utility of plasmodes in the evaluation of statistical genetics methodologies. Several authors, such as Sokal et al. [20] and Mehta et al. [11], provide helpful insights into the generation and application of biological plasmodes, which are expected to incorporate valuable information on biological variation and capture biological reality. Biological plasmode data sets have also been utilized to evaluate the
performance of statistical methods [21] and their validity [9]. Plasmode data sets were also used to investigate the validity of multiple factor analysis in a known biological model [20].
Despite their ability to create new and more advanced biological set-ups, e.g., by crossing mice [13], biological plasmodes may not only become very time-consuming but also incur high experimental costs. Researchers might also not have a lab available to construct biological plasmodes. In some cases, ethical reasons may also speak against the construction of biological plasmodes. In all such situations, _statistical plasmodes_ offer an advantageous alternative.
_Statistical plasmodes_, the focus of the present paper, begin with the generation of covariate information by applying resampling-based methods to a real data set [22, 4, 5]; note that no biologically new samples are created in such a resampling procedure. Further, an appropriate OGM has to be applied to generate outcomes based on the resampled covariates [5, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32]. In cases where exposure modeling is also part of the study [5, 23, 24, 27, 29], some known "truth" such as treatment effects can be added manually [5, 28, 26, 7, 29]. For instance, in Franklin et al. [5] the authors create statistical plasmode data sets by "[...] resampling from the observed covariate and exposure data without modification to preserve the empirical associations among the variables." In that paper, the "true" treatment effect and the baseline hazard function are estimated from the empirical data. Further, the associations between the outcome and the covariates, as well as between the censoring times and the covariates, are described by means of two Cox proportional hazards models. Such a modeling approach corresponds to the application of an OGM in our terminology.
According to our interpretation, statistical plasmode simulations utilize aspects of resampling (when generating the covariate information) as well as parametric modeling (e.g., application of OGM, modeling of exposure etc.) and thus can be interpreted as semi-parametric methods.
There are numerous applications in the literature of plasmodes generated by certain methods of data modification. Some of those applications are based on statistical approaches in the sense of our definition. For instance, Tibshirani [22] utilizes plasmodes to assess sample size requirements in microarray experiments when estimating the false discovery rate and false negative rate for a list of genes. Gadbury et al. [12] illustrate the use of plasmodes by comparing the performance of 15 statistical methods for estimating the false discovery rate in data from a high-dimensional experiment. Elobeid et al. [33] employ plasmode data sets to analyze the performance of several statistical methods used to handle missing data in obesity randomized controlled trials. Reeb and Steibel [4] suggest an interesting application of plasmode data sets to complement the evaluation of statistical models for RNA-seq data. In their subsequent paper, Reeb et al. [34] then use plasmode data sets to assess dissimilarity measures for sample-based hierarchical clustering of RNA sequencing data. In Franklin et al. [5], plasmode-based studies are used for the evaluation of pharmacoepidemiologic methods in complex healthcare databases. Resampling in combination with outcome generation by a logistic model is used by Franklin et al. [23] to compare the HDD propensity score method with ridge regression and the lasso. Franklin et al. [24] use plasmode-based studies to compare the performance of propensity score methods in the context of rare outcomes. In Desai et al. [26], the authors utilize plasmode data sets to analyze the uncertainty in using bootstrap methods for propensity score estimation, whereas Liu et al. [28] conduct a plasmode-based study to compare the validity and precision of marginal structural model estimates using complete case analysis, multiple imputation, and inverse probability weighting in the presence of missing data on time-independent and time-varying confounders. The issue of data imputation has also been addressed in Atiquzzaman et al. [7], where the authors used plasmodes to compare two imputation techniques for imputing the body mass index variable in the osteoarthritis-cardiovascular disease relationship. In Ejima et al. [35], the authors use statistical plasmodes to assess type I and type II error rates of analyses commonly used in murine genetic models of obesity. Similarly, Alfaras et al. [36] resample from the empirical distributions to create plasmode data sets for murine aging data. Those plasmodes are then utilized to compute type I error rates and power for commonly used statistical tests without assuming a normal distribution of residuals. In their most recent study, Hafermann et al. [31] design a plasmode simulation study to investigate how random forest and machine learning methods may benefit from external information provided by prior variable selection studies. Rodriguez et al. [32] evaluate plasmodes as being useful for
preserving the underlying dependencies among hundreds of variables in real-world data used to evaluate the potential utility of novel risk prediction models in clinical practice; the authors generate plasmodes when studying lung transplant referral decisions in cystic fibrosis.
To our understanding, two central steps of the _statistical plasmode_ generation procedure can be derived (see also the minimal code sketch after this list), namely:
1. **Generation of the covariate structure** by resampling from an original data set
2. **Outcome generation**, which includes:
   1. Choice of an appropriate outcome-generating model (OGM)
   2. Choice of covariate effects, either by individual specification or by estimation based on the original data
   3. Generation of new outcomes by applying the OGM chosen in (ii.1), with the effects specified in (ii.2), applied to the covariates generated in (i)
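To illustrate the two steps, the following is a minimal Python sketch of a statistical plasmode generator; the logistic OGM, the resampling options, and all function and parameter names are illustrative assumptions rather than a prescribed implementation. In practice, the effect vector `beta` could either be set by the investigator or estimated on the original data.

```python
import numpy as np


def generate_plasmode_datasets(X, beta, N=500, m=None, replace=True,
                               intercept=0.0, seed=1):
    """Step (i): resample rows of the original covariate matrix X (n x p).
    Step (ii): generate artificial binary outcomes from a logistic OGM whose
    coefficients beta constitute the known 'truth'."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    m = n if m is None else m
    datasets = []
    for _ in range(N):
        idx = rng.choice(n, size=m, replace=replace)   # (i) resampling
        X_b = X[idx]
        eta = intercept + X_b @ beta                   # (ii.2) linear predictor
        prob = 1.0 / (1.0 + np.exp(-eta))              # (ii.1) logistic link
        y = rng.binomial(1, prob)                      # (ii.3) new outcomes
        datasets.append((X_b, y))
    return datasets
```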
The discussion performed in this section is summarized in Table 1 that provides a comparative summary of parametric and plasmode simulation studies.
That discussion, as well as the supporting literature, implies that plasmodes provide an attractive supplement to parametric simulations in data-based research. In particular, plasmode datasets are expected to resemble reality most closely, especially regarding dependency structures. In the following, we analyze plasmodes more closely to examine their strengths and weaknesses.
| Feature | Parametric Simulations | Statistical Plasmodes |
| --- | --- | --- |
| Data-generating process (DGP) | DGP is to be specified in advance | No DGP specification is required |
| Outcome-generating model (OGM) | Parameters of a chosen OGM to be estimated from data, derived from literature, or set manually | Parameters of a chosen OGM to be estimated from the original data, derived from literature, or set manually |
| Range of possible scenarios | Arbitrary scenarios, in particular extreme and rare scenarios, can be generated | Only reality bounded to the sample at hand can be generated |
| Knowledge of "truth" | "Truth" must be completely known in advance | At least some "truth", such as effect sizes, should be known a priori |
| Data availability and representativeness | Irrelevant for simulations based on literature results or previous knowledge | Crucial, as the simulated data is always limited to the sample at hand |
| Reality reflection | Parametric simulations may not be able to capture the complexity of real-life data | Plasmodes are expected to resemble reality in the most accurate way |
| High-dimensional data simulations | Usually time- and cost-consuming; latent dependencies may also become an issue | Mostly straightforward, as no estimation of distributions and/or dependencies is required |
| Small sample sizes | Essentially uncomplicated, but may become an issue when simulation parameters are to be estimated from the real data at hand | Difficult due to resampling, as simulation parameters are to be estimated from the real data at hand |
| Dependence structure | Becomes a challenge with complex dependencies | No modeling/estimation of the dependence structure is required |

Table 1: Parametric simulations versus statistical plasmodes: similarities and differences
## 3 Challenges of Statistical Plasmode Simulations
Simulation studies are, at least within the scope of the present work, designed to enable the practical analysis of statistical methods. To this end, data generation should satisfy several criteria, such as, amongst others, providing the basis for subsequent undistorted model comparisons or enabling a specific covariate dependence structure. Constructing, reporting, and comprehending a parametric simulation study is mostly straightforward and transparent, as the resulting data is artificially generated in a target-oriented way. Critical steps in the construction of parametric simulations include, for instance, the investigator's choice of the outcome-covariable association. While this ambiguity is shared by statistical plasmode simulations, many of the properties of statistical plasmodes are typically less obvious and verifiable, because statistical plasmodes are designed with the complex task of mimicking reality as closely as possible while simultaneously specifying some aspects of the truth. The main advantage of statistical plasmodes lies in their ability to generate data with specific distributions and dependence structures without the need for explicit assumptions. The assumption that statistical plasmodes can faithfully generate data that closely resemble reality has rarely been questioned. In practice, however, this assumption can present challenges. For instance, the lack of statistical analyses or simulations to verify the preservation of dependence structures can undermine the reliability of the generated data. Consequently, the advantages attributed to statistical plasmodes can also turn into challenges. Further potentially critical steps in the construction of plasmode data include the representativeness of the underlying data and the choice of the resampling scheme. Below, we discuss these potential pitfalls in more detail from a theoretical perspective, while also providing corresponding examples from the literature.
### Resampling of Covariate Information
In our concept of statistical plasmodes, the simulation is based on the generation of covariable information by resampling from a real data set. This is intended to preserve the characteristics of the original underlying dataset, such as, amongst others, the number and type of covariables and the corresponding dependence structure; see, e.g., Franklin et al. [23], Atiquzzaman et al. [7], Conover et al. [29]. This preservation is primarily achieved through the use of appropriate resampling techniques. Consequently, the applied resampling scheme, which consists of specifying the number of generated datasets (\(N\)) and the resampling technique, is of central importance for the generated plasmode (covariable) datasets.
Of note, while utilizing resampling, statistical plasmodes are arguably even more complicated to analyze because of the additional artificial outcome generation. Consequently, not all established theoretical results concerning resampling might be transferable to the full plasmode dataset, but only to the plasmode covariable datasets. We use the terminology resampling and bootstrap interchangeably and indicate the concrete resampling/bootstrapping technique if necessary. The analysis of resampling methods is almost exclusively formulated in terms of the asymptotic performance of the bootstrap distribution \(L^{*}\) of an estimator \(T\) (e.g., variance, confidence interval) applied to the empirical distribution of the resampled data (e.g., [37]). For statistical plasmode covariable datasets, the estimator \(T\) could be, for example, some function of the covariance matrix of the covariables (preservation of the correlation structure). When considering the statistical plasmode procedure as a whole, the estimator of interest \(T\) typically utilizes the artificial outcomes; e.g., \(T\) could be the linear predictor in ridge regression when investigating its performance compared to other models. The resampling is said to "have worked" if \(L^{*}\) converges weakly to \(L\) (the theoretical distribution of \(T\)) for increasing sample size \(n\) of the underlying data (e.g., [38]). Otherwise, one speaks of "bootstrap failure". In the following, we discuss the influence of the chosen resampling scheme on the generated plasmodes in more detail, focusing mainly on the preservation of the covariable information.
**Number of Plasmode Datasets \(N\).** The specification of the number of resampled plasmode datasets \(N\) is often performed ad hoc and potentially leads to different answers to the same question, in particular if \(N\) is specified too small [39]. In the framework of bootstrap tests, Davidson and MacKinnon [40] propose a pretest procedure for choosing the number of bootstrap samples to minimize the loss of power due to \(N\) being finite. A more general, data-dependent
three-step procedure is proposed by Andrews and Buchinsky [39] who estimate \(N\) to achieve a desired accuracy of the approximation of the bootstrap to the ideal (\(N\to\infty\)) distribution of the estimator of interest. However, to the best of our knowledge, there is no general guideline to theoretically specify the number \(N\) of datasets to be generated in a data-independent way (i. e. without already performing the resampling scheme) such that asymptotic resampling results hold with sufficient accuracy. Moreover, existing results might not be valid for statistical plasmodes due to the additional artificial outcome-generation procedure.
In the plasmode literature, \(N=500\) (e.g., [5, 26, 7]) and \(N=1000\) (e.g., [28, 27, 31]) seem to be popular ad-hoc choices. We have not seen any application in which the choice of \(N\) was explicitly justified or in which the convergence or stability of the subsequent analyses applied to the plasmode data was checked for increasing \(N\). In summary, the number of datasets can be a critical aspect in the generation of _statistical plasmodes_, in particular if convergence of \(T\) is not reached. In Section 4 we provide some recommendations for determining \(N\), which we further illustrate in Section 5.
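A simple, admittedly heuristic, way to check the stability of results for increasing \(N\) is sketched below; it merely monitors the running mean of a per-dataset performance estimate and is not equivalent to the data-dependent procedures of Davidson and MacKinnon [40] or Andrews and Buchinsky [39].

```python
import numpy as np


def stable_number_of_datasets(per_dataset_estimates, tol=0.01, window=50):
    """Heuristic check for the number of plasmode datasets N: the running mean
    of a per-dataset performance estimate (e.g. bias or MSE of a method) is
    declared stable once it varies by less than `tol` over the last `window`
    datasets.  Returns (N, running_means), or (None, ...) if not yet stable."""
    est = np.asarray(per_dataset_estimates, dtype=float)
    running = np.cumsum(est) / np.arange(1, est.size + 1)
    for i in range(window, est.size):
        if np.max(np.abs(running[i - window:i + 1] - running[i])) < tol:
            return i + 1, running
    return None, running
```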
**Resampling Technique.** Resampling can be performed without replacement, as in the \(n\)-over-\(m\) bootstrap (subsampling with \(m<n\)) and the sample-splitting (cross-validation) bootstrap. Subsampling draws from the data-generating process of the original data [41] and has been shown to lead to consistent estimators under minimal conditions (see Theorem 1 in Bickel et al. [37]), as long as the subsampling size \(m\) and the size of the original dataset \(n\) are appropriately specified. Alternatively, resampling with replacement, such as the \(n\)-out-of-\(n\) bootstrap (also called the nonparametric bootstrap), can be utilized. Resampling schemes based on drawing with replacement draw from the empirical probability distribution derived from the underlying data [41] and require additional assumptions for consistent estimation, but are more efficient if the bootstrap "works" [37]. However, the nonparametric bootstrap can fail, for example when the limiting distribution of the estimator has discontinuities, when estimating extrema, and when setting critical values for some test statistics [37, 42, 43]. As a remedy, the \(m\)-out-of-\(n\) bootstrap (sampling \(m\leq n\) with replacement) has been introduced to prevent bootstrap failure, at the price of losing efficiency if the nonparametric bootstrap was consistent. Sampling fewer than \(n\) observations has since been treated as a "cure-all" method (being asymptotically valid under weak assumptions and not failing), which has been critically discussed, for instance, in Andrews and Guggenberger [43]. A comprehensive overview of resampling techniques is provided, e.g., in Bickel et al. [37].
For increasing sample size \(n\), estimators based on subsampling and the \(m\)-out-of-\(n\) bootstrap become more similar as the probability of repeating observations decreases. Note that, contrary to subsampling, resampling with replacement allows for \(m=n\). The additional requirements for the consistency of sampling with replacement compared to sampling without replacement mainly state, informally speaking, that the influence of tied observations on the bootstrap estimator should be small [38].
In the majority of the literature concerning plasmode generation, the \(m\)-out-of-\(n\) bootstrap has been used (e.g., [5, 25, 30]), whereas the nonparametric bootstrap has been used by Rodriguez et al. [32], the sample-split bootstrap by Gerard [44], and subsampling by Hafermann et al. [31]. In some publications we did not find indications of whether resampling was performed with or without replacement, e.g., in Ju et al. [45], Ripollone et al. [27]. In total, the type of resampling technique influences the asymptotic properties of the covariables and hence of the plasmode datasets, affects whether the resampling "has worked", and consequently impacts subsequent analyses on the generated plasmode datasets. However, to the best of our knowledge, we have not seen any application in the plasmode data generation literature in which the choice of a particular resampling technique has been explicitly justified.
**Resampling Size \(m\).** Using resampling with replacement of size \(m\), with \(m\to\infty\) and \(m/n\to 0\), typically resolves failure of the \(n\)-out-of-\(n\) bootstrap, but makes the specific choice of \(m\) a key issue [38]. An adaptive rule for the choice of \(m\) for subsampling and the \(m\)-out-of-\(n\) bootstrap in the case of independent observations has been proposed by Bickel and Sakov [38] and is further illustrated in our example in Section 5. Informally speaking, if \(m\) is in the right range of values, the bootstrap distributions of the estimator for similar \(m\)'s are close to each other, indicating consistency of the estimator. The rule provides an adaptive estimator \(m^{*}(n)\) and leads to optimal convergence rates of the estimator,
irrespective of whether the nonparametric bootstrap would work in the example (then \(m^{*}(n)/n\to 1\) as \(n\to\infty\)) or would fail (then \(m^{*}(n)/n\to 0\)).
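A simplified sketch of such an adaptive choice of \(m\) is given below; it only mimics the spirit of the Bickel-Sakov rule by comparing \(m\)-out-of-\(n\) bootstrap distributions on a geometric grid of candidate sizes, and it assumes that the supplied statistic is on a comparable scale across \(m\) (the original rule handles this via an explicit normalization).

```python
import numpy as np
from scipy.stats import ks_2samp


def adaptive_m(data, statistic, q=0.75, B=500, min_m=10, seed=1):
    """Pick a resampling size m: compute m-out-of-n bootstrap distributions of
    `statistic` on the grid m_j = ceil(q^j * n) and return the m whose
    distribution is closest (Kolmogorov-Smirnov distance) to that of the next
    smaller candidate."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    n = data.shape[0]
    ms, m = [], n
    while m >= min_m:                   # geometric grid of candidate sizes
        ms.append(m)
        m = int(np.ceil(q * m))
        if m == ms[-1]:
            break                       # guard against a stalled grid
    if len(ms) < 2:
        return n                        # too little data to compare grids
    boot = {mm: np.array([statistic(data[rng.choice(n, size=mm, replace=True)])
                          for _ in range(B)]) for mm in ms}
    dists = [ks_2samp(boot[ms[j]], boot[ms[j + 1]]).statistic
             for j in range(len(ms) - 1)]
    return ms[int(np.argmin(dists))]
```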
To the best of our knowledge, only fixed resampling sizes \(m\) have been chosen in the plasmode literature, and we have not observed any explicit justification of the specific value of \(m\); in other words, \(m\) appears to have been chosen arbitrarily. For instance, Hafermann et al. [31] used a selection of \(m\)'s (\(250,500,1,000,2,000,4,000\)) that are small compared to the number of observations \(n=198,895\), while Liu et al. [28] chose \(m=500\) for a dataset with \(n=646\), and Atiquzzaman et al. [7] sampled \(m=75,000\) out of \(n=84,452\). Interestingly, different authors used different, unjustified values of \(m\) for the same underlying dataset, as exemplified by the NSAID dataset with \(n=49,653\): while \(m=30,000\) was picked by Franklin et al. [5] and Ripollone et al. [27] used a comparably large \(m=25,000\) as well, Ju et al. [45] chose a much smaller value of \(m=1,000\) and Wyss et al. [30] set \(m=10,000\). In summary, when applying subsampling or the \(m\)-out-of-\(n\) bootstrap, the value of \(m\) matters for the consistency of \(T\), and should be properly justified and adapted to the underlying data and estimator(s) of interest.
**Covariable Dependence Structure and HDD.** Resampling the covariable information has, amongst others, the aim of preserving the covariable dependence structure of the underlying dataset; see, e.g., Franklin et al. [5], Karim et al. [25], Conover et al. [29]. Under some assumptions, such as i.i.d. observations and finite fourth moments of the covariables, Beran and Srivastava [46] have shown that the resampled covariance matrix converges to the original covariance matrix for the nonparametric bootstrap when \(n\) increases and the number of covariables \(p\) is fixed (i.e., most HDD situations excluded; see also below). However, for other resampling schemes, similar results have, to the best of our knowledge, not been shown. For the \(m\)-out-of-\(n\) bootstrap, the optimal \(m\) could be estimated with the estimator \(T\) specified to represent the covariable covariance matrix, in order to investigate and ensure that the resampling scheme works (at least for that particular aspect of the data); see also our example in Section 5. However, other aspects of the covariable information, such as extreme values, might be more important in some applications.
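Whether a given resampling scheme preserves the empirical covariance structure can be checked directly on the data at hand, for instance with a simple diagnostic such as the following sketch; the relative Frobenius distance used here is only one of many possible criteria and is an illustrative choice, not a prescribed one.

```python
import numpy as np


def covariance_preservation(X, m, N=200, replace=True, seed=1):
    """Average relative Frobenius distance between the covariance matrix of
    m-out-of-n resampled covariate data and that of the original data X (n x p);
    small values indicate that the resampling scheme preserves the empirical
    dependence structure."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    S = np.cov(X, rowvar=False)
    denom = np.linalg.norm(S, "fro")
    errs = []
    for _ in range(N):
        idx = rng.choice(n, size=m, replace=replace)
        S_b = np.cov(X[idx], rowvar=False)
        errs.append(np.linalg.norm(S_b - S, "fro") / denom)
    return float(np.mean(errs))
```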
A well-discussed issue, in particular in the context of HDD, is the occurrence of spurious correlations. Amongst others, Fan et al. [16] have shown that sampling \(p\) independent normal \(n\)-vectors leads to empirical covariance structures strongly deviating from a diagonal matrix, in particular if \(p\gg n\). However, the risk of spurious correlations is not limited to parametric simulations. An increasing number of covariates \(p\) increases the risk that the underlying data sample suffers from spurious correlations, which may be propagated to the generated plasmode datasets by resampling. Furthermore, spurious correlation is likely to distort the empirical covariance structure of the statistical plasmodes, leading to even stronger deviations from the population covariance matrix.
For a fixed number of covariables \(p\), the bootstrap has been shown to work in linear models if \(p/n\) is small [47]. If the number of covariables grows with the number of observations, Mammen [48] has shown that the bootstrap works for effect estimates in high-dimensional linear models if \(p(n)\to\infty\) and \(p(n)/n\to 0\) as \(n\to\infty\), and Karoui and Purdom [49] have shown that the confidence intervals of the pairs and residual bootstrap in linear models are too wide if \(p(n)/n\to c,c\in\mathbb{R}\).
In summary, the goal of preserving the covariable dependence structure can be used to determine an optimal resampling scheme. In particular in high-dimensions, the covariance structure could, however, be distorted by spurious correlations and whether resampling and subsequently statistical plasmodes work in these scenarios might require additional research.
### Representativity of the Underlying Data Sample
One of the main assets attributed to statistical plasmode simulations is that they are expected to preserve the complex real-world data structure by resampling the covariable information from a real dataset. Naturally, an appropriate representative dataset has to be available and constitutes the basis for the entire plasmode simulation study. Parametric simulations, on the other hand, can be artificially constructed without requiring representative data. The data sample is expected to represent the population of interest. This limits the generalisability of the results of the analyses that the
plasmode simulation study was designed for, which has been acknowledged, amongst others, by Franklin et al. [5], Liu et al. [28], Atiquzzaman et al. [7].
The data sample should satisfy the assumptions of the applied resampling technique. As a result, the choice of the resampling technique depends strongly on the underlying dataset at hand. Standard resampling techniques, such as those discussed above, assume that the observations are independent (e.g., [37]). This assumption is violated, for instance, if the observations show clusters, repeated measures, population structure, or longitudinal measurements. In this context, more sophisticated resampling schemes, including block-wise resampling, have to be applied, for which most of the asymptotic results are not explicitly formulated [38].
Depending on the underlying data and the resampling scheme, the characteristics of the original dataset to be conserved might not be reflected by the generated data. This is acknowledged by Karim et al. [25] who state that "[...] it is possible that important confounders in the empirical study might not remain important in the plasmode samples".
In summary, the generated statistical plasmode datasets depend strongly on the representativeness of the underlying real data and are limited to the population represented by the data sample. The resampling scheme should be adaptive to the characteristics of the real data such as population structure, which have to be identified and reported.
### Investigator's Choice of the "Truth"
The concept of plasmode simulations is mainly based on preserving the complex but realistic structure of the underlying data while inserting some "truth" by investigator's choice. These specifications can be manifold in type and potentially distort the real-world characteristics of the generated plasmode data.
**Artificial Covariables.** In addition to resampling covariable information, important covariables such as exposure or treatment variables can be artificially created to model some aspects of the simulation study. For instance, Franklin et al. [24] and Conover et al. [29] model a binary exposure variable in relation to confounder variables via logistic regression, while Rodriguez et al. [32] simulate covariables at a later stage of a longitudinal study. Naturally, artificially generating covariables does not preserve the full real-world setting and should be performed with care.
**Artificial Outcome Generation.** With the covariable information generated (by resampling or artificially), corresponding artificial outcomes are created, in our concept of statistical plasmodes, according to some outcome-covariable association specified by the investigator. A straightforward way to create a transparent association between resampled covariates and the artificial outcomes is to utilize regression models specified by the combination of a link function (type of OGM) and the linear predictor (effect structure).
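To make this concrete, the following minimal Python sketch (not taken from any of the cited studies; all names are our own illustrative choices) shows how resampled covariables, an investigator-chosen effect vector and a logistic link could be combined to draw binary artificial outcomes.

```python
# Illustrative sketch only: a logistic OGM combining resampled covariables
# with an investigator-chosen "true" effect vector to draw binary outcomes.
import numpy as np

rng = np.random.default_rng(1)

def generate_binary_outcome(X_resampled, beta_true, intercept=0.0):
    """Linear predictor + inverse-logit link + Bernoulli draw."""
    eta = intercept + X_resampled @ beta_true   # linear predictor (effect structure)
    prob = 1.0 / (1.0 + np.exp(-eta))           # logistic link (type of OGM)
    return rng.binomial(1, prob)                # artificial binary outcome vector
```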
The OGM determines the type of the artificial outcome (e. g. binary, survival) and strongly influences subsequent analyses on the generated data. For instance, if the aim of the study is the performance assessment of several models, the model closest to the chosen type of OGM has an advantage induced by the investigator, leading to potentially distorted comparisons. Most commonly, logistic regression is used for binary outcomes (e. g. [24, 25, 7]) and the Cox model for survival endpoints (e. g. [5, 23, 26]), whereas Rodriguez et al. [32] apply an exponential survival model. Normal linear regression is used by Liu et al. [28].
Besides the type of OGM, the determination of the effect structure of the corresponding linear predictor is vital. Parts of the effect vector have been specified by literature review [29], by sampling from independent standard normal distributions [45], by estimation on the original dataset [28, 30] or manually by investigator's choice [5, 23, 26]. Some authors specify the treatment or exposure effect by hand while estimating the confounder effects on the original data (e. g. [25]). Specifying the value of the effects of covariables might represent a strong intervention in the generation process of a realistic dataset. Potential problems include the creation of artificial outcome-covariate associations and invalidating or even nullifying existing "real" associations between the covariables and the novel, artificial outcomes, in particular if the effects are set manually by investigator's choice. If the effects are estimated, they depend on the underlying data sample and estimation uncertainty is ignored. Additionally, the specification of the type of OGM influences effect
estimates and subsequent analyses might become problematic. For example, an effect vector estimated by a sparse method will induce advantages of sparse methods in model comparisons on the generated data.
In summary, a crucial assumption for both parametric and plasmode simulations is that the chosen outcome generation reflects realistic, natural or biological associations between outcome and covariables. Whereas the choice of the OGM and the effect structure is a natural aspect of parametric simulation studies, plasmodes are often described as closely depicting reality. However, the outcome data in statistical plasmodes are also artificially created while inducing some investigator's choice "truth". These manipulations of the real data are harder to assess and less transparent than in parametric simulation studies as the structure of the data is typically more complex. In the end, plasmode generation also leads, at least in part, to artificial data and constructed associations.
## 4 Statistical Plasmodes: Step-by-step Recommendations
We provide a hands-on overview of our recommendations for the generation and reporting of our concept of _statistical plasmodes_ in Figure 1. This summary extends the basic plasmode generation procedure described in Section 2 and addresses the critical steps discussed in the previous section. We theoretically discuss our step-by-step procedure below and illustrate its application in a real data example in Section 5.
**Step 1: Planning of the Simulation Study.** We recommend clearly formulating the research problem and planning the simulation study using the ADEMP criteria [2]. Fixing the aims and the data-generating processes aids in the choice whether statistical plasmodes are needed in the first place or whether a parametric simulation study might be more appropriate. Additionally, it guides the choice of the population of interest in Step 2 and potentially the choice of the resampling technique in Step 4. Importantly, the methods and performance measures indicate the subsequent choice of the OGM in Step 6. For instance, if we plan to assess the prediction performance of several models, we should make sure that the chosen OGM does not bias the subsequent model comparisons. Also, the choice of the OGM and data-generating process determine the scenarios in which the properties of novel statistical methods can be empirically assessed in the context of the simulation study.
Figure 1: Statistical plasmode data generation procedure step-by-step
**Step 2: Population of Interest.** The population of interest can be of primary interest and consequently be strongly connected to the aim of the simulation study and the data-generating processes determined in the previous step, in particular if methods are developed to deal with populations with certain characteristics (e. g. many missing values, complex covariance structure, high-dimensionality). However, we might also be mainly interested in the analysis of statistical methods such that the population of interest serves mainly as an illustration and is not necessarily connected to Step 1. In the latter case, particular effort should be taken to clarify why the chosen population covers those situations in which the methods under consideration are claimed to work. In any case, the hypothetical population should be stated and described as clearly as possible, e. g. the entity of interest, covariables and population structure. Since the datasets generated by the statistical plasmodes should be representative of the population of interest, this step influences several of the following steps, in particular Steps 3, 4 and 6.
**Step 3: Representative Sample.** It is a central aspect of statistical plasmodes that the underlying sample is representative of the population of interest clarified in the previous step; refer also to the discussion in Section 3.2. Consequently, it is vital to investigate and communicate why the utilized data sample represents the population of interest and which potential limitations arise. In particular, it should be stated how the data was sampled, which covariables are included and what the endpoint of interest is. Additionally, the sample size should be justified and potential population structures investigated, as this can influence the choice of the resampling technique, see Steps 4 and 5 and the discussion in Section 3.1. Note that even if the sample is representative, the generated plasmode datasets might not be, e. g. as a result of a poor resampling plan or an outcome generation that distorts the relationship between the covariables or the outcome-covariable association.
**Steps 4 & 5: Resampling Scheme.** The resampling scheme consists of the number of bootstrap samples, the type of resampling technique used and, if applicable, the justification of the resampling size, see also Section 3.1. Together with the data sample, it determines the plasmode covariable datasets. Each aspect of the resampling scheme plays a crucial role for the asymptotic properties of the estimators applied to the data generated by resampling and for properties such as the preservation of the covariable correlation structure, see the discussions in Section 3.1. Unfortunately, the resampling scheme has to be decided on for each application individually, while keeping in mind those research aims and properties of the population of interest that should be preserved with high priority. Ensuring that the plasmode sets are drawn from the hypothetical population is only possible by applying subsampling [41], whereas sampling with replacement draws from the empirical distribution of the data sample specified in Step 3. However, if the underlying dataset is representative of the population of interest, drawing with replacement might become preferable due to its increased efficiency and second-order properties [37]. As described in Section 3.1, the nonparametric bootstrap potentially fails, although this is often impossible to know before the application. To avoid bootstrap failure, we recommend utilizing the \(m\)-out-of-\(n\) bootstrap, although this might lead to efficiency losses. The optimal resampling size \(m\) can be determined by applying the algorithm introduced in Bickel and Sakov [38] while using those properties of priority as estimator in Step 2 of the optimization algorithm for \(m\). It has to be noted that the estimation of \(m\) might require high additional computational cost. In many applications it might be meaningful to opt for some function of the covariable covariance structure as an estimator, as it is often stressed that the empirical dependence structure of the original dataset should be preserved.
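As a minimal illustration of Steps 4 and 5, the following Python sketch (our own illustrative code, assuming independent observations stored row-wise in a matrix `X`) generates \(N\) plasmode covariable datasets via an \(m\)-out-of-\(n\) bootstrap.

```python
# Minimal sketch of the resampling step: draw N plasmode covariable datasets
# of size m from the n x p covariable matrix X (m-out-of-n bootstrap).
import numpy as np

def plasmode_covariable_sets(X, m, N, replace=True, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # With replace=False and m < n this becomes subsampling instead.
    return [X[rng.choice(n, size=m, replace=replace)] for _ in range(N)]
```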
**Step 6: Outcome Generating Model.** The choice of the OGM includes the type of model to determine the association between the resampled covariables and the novel artificial outcomes, as well as corresponding OGM components such as effect sizes, for example. The OGM determines the artificial outcomes in type and value, and is a crucial
component for many research questions formulated in Step 1. Also, it gives the investigator the opportunity to fix some aspects of the "truth", see also the discussion in Section 3.3. Special care should be taken that the OGM does not bias the subsequent analyses that the statistical plasmode simulations are generated for. To do so, it might be helpful to investigate the models or methods to be compared in detail and contrast them with the OGM. For instance, a sparse OGM will most likely support sparse models in subsequent model comparisons. If the effect structure is chosen in a sparse way, a sparse model might be more likely to correctly estimate the effect sizes or perform valid predictions. Additionally, if important relationships between variables have been detected, the effect structure should be chosen accordingly to preserve these. For instance, in linear predictor models, the observed outcome variation depends on the (co-)variances of the covariables weighted by their corresponding effects, stressing their influence on the artificial outcomes of the plasmode datasets.
**Step 7: Outcome Generation.** Each of the \(N\) plasmode covariable datasets sampled in Steps 4 and 5 is combined with the OGM determined in Step 6 to create \(N\) corresponding artificial plasmode outcome vectors.
**Step 8: Quality Checks.** The quality of the covariables can be assured by appropriate resampling as described in Steps 4 and 5. It is, however, often not feasible to compare the original covariable covariance structure with those of the \(N\) statistical plasmode datasets. More research might be necessary to judge the distance of the original and the generated data. The original outcome values of the real dataset are, if at all, only explicitly used to determine the effect structure in Step 6. The quality of the simulated data could be checked by comparing the generated outcomes of (some of) the statistical plasmode datasets with the original outcome. The type of potentially meaningful checks depends on the type of outcome. For continuous observations, the distributions of the two outcomes could be compared by the empirical densities or histograms as is done e. g. in Franklin et al. [5]. Additionally, the range of the data should be checked as well as potential outliers. For categorical (including binary) outcomes, the prevalence of the classes can be compared, see for example Franklin et al. [5].
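A simple way to operationalize such checks is sketched below (illustrative Python, assuming a continuous outcome); comparing means, spreads and ranges complements the graphical comparisons mentioned above.

```python
# Sketch of basic quality checks: compare location, spread and range of the
# original outcome with a few of the generated plasmode outcomes.
import numpy as np

def outcome_summary(y):
    return {"mean": float(np.mean(y)), "sd": float(np.std(y)),
            "min": float(np.min(y)), "max": float(np.max(y))}

def compare_outcomes(y_original, plasmode_outcomes, n_show=3):
    print("original:", outcome_summary(y_original))
    for b, y_b in enumerate(plasmode_outcomes[:n_show]):
        print(f"plasmode {b + 1}:", outcome_summary(y_b))
```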
**Step 9: Reporting.** Each step of the statistical plasmode generation should be justified and reported to enhance reproducibility and transparency of the proposed data generation procedure. Whenever appropriate, plasmode generation should follow the scheme presented in Figure 1 and the corresponding descriptions provided in the present section. Additionally, the research question determined in Step 1 should be addressed.
## 5 Statistical Plasmodes: A numerical example
The following example has been constructed to illustrate the step-by-step procedure introduced in the previous section.
**Step 1.** Assume that we are interested in the aim (A) of investigating the application of ridge regression [50] and the linear mixed model (e. g. [51]) in the context of high-dimensional RNA-expression data with sparse effects on a normal outcome (data-generating process, D). The estimands (E) are specified as the parameter vector and the linear predictor in the respective model. We split the sample once into training and test data (\(2:1\)), which we deem sufficient for our illustration purposes. The plasmode datasets are generated using the training data. Each plasmode dataset of size \(m\) and number of covariates \(p\) is analysed (methods, M) using ridge regression of the form
\[y=\mu 1_{m}+X\beta+\varepsilon,\quad\|\beta\|_{L_{2}}^{2}\leq\lambda,\quad \varepsilon\sim\mathcal{N}(0,\sigma^{2}I_{m\times m}) \tag{1}\]
via penalized maximum likelihood with cross-validation for \(\lambda\) as implemented in the R-package glmnet [7], as well as the linear mixed model in the variance components form
\[y=\mu 1_{m}+X\beta+\varepsilon,\quad\beta\sim\mathcal{N}(0,\sigma_{\beta}^{2}I _{p\times p}),\quad\varepsilon\sim\mathcal{N}(0,\sigma_{\varepsilon}^{2}I_{m \times m}) \tag{2}\]
with restricted maximum likelihood estimation as implemented in the R-package sommer [52]. Here, \(1_{m}\) denotes the \(m\)-column vector of ones while \(I_{p\times p}\) denotes the identity matrix of dimension \(p\). As performance measures, we utilize
the mean absolute bias
\[\mathrm{MAB}=\frac{1}{p+1}\|(\hat{\mu},\hat{\beta})-(\mu,\beta)\|_{L_{1}} \tag{3}\]
where \(\mu\) and \(\beta\) are known as part of the "truth", and the sample-split mean squared error of prediction
\[\mathrm{MSEP}=\frac{1}{m}\|\hat{y}-y_{\text{test}}\|_{L_{2}}^{2}, \quad\hat{y}=\hat{\mu}1_{m}+X_{\text{test}}\hat{\beta} \tag{4}\]
where \(y\) corresponds to the artificial outcome in the test split. We estimate both measures using the mean of the estimates (indexed by superscript \(b\)) in the generated \(N\) statistical plasmode datasets
\[\widehat{\mathrm{MAB}}=\frac{1}{N}\sum_{b=1}^{N}\frac{1}{p+1}\|( \hat{\mu},\hat{\beta})^{(b)}-(\mu,\beta)\|_{L_{1}},\quad\widehat{\mathrm{MSEP} }=\frac{1}{N}\sum_{b=1}^{N}\frac{1}{m}\|\hat{y}^{(b)}-y_{\text{test}}\|_{L_{2}} ^{2}. \tag{5}\]
and visualize the \(N\) individual measures via boxplots, see Step 9 and Figures 4 and 5.
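For orientation, the performance measures above can be computed as in the following Python sketch; it uses scikit-learn's cross-validated ridge regression in place of glmnet and omits the linear mixed model fit (performed with the R-package sommer in our actual example), so it is an illustration rather than a re-implementation of the analysis.

```python
# Hedged sketch of the MAB and sample-split MSEP for a ridge fit on one
# plasmode dataset (scikit-learn stands in for glmnet; the LMM is omitted).
import numpy as np
from sklearn.linear_model import RidgeCV

def mean_absolute_bias(mu_hat, beta_hat, mu_true, beta_true):
    est = np.concatenate(([mu_hat], beta_hat))
    truth = np.concatenate(([mu_true], beta_true))
    return np.mean(np.abs(est - truth))                    # equation (3)

def msep(model, X_test, y_test):
    return np.mean((model.predict(X_test) - y_test) ** 2)  # equation (4)

def ridge_performance(X_plasmode, y_plasmode, X_test, y_test, mu_true, beta_true):
    fit = RidgeCV(alphas=np.logspace(-3, 3, 25)).fit(X_plasmode, y_plasmode)
    return (mean_absolute_bias(fit.intercept_, fit.coef_, mu_true, beta_true),
            msep(fit, X_test, y_test))
```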
**Step 2.** In the scope of this example, we are interested in the model choice for high-dimensional RNA-expression data with normal outcomes for female breast cancer patients which constitutes the population of interest.
**Step 3.** The data sample underlying the statistical plasmode simulation was generated by The Cancer Genome Atlas (TCGA) Research Network ([https://www.cancer.gov/tcga](https://www.cancer.gov/tcga)). The breast carcinoma (BRCA) cohort which provides a basis for the following numerical example was last updated on May 31, 2016.
We restrict the publicly available data to \(n=1098\) female breast cancer patients with cancer tissue samples, i. e. excluding normal tissue and male patients. RNAseqV2 gene expression data and clinical data for BRCA were obtained from the TCGA Data Portal [53] via the R/Bioconductor package TCGAbiolinks [54, 55, 56]. For computational reasons, we choose \(p=5000\) out of the \(25828\) available genes at random. The R/Bioconductor package limma [57] has been utilized to normalize the RNA gene expression data. The expression levels can be assumed to be measured continuously and they show different shapes and ranges. This is illustrated in Figure 2 using their empirical distributions at four randomly chosen genes.
The outcome of interest is age at diagnosis date, which can be considered to be approximately normally distributed, see Figure 3A. While the dataset can be considered to be representative of a female breast cancer population from the United States of America, we acknowledge that RNA expression data from other populations (e. g. different countries) might lead to different results for our research question.
**Step 4.** Before the analysis, we set the number of plasmode datasets to be generated to \(N=500\). In the final step 9, we investigate the convergence of the estimators of the performance measures, see equation (5), in the statistical plasmode datasets, see also Figure 6.
We choose the \(m\)-out-of-\(n\) bootstrap in order to prevent potential bootstrap failure, with the potential drawback of losing estimation efficiency. Performance analysis of the resampling method and the estimation of the optimal \(m\) require the specification of an estimator which is applied to the generated data. Since resampling in statistical plasmodes is primarily concerned with the covariate information, using the performance measures defined in equation (5) as estimators is not feasible, as they require the subsequent artificial outcome generation. Naturally, there are several reasonable estimators that could be considered. In this example, we opt for the covariate dependence structure as the measure of interest because the majority of publications which applied plasmodes referred to the advantage of preserving the original covariable dependence structure.
We determine the resampling size \(m\) via the algorithm described in Bickel and Sakov [38]. In particular, to adapt that algorithm to our problem formulation, we specify the sequence of potential \(m\)'s by setting \(q=0.97\), choose the \(L_{2}\)-norm of the covariance matrix (of the resampled covariate data) as a metric, and calculate the resulting empirical distribution functions. We estimate the covariance matrix using the Ledoit-Wolf linear shrinkage estimator [58] to obtain a
more precise estimate which is necessary because the covariate data are high-dimensional. The optimal resampling size \(m^{*}\) is the one which minimizes the distance between the distributions of subsequent \(m\)'s, where the distance is exemplarily measured by the Wasserstein metric. The optimal resampling size based on the Wasserstein metric using \(100\) iterations resulted in \(m^{*}=711\).
We acknowledge that there is a variety of optimal resampling sizes \(m^{*}\) if any of the parameters of the algorithm were changed (such as, amongst others, the estimator, the distance metric for empirical distributions and the sequence of potential \(m\)'s).
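The following Python sketch outlines the general idea of this \(m\)-selection step under several simplifying assumptions (it is not a faithful re-implementation of the algorithm of Bickel and Sakov [38]): the spectral norm is used as one possible reading of the \(L_{2}\)-norm of the covariance matrix, the grid of candidate \(m\)'s and the selection convention are simplified, and scikit-learn's Ledoit-Wolf estimator replaces the R implementation used in our analysis.

```python
# Rough sketch of the m-selection idea: for a decreasing grid m_j ~ n * q^j,
# resample repeatedly, summarize each resample by the norm of its shrinkage
# covariance estimate, and pick the m whose distribution of norms changes
# least between neighbouring grid points (Wasserstein distance).
import numpy as np
from sklearn.covariance import LedoitWolf
from scipy.stats import wasserstein_distance

def cov_norm_distribution(X, m, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    norms = []
    for _ in range(n_iter):
        Xb = X[rng.choice(n, size=m, replace=True)]
        sigma = LedoitWolf().fit(Xb).covariance_      # shrinkage covariance estimate
        norms.append(np.linalg.norm(sigma, ord=2))    # spectral norm, one reading of "L2-norm"
    return np.asarray(norms)

def select_m(X, q=0.97, n_grid=20, n_iter=100):
    n = X.shape[0]
    ms = sorted({int(np.ceil(n * q ** j)) for j in range(1, n_grid + 1)}, reverse=True)
    dists = [cov_norm_distribution(X, m, n_iter) for m in ms]
    gaps = [wasserstein_distance(dists[j], dists[j + 1]) for j in range(len(ms) - 1)]
    return ms[int(np.argmin(gaps))]                   # m with the most stable distribution
```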
**Step 5.** We apply resampling with replacement of size \(m^{*}=711\) to the matrix of covariable information to obtain \(N=500\) statistical plasmode covariable datasets. As we have determined the resampling size using the \(L_{2}\)-norm of the covariable covariance matrix as the optimality criterion, the empirical covariance structure of the original dataset should be sufficiently preserved.
**Step 6.** We choose the LASSO [59] as an appropriate OGM to represent the sparse effect structure associated with the high-dimensional data as required in Step 1. Additionally, the LASSO most likely does not distort the comparison between ridge regression and the linear mixed model as both of these methods are shrinkage methods used to model polygenic effects. The "true" effect structure for the LASSO is chosen as the vector of estimated effect sizes obtained after a LASSO had been fit to the original data. The proportion of estimated effects set to zero was \(95.4\%\) (\(4768\) vs \(232\)). This implies that \(232\) covariables are selected in the investigator's choice "truth" while \(4768\) genes are given a null effect. The effect sizes of the selected covariables have a median of \(-0.01\) (range \([-2.97,2.05]\)).
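A sketch of this step in Python is given below; it uses scikit-learn's LassoCV in place of the glmnet fit actually used, so the selected genes and effect sizes would not be numerically identical to those reported above.

```python
# Sketch of Step 6: estimate the investigator's-choice "true" effect vector by
# a cross-validated LASSO fit to the original training data (LassoCV stands in
# for glmnet here).
import numpy as np
from sklearn.linear_model import LassoCV

def lasso_truth(X_train, y_train, seed=0):
    fit = LassoCV(cv=5, random_state=seed).fit(X_train, y_train)
    beta_true = fit.coef_                      # sparse effects, many exact zeros
    mu_true = fit.intercept_
    n_selected = int(np.sum(beta_true != 0))   # covariables given a non-zero effect
    return mu_true, beta_true, n_selected
```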
**Step 7.** We generate one artificial outcome vector of size \(m^{*}=711\) for each of the \(N=500\) plasmode covariate datasets by calculating the linear predictor based on the combination of the resampled covariable information (Step 5) and the "true" effects (Step 6). Thus, we obtain statistical plasmode simulations based on real covariate information with an investigator's choice "truth".
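In code, this step reduces to a single line per plasmode dataset, as in the following sketch (matching the description above, the outcome is the pure linear predictor; adding a noise term would be a possible variation).

```python
# Sketch of Step 7: combine each resampled covariable set with the "true"
# effects to obtain the artificial outcome vectors.
def plasmode_outcomes(covariable_sets, mu_true, beta_true):
    return [mu_true + X_b @ beta_true for X_b in covariable_sets]
```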
**Step 8.** The artificial outcomes of (some of) the \(N=500\) plasmode datasets are compared with the original outcomes via histograms in Figure 3. The distributions of the original and artificial outcomes are very similar in shape and mean. The range of the original outcomes is larger than the range of the artificial outcomes, which can be explained by the outcome generation via resampled covariables and effects determined by LASSO (sparse and shrunken effects), which most likely will not lead to more extreme outcome values than contained in the underlying dataset.
We conclude that the artificial outcome data come close to reality but might not properly reflect extreme values. The range of the artificial outcomes could be increased, e. g., by manually altering some elements of the effect vector estimated by LASSO (as investigator's choice of the "truth"). By doing so, however, we would further alter the association between some of the covariables and the novel outcomes.
**Step 9.** Finally, we compare the performance of ridge regression and the linear mixed model in our statistical plasmode simulations. The MAB of ridge regression is estimated as \(0.025\) while the estimate of the MAB of the linear mixed model is \(0.022\), see also equation (5). The sample-split MSEP of ridge regression is estimated as \(18.30\) while the sample-split MSEP of the linear mixed model is \(12.83\), see also equation (5). In Figures 4 and 5 we depict the estimated values for each plasmode dataset via boxplots. This suggests that in our generated statistical plasmode simulations, which represent high-dimensional RNA expression data with sparse known effects and artificial normal outcomes, the linear mixed model performs better than ridge regression.
Additionally, we illustrate in Figure 6 the convergence of the performance measures for an increasing number of plasmode datasets. The estimators for MAB and MSEP for both ridge regression and the linear mixed model seem to have stabilized at about \(300\) generated simulations. Thus, we conclude that the generated number of statistical plasmodes is sufficient to obtain stable estimates of the performance measures defined in Step 1.
## 6 Conclusions and Outlook
Many simulation studies impose relatively strong assumptions regarding the nature of randomness in the data and its dependence structure. Mostly of a theoretical kind, those assumptions primarily rely on the assumptions inherent in the statistical models applied to generate the data. Since not all assumptions can be justified in applied settings, the corresponding simulation studies may not be able to capture biologically meaningful relationships and thus result in misleading conclusions and research findings.
To avoid (at least some of) those issues, plasmode data sets are considered as an alternative data generation approach. While parametric simulations are known to provide only a partial representation of reality [4], plasmodes have been declared to generate data that resemble reality in the closest way (e. g. [9]). Highly appreciated for their ability to generate most realistic data, plasmodes do not impose any specific model assumptions on their data generation process. Thus, no assumptions need to be justified to address the applicability of plasmodes. Nevertheless, a number of assumptions such as the representativeness of the underlying data sample have to be verified in order to guarantee the reliability of the generated plasmode data.
Plasmodes can accommodate unknown features such as dependence structure, distributions, and others, in particular in the case of high-dimensional data. We recall that in the case of parametric simulations most of those quantities are to be specified in advance. All in all, plasmode data sets may provide an attractive supplement to parametric simulations and can be applied in order to increase the reliability of the obtained research results.
Figure 2: Empirical distributions illustrated by histograms (15 breaks each) and smoothed densities for four genes selected at random to illustrate the differences in location and shape.
In the present paper, we first discuss the concept of statistical plasmodes as those created by resampling of covariate information from empirical data at hand and subsequent outcome generation using an appropriate outcome-generating model. This is what distinguishes them from biological plasmodes which are usually created by conducting lab experiments. We interpret statistical plasmodes as an intermediate step between the parametric and nonparametric simulations, with the parametric component represented by the chosen outcome-generating model. After the introduction of statistical plasmodes, we discuss their main advantages and challenges and propose a step-by-step scheme for their generation and reporting. That scheme is then illustrated by means of a numerical example. All discussions in the present paper are presented in the context of prediction and explanatory models.
Plasmodes are bound to the sample they are based on, and thus cannot produce the same variety of different scenarios as parametric simulations do. In this context, questions on data availability and representativeness arise. In particular, even if plasmodes offer a flexible data generation procedure which creates realistic data, the representativeness of the generated data still substantially depends on the representativeness of the underlying real data set. To address this limitation, some authors such as Ejima et al. [35] assume that the empirical data at hand represent the entire population of interest. Of course, such an assumption cannot be satisfied in every situation.
Spurious correlations are another issue closely related to the question of representativeness. Although plasmodes do not specify the underlying dependence structure explicitly, they do reproduce it to a certain extent while generating new data. Thus, if the sample at hand does not adequately represent the population of interest, the existing spurious correlations may be increased or even distorted for the generated plasmode data sets. As a result, the corresponding generated dependence structure will not represent the real one.
Figure 3: Empirical distributions illustrated by histograms (15 breaks each) and smoothed densities for A: the original outcome (age at diagnosis) and B-D: artificial outcomes of three plasmode datasets selected at random.
Statistical plasmodes as introduced in the present paper incorporate features from both parametric simulations and resampling approaches, and, as a result, inherit the strengths and weaknesses of each data generation method. On one hand, statistical plasmodes offer the advantage of creating more realistic data by generating covariate information through resampling techniques. On the other hand, they may also introduce certain challenges with respect to subsequent model comparisons, as compared to purely parametric simulations [30]. Statistical plasmodes enable the investigator to control and manipulate certain aspects of the "truth" through the use of parametric OGMs, which can be advantageous over pure resampling methods. Nevertheless, asymptotic results established for resampling techniques may not be directly applicable to statistical plasmodes.
Our discussion points out several interesting options for future research. First, basic expectations placed on the plasmodes are related to their ability to preserve real data distributions, the underlying dependence structure and, as a result, the existing empirical associations. Those expectations are to be guaranteed by resampling from the observed covariate data at hand, without any additional data modification. However, it is not obvious how the choice of a particular resampling technique and the specification of its parameters (such as the subsampling proportion in case of the subsampling technique) might impact the robustness of the obtained data generation results, e. g., in the context of spurious correlations or sparse data. Additionally, the calculation of the optimal \(m\) might require high computational costs. A closer analysis of these impacts is a possible topic for future research.
Figure 4: Boxplots of the mean absolute bias for both the linear mixed model and ridge regression in \(500\) statistical plasmode datasets.
Second, a data generation method is considered to be realistic if it reflects the real data structure and the existing dependencies in the most accurate way. Thus, appropriate distance measures need to be specified in advance and also included in the reporting step of the data generation procedure. Such measures can then be used to measure the closeness of the generated plasmode data set to the underlying real data set. The choice of an appropriate distance measure, as well as the robustness of the plasmode generation procedure with respect to that choice, can also be an interesting research topic.
Finally, the outcome-generating models present the major obstacle for plasmodes to become a purely non-parametric data generation approach. In the future we intend to analyze the impact of an OGM on the performance of the plasmode data generation procedure and to construct examples where the replacement of a parametric OGM with a non-parametric one improves the obtained data generation results. It is also of great interest to address possible "plasmode failure" for data sets generated through statistical plasmodes.
In total, our paper presents a comprehensive analysis of statistical plasmode simulations, discusses their potentials and central challenges, and provides step-by-step recommendations for their generation. Our future research aims to address (at least some of) these pitfalls as closely as possible, to potentially provide more understanding and further novel insights into statistical plasmode generation.
Figure 5: Boxplots of the sample-split mean squared error of prediction for both the linear mixed model and ridge regression in \(500\) statistical plasmode datasets.
## Acknowledgements
The authors would like to thank Jörg Rahnenführer, Andrea Bommert and Marieke Stolte for helpful discussions.
|
2302.04840 | What are the mechanisms underlying metacognitive learning? | How is it that humans can solve complex planning tasks so efficiently despite
limited cognitive resources? One reason is their ability to know how to use their
limited computational resources to make clever choices. We postulate that
people learn this ability from trial and error (metacognitive reinforcement
learning). Here, we systematize models of the underlying learning mechanisms
and enhance them with more sophisticated additional mechanisms. We fit the
resulting 86 models to human data collected in previous experiments where
different phenomena of metacognitive learning were demonstrated and performed
Bayesian model selection. Our results suggest that a gradient ascent through
the space of cognitive strategies can explain most of the observed qualitative
phenomena, and is therefore a promising candidate for explaining the mechanism
underlying metacognitive learning. | Ruiqi He, Falk Lieder | 2023-02-09T18:49:10Z | http://arxiv.org/abs/2302.04840v1 | # What are the mechanisms underlying metacognitive learning?
###### Abstract
How is it that humans can solve complex planning tasks so efficiently despite limited cognitive resources? One reason is their ability to know how to use their limited computational resources to make clever choices. We postulate that people learn this ability from trial and error (_metacognitive reinforcement learning_). Here, we systematize models of the underlying learning mechanisms and enhance them with more sophisticated additional mechanisms. We fit the resulting 86 models to human data collected in previous experiments where different phenomena of metacognitive learning were demonstrated and performed Bayesian model selection. Our results suggest that a gradient ascent through the space of cognitive strategies can explain most of the observed qualitative phenomena, and is therefore a promising candidate for explaining the mechanism underlying metacognitive learning.
**Keywords: metacognitive learning, planning, strategy discovery, cognitive modelling, reinforcement learning**
## Introduction
Humans frequently face complex problems that require planning long chains of actions to accomplish far-off objectives. A search tree can represent the space of potential future actions and outcomes, which expands exponentially as the length of the action sequences increases. While exponential growth in computational power enables current trends in artificial intelligence, the cognitive capabilities of the human mind are much more constrained. Therefore, people have to make efficient use of their limited cognitive resources (_resource-rationality_) (Lieder & Griffiths, 2020). So, how is it possible that people can still plan so efficiently? One potential explanation is that meta-reasoning, the ability to reason about reasoning, might help people to accomplish more with less computational effort (Griffiths et al., 2019). In the context of planning, this means making wise choices about when and how to plan, that is, whether and how to use computational resources. However, optimal meta-reasoning is generally regarded as an intractable problem (Russell & Wefald, 1991). This raises the question of how people can nonetheless solve the intractable meta-reasoning problem. One possibility is that people learn an approximate solution via trial and error, an idea known as _metacognitive reinforcement learning_ (Lieder & Griffiths, 2017; Krueger, Lieder, & Griffiths, 2017; Lieder, Shenhav, Musslick, & Griffiths, 2018).
This idea has been used in earlier research to explain how people learn to select between various cognitive strategies (Erev & Barron, 2005; Rieskamp & Otto, 2006; Lieder & Griffiths, 2017), how many steps to plan ahead (Krueger et al., 2017) and when to exercise how much cognitive control (Lieder et al., 2018). In the context of planning, previous work suggests that metacognitive reinforcement learning adapts which information people prioritise in their decisions (Jain, Callaway, & Lieder, 2019; He, Jain, & Lieder, 2021) and how much planning they perform to the costs and benefits of planning (He, Jain, & Lieder, 2021). While previous work each focused on explaining individual aspects of metacognitive learning with a small set of models, none of the models was tested to explain _all_ observed qualitative phenomena. In addition, previous findings paint a rather inconsistent and even contradictory picture of how people learn planning strategies, with different articles arguing for different learning mechanisms (Jain, Gupta, et al., 2019; He et al., 2021).
Therefore, in this work, we investigate whether there is one single metacognitive reinforcement learning model that can largely explain all observed phenomena. Our contribution is two-fold: i) We systematically compare all existing models on data collected in empirical experiments where learning-induced changes in people's planning strategies were demonstrated, and ii) we extend existing models to systematically formalize plausible alternative assumptions and all of their possible combinations. This led to 86 different models, which we fit using the maximum likelihood criterion, compare using Bayesian model selection, and examine through model simulations. The winning model gives us an indication of the underlying mechanisms of how people learn planning strategies.
This line of research contributes to the larger goal of understanding metacognitive learning. It also provides a foundation for training programs aiming to improve human decision-making and to help people overcome maladaptive ways of learning planning strategies.
## Background
To model the mechanism of metacognitive learning, we take inspiration from reinforcement learning algorithms and use the framework of meta-decision-making, which we will now briefly introduce and explain how they can be combined into a framework called _metacognitive reinforcement learning_.
### Reinforcement learning
Previous studies suggest that human learning is driven by rewards and penalties gained through trial and error (Niv, 2009), which forms the foundation of reinforcement learning algorithms that learn to predict the potential reward of performing a specific action \(a\) in a specific state \(s\). This estimate \(Q(s,a)\) is updated according to the reward prediction error \(\delta\), which is the difference between actual and expected rewards:
\[Q(s,a)\gets Q(s,a)+\alpha\cdot\delta \tag{1}\]
where \(Q\) denotes the Q-value (Watkins and Dayan, 1992) and \(\alpha\) is the learning rate. To compromise between exploitation and exploration, the agent can pick its actions _probabilistically_, favouring actions with higher predicted values, for example using the softmax rule (Williams, 1992), \(P(a|s,Q)\propto\exp(1/\tau\cdot Q(s,a))\), where \(\tau\) is the inverse temperature parameter.
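For illustration, a tabular Python sketch of the update in Equation 1 and the softmax rule is given below; the integer encoding of states and actions is an assumption made purely for this example and is not part of the models introduced later.

```python
# Minimal tabular sketch of the Q-learning update (Equation 1) and softmax
# action selection; Q is a (n_states, n_actions) array.
import numpy as np

def softmax_action(Q, s, tau, rng):
    prefs = Q[s] / tau                    # 1/tau * Q(s, a), as in the softmax rule above
    probs = np.exp(prefs - prefs.max())   # subtract the max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

def q_update(Q, s, a, r, s_next, alpha, gamma):
    delta = r + gamma * Q[s_next].max() - Q[s, a]   # reward prediction error
    Q[s, a] += alpha * delta
```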
### Meta-decision-making
The brain is supposedly equipped with multiple decision systems that interact in various ways (Dolan and Dayan, 2013; Daw, 2018). The model-based system, in contrast to the Pavlovian and model-free systems, allows for flexible reasoning about which action is preferable but demands a process for deciding which information should be considered for a given decision. Therefore, an important part of deciding how to decide is to efficiently balance decision quality and decision time, known as _meta-decision-making_ (Boureau, Sokol-Hessner, and Daw, 2015). The problem of meta-decision-making has been recently formalized as a meta-level MDP (Krueger et al., 2017; Griffiths et al., 2019):
\[M_{meta}=\left(\mathcal{B},\mathcal{C}\cup\{\bot\},T_{meta},r_{meta}\right), \tag{2}\]
where belief states \(b_{t}\in\mathcal{B}\) denote the model-based decision system's beliefs about the values of actions. The computations of the decision system (\(c_{1},c_{2},\cdots\)) probabilistically determine the temporal development of those belief states \(b_{1},b_{2},\cdots\) according to the meta-level transition probabilities \(T_{\text{meta}}(b_{t},c_{t},b_{t+1})\). The meta-level reward function \(r_{\text{meta}}(b_{t},c_{t})\) encodes the cost of performing the planning operation \(c_{t}\in\mathcal{C}\) and the expected return of terminating planning (\(c_{t}=\bot\)) and acting based on the current belief state \(b_{t}\). Reinforcement learning algorithms, such as Q-learning (see Equation 1), can be used to solve this meta-level MDP.
### Metacognitive reinforcement learning
Finding efficient planning strategies can be formalized as solving a meta-level MDP for the best meta-level policy (Griffiths et al., 2019). However, as it is often computationally intractable to solve meta-decision-making problems optimally, we will assume that the brain approximates optimal meta-decision-making through reinforcement learning mechanisms (Russell and Wefald, 1991; Callaway, Gul, Krueger, Griffiths, and Lieder, 2018) that attempt to approximate the optimal solution of the meta-level MDP defined in Equation 2 by either learning to approximate the optimal policy directly (He et al., 2021) or by learning an approximation to its value function (Jain, Callaway, and Lieder, 2019).
### Experiments
For testing the ability of our models to explain different aspects of metacognitive learning, we will use data from previous work that examined several aspects of metacognitive learning in the domain of planning. He et al. (2021) and He et al. (2021) utilized the Mouselab-MDP paradigm to design two experiments where participants were asked to perform repeated trials of a planning task (see Figure 1). The goal in the experiment was to collect a high score, which signals the adaptiveness and resource-rationality (Lieder and Griffiths, 2020) of the participant at a given trial. The rewards are initially hidden but can be revealed by clicking on the nodes. Each click has a cost. Participants' clicks were recorded because they indicate planning operations that people perform to estimate the values of alternative future locations.
**Adaptation to different environment structures.** In the first experiment, participants were randomly allocated to one of three conditions, where the environment structure rendered either long-term planning (examining the farthest nodes), short-term planning (examining immediate nodes) or best-first search planning (starting with examining immediate and middle nodes and continuing to examine other nodes according to the most promising ones) most beneficial. The results suggested that people gradually learn to use the corresponding adaptive strategies for each environment.
**Adaptation of the amount of planning depending on the costs and benefits of planning.** The second experiment indicated that people do learn how much to plan. For this, participants were assigned to one of four different conditions, each of which differed in the benefit and cost of planning. Their number of clicks indicated whether participants learned to adapt their amount of planning depending on the condition.
## 4 Models and methods
The models of metacognitive learning we test in this article have three components: i) the representation of the planning strategies that the learning mechanism operates on, ii) the basic learning mechanism, and iii) additional attributes. The following three sections introduce these components. We then describe how the models were fit and selected.
Figure 1: Exemplary trial of the planning task
### Mental representation of planning strategies
The planning strategies are modelled as softmax policies that depend on a weighted combination of 56 features (Jain et al., 2022). For instance, one group of features is related to pruning (Huys et al., 2012), which is associated with assigning a negative value to continuing to consider a path whose predicted value is below a specific threshold. Therefore, using this representation, a person's learning trajectory can be described as a time series of the weight vectors that correspond to their planning strategies in terms of those features.
### Basic learning mechanisms
We consider four possible basic learning mechanisms: learning the value of computation, gradient ascent through the strategy space, forming a mental habit, and no learning.
**Learning the value of computation.** According to the Learned Value of Computation (LVOC) model, people learn how valuable it is to perform each planning operation depending on what is already known (Krueger et al., 2017). This is achieved by approximating the meta-level Q-function by a linear combination of the features mentioned above:
\[Q_{\text{meta}}(b_{k},c_{k})\approx\sum_{j=1}^{56}w_{j}\cdot f_{j}(b_{k},c_{k}), \tag{3}\]
The weights of those features are learned by Bayesian linear regression of the bootstrap estimate \(\hat{Q}(b_{k},c_{k})=r_{\text{meta}}(b_{k},c_{k})+\langle\mu_{t},\mathbf{f}(b^{\prime},c^{\prime})\rangle\), which is the sum of the immediate meta-level reward and the anticipated value of the future belief state \(b^{\prime}\) under the present meta-level policy. The predicted value of \(b^{\prime}\) is the scalar product of the posterior mean \(\mu_{t}\) of the weights \(\mathbf{w}\) given the observations from all preceding planning operations and the features \(\mathbf{f}(b^{\prime},c^{\prime})\) of \(b^{\prime}\) and the cognitive operation \(c^{\prime}\) that the current policy picks in that state. To make the \(k^{\text{th}}\) planning operation, \(n\) weight vectors are sampled from the posterior distribution using a generalized Thompson sampling \(\tilde{w}_{k}^{(1)},\cdots,\tilde{w}_{k}^{(n)}\sim P(\mathbf{w}|\mathcal{I}_{k})\), where the set \(\mathcal{I}_{k}=\{e_{1},\cdots,e_{k}\}\) contains the meta-decision-maker's experience from the first \(k\) meta-decisions. Each meta-level experience \(e_{i}\in\mathcal{I}_{k}\) is a tuple \(\left(b_{i},c_{i},\hat{Q}(b_{i},c_{i};\mu_{i})\right)\) containing a meta-level state, the selected planning operation in it, and the bootstrap estimate of its Q-value. The arithmetic mean of the sampled \(n\) weight vectors is then used to predict the Q-values of each potential planning operation \(c\in\mathcal{C}\) according to Equation 3. The LVOC model therefore has the following free parameters: \(p\), the mean vector \(\mu_{prior}\) and variance \(\sigma_{\text{prior}}^{2}\) of its prior distribution \(\mathcal{N}(\mathbf{w};\mu_{prior},\sigma_{\text{prior}}^{2}\cdot\mathbf{I})\) on the weights \(\mathbf{w}\), and the number of samples \(n\).
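To give a flavour of these ingredients, the following compact Python sketch implements Bayesian linear regression with a conjugate normal prior and Thompson sampling over candidate planning operations. It makes the simplifying assumption of a known observation-noise variance and is not the authors' implementation; all names are illustrative.

```python
# Compact sketch of the LVOC ingredients: conjugate Bayesian linear regression
# over feature vectors (known noise variance assumed), Thompson sampling of n
# weight vectors, and greedy choice among candidate planning operations.
import numpy as np

class LVOCSketch:
    def __init__(self, dim, prior_mean=0.0, prior_var=1.0, noise_var=1.0,
                 n_samples=5, seed=0):
        self.rng = np.random.default_rng(seed)
        self.precision = np.eye(dim) / prior_var            # prior precision matrix
        self.b = self.precision @ np.full(dim, prior_mean)  # precision-weighted prior mean
        self.noise_var = noise_var
        self.n_samples = n_samples

    def update(self, features, q_hat):
        """Posterior update for one (belief state, computation, Q-estimate) experience."""
        f = np.asarray(features, dtype=float)
        self.precision += np.outer(f, f) / self.noise_var
        self.b += f * q_hat / self.noise_var

    def choose(self, candidate_features):
        """Sample n weight vectors, average them, and pick the highest predicted Q."""
        cov = np.linalg.inv(self.precision)
        mean = cov @ self.b
        w = self.rng.multivariate_normal(mean, cov, size=self.n_samples).mean(axis=0)
        q_values = [float(w @ np.asarray(f, dtype=float)) for f in candidate_features]
        return int(np.argmax(q_values))
```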
**Gradient ascent through the strategy space.** According to the REINFORCE model (Jain et al., 2019), which is based on the REINFORCE algorithm (Williams, 1992), metacognitive learning proceeds by gradient ascent through the space of possible planning strategies. When a plan is executed and its outcomes are observed, the weights \(\mathbf{w}\) representing the strategy are adjusted in the direction of the gradient of the return1, that is
Footnote 1: The return is the sum of the rewards along the chosen path minus the cost of the performed planning operations.
\[\mathbf{w}\leftarrow\mathbf{w}+\alpha\cdot\sum_{t=1}^{O}\gamma^{t-1}\cdot r_{meta}(b_{t},c_{t})\cdot\nabla_{\mathbf{w}}\ln\pi_{\mathbf{w}}(c_{t}|b_{t}), \tag{4}\]
where \(\gamma\) is the discount factor and \(O\) is the number of planning operations executed by the model on that trial. The learning rate \(\alpha\) is optimised using ADAM (Kingma and Ba, 2014). The REINFORCE model has three free parameters: \(\alpha\), \(\gamma\) and the inverse temperature \(\tau\), which are fit separately for each participant. The weights are initialised randomly.
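The following Python sketch conveys the flavour of such an update for a linear softmax policy over planning operations; the episode representation and function names are our own simplifications, not the authors' implementation, and plain gradient ascent stands in for ADAM.

```python
# Sketch of a REINFORCE-style update for a linear softmax policy over planning
# operations; `episode` is a list of (candidate_features, chosen_index, meta_reward)
# tuples collected on one trial.
import numpy as np

def grad_log_policy(w, feats, chosen, inv_temp):
    prefs = inv_temp * feats @ w
    probs = np.exp(prefs - prefs.max())
    probs /= probs.sum()
    # Gradient of log pi(chosen | features) for a linear softmax policy.
    return inv_temp * (feats[chosen] - probs @ feats)

def reinforce_update(w, episode, alpha, gamma, inv_temp):
    grad = np.zeros_like(w)
    for t, (feats, chosen, r) in enumerate(episode, start=1):
        grad += gamma ** (t - 1) * r * grad_log_policy(w, np.asarray(feats, float), chosen, inv_temp)
    return w + alpha * grad    # plain gradient ascent step (ADAM in the actual model)
```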
**Mental habit formation.** This model assumes that the only mechanism through which people's planning strategies change is the formation of mental habits. Following Miller et al. (2019) and Morris (2022), this model assumes that people's propensity to perform a given (type of) planning operation increases with the number of times they have performed it in the past. This is implemented as a softmax decision rule applied to a weighted sum of frequency-based features, including the number of previous clicks on the same node, the same branch, and the same level, respectively.
**Non-learning model.** This model does not perform any parameter updates and does not use habitual features.
### Extensions
We augmented the REINFORCE and LVOC models with three optional components: a two-stage hierarchical meta-decision-making process (_hierarchical meta-control_), metacognitive rewards for generating valuable information (_pseudo-rewards_), and deliberating about the value of termination when taking an action (_termination deliberation_).
**Hierarchical meta-control.** Previous research suggests that foraging decisions are made by two distinct decision systems: the ventromedial prefrontal cortex and the dorsal anterior cingulate cortex (Rushworth et al., 2012). We therefore developed an extension that first decides whether to continue planning (Stage 1) and then selects the next planning operation according to either the LVOC or the REINFORCE model (Stage 2) if it chose to continue planning in Stage 1. For Stage 1, our models consider three potential decision rules. Each decision rule is a tempered sigmoid function \(\sigma(x,\tau)=(1+e^{-\frac{x}{\tau}})^{-1}\) (Papernot et al., 2021). In each case, the function's argument \(x\) is a different function \(f(\mathbb{M})\) of the expected sum of rewards along the best path according to the information observed so far (\(\mathbb{M}=\max_{path}\mathbb{E}[R(\text{path})\mid b]\)). Concretely, the three stopping rules compare \(\mathbb{M}\) against a fixed threshold, a threshold that decreases with the number of clicks, and a threshold that tracks the outcomes of previous trials, respectively.
**Fixed threshold.** This decision rule probabilistically terminates planning when the normalized value of \(\mathds{M}\) reaches the threshold \(\eta\), that is \(P(C=\bot\mid b)=\sigma\left(\frac{\mathds{M}-v_{\min}}{v_{\max}-v_{\min}}-\eta,\tau\right)\), where \(v_{\min}\) and \(v_{\max}\) are the trial's lowest and highest possible returns, respectively.
**Decreasing threshold.** Building on the observation that the threshold of the resource-rational planning strategy decreases with the number of clicks (Callaway, Lieder, et al., 2018), this decision rule adjusts the threshold based on the number of clicks made so far (\(n_{c}\)), that is: \(P(C=\bot\mid b)=\sigma(\mathds{M}-e^{a}+e^{b}\cdot n_{c},\tau)\), where \(a\) and \(b\) are free parameters.
**Threshold based on past performance.** This decision rule models the idea that people learn what is good enough from experience. Concretely, this decision rule assumes that the threshold \(M\sim\mathcal{N}\Big{(}m;\frac{\eta}{\sqrt{n+1}}\Big{)}\) is a noisy estimate of the average \(m\) of their previous scores, that is \(P(C=\bot\mid b)=\sigma(\mathds{M}-M,\tau)\), where \(n\) is the number of trials and \(\eta\) is a free parameter. The probability distribution of the threshold is derived from the assumption that the threshold is an average of noisy memories of previous scores.
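A Python sketch of the three rules is given below; parameter names follow the text, while the handling of random numbers and the assumption that at least one previous score is available are our own simplifications.

```python
# Sketch of the three Stage-1 stopping rules, each a tempered sigmoid applied
# to a different function of M, the expected return of the best path under the
# current belief state.
import numpy as np

def tempered_sigmoid(x, tau):
    return 1.0 / (1.0 + np.exp(-x / tau))

def p_stop_fixed(M, v_min, v_max, eta, tau):
    return tempered_sigmoid((M - v_min) / (v_max - v_min) - eta, tau)

def p_stop_decreasing(M, n_clicks, a, b, tau):
    return tempered_sigmoid(M - np.exp(a) + np.exp(b) * n_clicks, tau)

def p_stop_past_performance(M, past_scores, eta, tau, rng):
    n = len(past_scores)                                  # assumes at least one previous score
    threshold = rng.normal(np.mean(past_scores), eta / np.sqrt(n + 1))
    return tempered_sigmoid(M - threshold, tau)
```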
**Pseudo-rewards.** The central role of reward prediction errors in reinforcement learning (Schultz, Dayan, & Montague, 1997; Glimcher, 2011) and the dearth of external reward in metacognitive learning (Hay, 2016) indicate that the brain might accelerate the learning process by producing additional metacognitive pseudo-rewards that convey the value of information produced by the last planning operation. Concretely, the pseudo-reward (PR) for transitioning from belief state \(b_{t}\) to \(b_{t+1}\) is the difference between the expected value of the path that the agent would have taken in the previous belief state \(b_{t}\) and the expected value of the best path in the new belief state \(b_{t+1}\): \(\text{PR}(b_{t},c,b_{t+1})=\mathds{E}[R_{\pi_{b_{t+1}}}|b_{t+1}]-\mathds{E}[R_{\pi_{b_{t}}}|b_{t+1}]\), where \(\pi_{b}(s)=\text{argmax}_{a}\mathds{E}_{b}[R\mid s,a]\) is the policy the agent will use to navigate the physical environment when its belief state is \(b\), and \(R\) is the expected value of the sum of the external rewards (e.g., the sum of rewards collected by moving through the planning task) according to the probability distribution \(b\).
**Termination deliberation.** If people engaged in rational metareasoning (Griffiths et al., 2019), they would calculate the expected value of acting on their current belief \(b\) from the information it encodes (_termination deliberation_). Alternatively, people might learn when to terminate through the same learning mechanism through which they learn to select between alternative planning operations (no termination deliberation).
### Model fitting
Combining the basic learning mechanisms with the model attributes resulted in 86 different models (see [https://osf.io/wz9uj/](https://osf.io/wz9uj/) for a list of all models). We fitted all models to 382 participants from both experiments by maximizing the likelihood function of the participants' click sequences using Bayesian optimization (Bergstra, Yamins, & Cox, 2013). The likelihood of a click sequence is the product of the likelihoods of the individual clicks.
### Model selection
After having fitted the models, we perform model selection using the Bayesian information criterion (BIC) (Schwarz, 1978) and Bayesian model selection (BMS). Concretely, we estimate the expected proportion of people who are best described by a given model (\(r\)) and the _exceedance_ probability \(\phi\) that this proportion is significantly higher than the corresponding proportion for any other model by using random-effects Bayesian model selection (Rigoux, Stephan, Friston, & Daunizeau, 2014; Stephan, Penny, Daunizeau, Moran, & Friston, 2009). To obtain equivalent conclusions for groups of models that share some feature, we perform family-level Bayesian model selection (Penny et al., 2010).
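For reference, the BIC used for this comparison is the standard one; a one-function Python sketch (with `log_lik` the maximized log-likelihood of a participant's click sequences, `n_params` the number of free parameters, and `n_obs` the number of modelled observations) is:

```python
# Standard Bayesian information criterion: lower values indicate a better
# trade-off between fit and model complexity.
import numpy as np

def bic(log_lik, n_params, n_obs):
    return n_params * np.log(n_obs) - 2.0 * log_lik
```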
## Results
The code and the BIC of all 86 models can be found in [https://osf.io/wz9uj/](https://osf.io/wz9uj/).
### Comparing all models for all participants
To examine which of the learning mechanisms can best explain human behavior, we grouped the models into 4 model families: non-learning, mental habit, LVOC, and REINFORCE models. We found that the model family whose members provided the best explanation for the largest number of participants was the REINFORCE family (see Table 1), whose models explained about 41.13% of the participants better than models from the other model families.
\begin{table}
\begin{tabular}{l c c}
Model family & \(r\) & \(\phi\) \\ \hline
Non-learning & 0.34 & 0.06 \\
Mental habit & 0.06 & 0 \\
LVOC & 0.20 & 0 \\
REINFORCE & 0.41 & 0.94 \\ \hline
\end{tabular}
\end{table}
Table 1: Family-level BMS for all participants
\begin{table}
\begin{tabular}{l c c}
Model family & \(r\) & \(\phi\) \\ \hline
Non-learning & 0.25 & 0 \\
Mental habit & 0.07 & 0 \\
LVOC & 0.21 & 0 \\
REINFORCE & 0.47 & 1 \\ \hline
\end{tabular}
\end{table}
Table 2: Family-level BMS for the learners only
Excluding the participants who showed no signs of learning reduced the proportion of participants best explained by the non-learning model to 25%, while the proportion of participants best explained by the learning models increased to 75%. REINFORCE models now explain the data from 47.41% of the learners better than the other models (see Table 2).
Comparing the learning models individually, the models that were best for the highest proportion of participants are the REINFORCE model with pseudo-reward and the plain REINFORCE model, followed by the plain LVOC model, and the mental-habit model (see Table 3).
To examine whether how people learn adaptive planning strategies (Experiment 1) and how people adapt their amount of planning (Experiment 2) can be largely explained by a single model, we combined the data from both experiments into one data set. For both experiments, the REINFORCE learning mechanism explains the largest proportion of participants (see Table 4 and 5). While a larger proportion of participants are best explained by the REINFORCE model with pseudo-reward in Experiment 1, Experiment 2 favors the plain REINFORCE model (see Table 6). BMS family-level comparison on models with pseudo-reward against models without pseudo-reward on both experiments combined yielded \(r=0.42,\phi=0.01\) for models with pseudo-reward and \(r=0.58,\phi=0.99\) for models without. Comparing the difference in BIC between the REINFORCE model with pseudo-rewards and its plain version for all learners revealed substantial evidence for the absence of pseudo-rewards in 104 out of 224 participants (\(\Delta\)BIC \(>3.2\) for 65 participants of Exp. 1 and 39 from Exp. 2) and substantial evidence for its presence in 104 other participants (\(\Delta\)BIC\(<-3.2\) for 75 participants of Exp. 1 and 29 from Exp. 2). The remaining 16 participants' absolute difference in BIC was less than 3.2. This suggests that about half of the participants (42%) might use intrinsically generated pseudo-rewards to inform the metacognitive learning, while the other half (58%) do not. A \(\chi^{2}\) test comparing the proportion of participants whose data is better explained by a model with pseudo-rewards between the two experiments yielded no significant difference (44% vs. 38%, \(\chi^{2}(3)=0.73,p=.87\); see Table 7). Therefore, the difference between the learning behavior of people who seemed to use versus not use pseudo-rewards cannot be explained by situational factors. This suggests that those differences reflect inter-individual differences.
Besides pseudo-rewards, family-level BMS that groups the models into two families, with and without a given attribute, suggests that models without hierarchical meta-control and without termination deliberation are preferred (see Table 8).
### How well can our best models capture the qualitative changes in people's planning strategies?
To examine whether the two most promising models - plain REINFORCE and REINFORCE with pseudo-reward - can explain all phenomena observed in Experiments 1 and 2, we simulated participants' behavior in the three conditions of Experiment 1 and the two conditions of Experiment 2 with the fitted model parameters. Figure 3 shows the increasing trend in the predicted level of resource-rationality over time across both experiments (Mann-Kendall test: all \(S>214\) and \(p<.01\) for both models and participants). This shows that our models can explain the observed increase in adaptiveness. Figure 2 shows the proportion of adaptive planning strategies in the first experiment. To determine whether a participant used an adaptive planning strategy on a given trial, we inspected the first click in each trial, which signals what kind of strategy was used. A first click on the farthest node signals the adaptive far-sighted strategy in the first condition; a first click on an immediate node signals the near-sighted strategy in the second condition; and a first click on the immediate and middle nodes signals best-first search in the third condition. Both models captured that people learned to rely increasingly often on adaptive strategies in the condition where far-sighted planning is beneficial (see Figure 1(a), Mann-Kendall test: increasing proportion of adaptive strategies for both models and participants; all \(S>383,p<.01\)).
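The Mann-Kendall statistics reported here can be reproduced with a short routine like the one below; it is a generic sketch of the test (two-sided normal approximation without tie corrections), not the exact implementation used for these analyses.

```python
import numpy as np
from scipy import stats

def mann_kendall(series):
    """Mann-Kendall trend test for a time series (e.g. per-trial proportions).

    Returns the S statistic (positive for an increasing trend, negative for a
    decreasing one) and a two-sided p-value from the normal approximation.
    """
    y = np.asarray(series, dtype=float)
    n = len(y)
    s = sum(np.sign(y[j] - y[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0          # variance ignoring ties
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2.0 * (1.0 - stats.norm.cdf(abs(z)))
    return s, p
```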
\begin{table}
\begin{tabular}{l c c c} Model & \(r\) & \(\phi\) & BIC \\ \hline REINFORCE with PR & 0.11 & 0.80 & 569.09 \\ Plain REINFORCE & 0.08 & 0.15 & 567.99 \\ LVOC & 0.07 & 0.03 & 589.76 \\ Mental-habit & 0.06 & 0.01 & 598.90 \\ \hline \end{tabular}
\end{table}
Table 3: Model-level BMS and BIC of the learning models for all learners across both experiments.
\begin{table}
\begin{tabular}{l c c} Model family & \(r\) & \(\phi\) \\ \hline Mental habit & 0.10 & 0 \\ LVOC & 0.40 & 0.17 \\ REINFORCE & 0.50 & 0.83 \\ \hline \end{tabular}
\end{table}
Table 4: Family-level BMS for Experiment 1

\begin{table}
\begin{tabular}{l c c} Model family & \(r\) & \(\phi\) \\ \hline Mental habit & 0.09 & 0 \\ LVOC & 0.19 & 0 \\ REINFORCE & 0.72 & 1 \\ \hline \end{tabular}
\end{table}
Table 5: Family-level BMS for Experiment 2

\begin{table}
\begin{tabular}{l c c} Model family & \(r\) & \(\phi\) \\ \hline PR & 0.30 & 0.02 \\ No PR & 0.62 & 0.98 \\ \hline \end{tabular}
\begin{tabular}{l c c} Model family & \(r\) & \(\phi\) \\ \hline PR & 0.38 & 0.02 \\ No PR & 0.62 & 0.98 \\ \hline \end{tabular}
\end{table}
Table 7: Family-level BMS analysis of pseudo-reward (PR) for Experiment 1 (left) and 2 (right)

\begin{table}
\begin{tabular}{l c c} Model family & \(r\) & \(\phi\) \\ \hline HR & 0.15 & 0 \\ Non-HR & 0.85 & 1 \\ \hline \end{tabular}
\begin{tabular}{l c c} Model family & \(r\) & \(\phi\) \\ \hline TD & 0.30 & 0 \\ No TD & 0.70 & 1 \\ \hline \end{tabular}
\end{table}
Table 8: Family-level BMS comparing attributes of hierarchical meta-control (HR) and termination deliberation (TD)
The plain REINFORCE model additionally captured that the participants also learned to use increasingly more adaptive strategies in the environment that favored near-sighted planning (see Figure 1(b); increasing trend for participants and plain REINFORCE: \(S>227,p<.01\); no trend for REINFORCE with pseudo-rewards: \(S=11,p=.89\)). Moreover, both models captured that participants appeared to use increasingly fewer adaptive strategies in the environment that favored best-first-search planning (see Figure 1(c); all \(S<-217;p<.01\)).2 Although the models captured these qualitative effects, from a quantitative perspective they underutilized adaptive strategies in the environments where near-sighted planning and best-first-search planning are most beneficial, but not in the environment that favored far-sighted planning.
Footnote 2: This might reflect shortcomings of the rule He, Jain, & Lieder (2021) used to classify people’s strategies in this environment.
Both models can partly capture the participants' learning behavior in Experiment 2. For the conditions where planning is beneficial, both models correctly predicted that the amount of planning would increase significantly over time (see Figure 3(a); Mann-Kendall test: all \(S>516,p<.01\)). For the condition where planning is less beneficial, the models predicted that the number of clicks would decrease to a nearly optimal level (see Figure 3(b), Mann-Kendall test: all \(S<-167,p<.01\)). Participants learned to decrease their amount of planning to an even greater extent and converged on planning less than the resource-rational strategy. This indicates that participants experience an additional cost that is not yet captured by our models.
## Discussion and further work
In this article, we tested 86 computational models of how people learn planning strategies against data collected in two experiments that tested different characteristics of metacognitive learning, namely the adaptation to different environment structures and the adaptation to different levels of planning costs. Overall, we found consistent evidence that the learning mechanism REINFORCE can largely capture the observed phenomena, like learning far-sighted planning strategies and adjusting the amount of planning. Moreover, we found that some people learn from self-generated pseudo-rewards for the value of information, whereas others do not. However, the REINFORCE models failed to learn short-sighted or best-first search planning strategies to the same extent as the participants. This observation, combined with the high proportion of non-learning models, suggests that there is still room for improvement. Furthermore, planning incurs cognitive costs above and beyond the cost of acquiring information (He & Lieder, 2022; Felso, Jain, & Lieder, 2020; Callaway et al., 2022). Therefore, further work can improve our models by incorporating these additional costs into the reward signals that the models learn from. |
2305.19108 | DisCLIP: Open-Vocabulary Referring Expression Generation | Referring Expressions Generation (REG) aims to produce textual descriptions
that unambiguously identifies specific objects within a visual scene.
Traditionally, this has been achieved through supervised learning methods,
which perform well on specific data distributions but often struggle to
generalize to new images and concepts. To address this issue, we present a
novel approach for REG, named DisCLIP, short for discriminative CLIP. We build
on CLIP, a large-scale visual-semantic model, to guide an LLM to generate a
contextual description of a target concept in an image while avoiding other
distracting concepts. Notably, this optimization happens at inference time and
does not require additional training or tuning of learned parameters. We
measure the quality of the generated text by evaluating the capability of a
receiver model to accurately identify the described object within the scene. To
achieve this, we use a frozen zero-shot comprehension module as a critique of
our generated referring expressions. We evaluate DisCLIP on multiple referring
expression benchmarks through human evaluation and show that it significantly
outperforms previous methods on out-of-domain datasets. Our results highlight
the potential of using pre-trained visual-semantic models for generating
high-quality contextual descriptions. | Lior Bracha, Eitan Shaar, Aviv Shamsian, Ethan Fetaya, Gal Chechik | 2023-05-30T15:13:17Z | http://arxiv.org/abs/2305.19108v1 | # DisCLIP: Open-Vocabulary Referring Expression Generation
###### Abstract
Referring Expressions Generation (REG) aims to produce textual descriptions that unambiguously identifies specific objects within a visual scene. Traditionally, this has been achieved through supervised learning methods, which perform well on specific data distributions but often struggle to generalize to new images and concepts. To address this issue, we present a novel approach for REG, named _DisCLIP_, short for discriminative CLIP. We build on CLIP, a large-scale visual-semantic model, to guide an LLM to generate a contextual description of a target concept in an image while avoiding other distracting concepts. Notably, this optimization happens at inference time and does not require additional training or tuning of learned parameters. We measure the quality of the generated text by evaluating the capability of a receiver model to accurately identify the described object within the scene. To achieve this, we use a frozen zero-shot comprehension module as a critique of our generated referring expressions. We evaluate DisCLIP on multiple referring expression benchmarks through human evaluation and show that it significantly outperforms previous methods on out-of-domain datasets. Our results highlight the potential of using pre-trained visual-semantic models for generating high-quality contextual descriptions.
Figure 1: **Referring Expressions Generation** (REG) aims to generate textual descriptions that clearly identify an object in a given scene, ignoring similar distractors. REG is harder than object (dense) captioning because it must take into account the context of other objects. For instance, a REG model must be capable of identifying unique features such as the color of a tie (on the left) or general descriptions such as _“the man without the hat”_ (right). The same object can have multiple distinct descriptions based on the context.
## 1 Introduction
Referring expressions (REs) are a key component of language communication. They allow people to refer to one entity in a complex visual scene in an unambiguous way. Comprehending and generating REs is essential for embedded agents that need to communicate with people about their environment. For instance, an autonomous vehicle may ask a passenger about their preferences - "Should I park in the nearest spot or the shaded one?" - or a robot assistant may wish to clarify an instruction: "Which chair should I get you: the black one or the white one?".
Significant effort has been devoted to RE _comprehension_, namely, training agents to understand referring expressions generated in natural language by people. The current paper focuses on a complementary task: Referring Expressions Generation (REG), namely, training agents to refer to entities in a visual scene using natural language. RE generation and comprehension are complementary; they can be viewed as the two roles in a game played by communicating players [1; 2; 3; 4]. First, a _speaker_ observes a scene that contains multiple objects and generates language that refers to a specific target object. Then, a second player, a _listener_, interprets the RE in the context of the same visual scene and selects the entity that is referred to. In this communication-as-a-game setup, the two players are cooperative and have a common objective: the speaker wishes to generate REs that are easily interpretable by the listener [5; 6].
REs generated by agents should satisfy two key properties: (1) discriminative - using attributes that point clearly to a unique object in the scene, and (2) intelligible - producing language that can be easily understood by people. Recent advances in NLP using web-scale corpora have been very successful in generating natural language, but datasets available for referring expressions are significantly smaller. As a result, current visual REG models are limited and do not transfer well to images outside the narrow domain they were trained on. In contrast,
Visual-Linguistic (VL) models, such as CLIP [7], and LLMs were trained on large text corpora encompassing a wide variety of expressions. Therefore, they provide a more versatile and general-purpose framework for REG. In addition, the vast scale of foundation models enables them to generalize effectively to new data, even in zero-shot scenarios.
Building on these advances, we propose an approach that leverages LLMs and large VL models. Our approach is based on two key components. First, we use a pre-trained CLIP as a listener to evaluate how well a text phrase corresponds to an object in a given scene. Second, we introduce a method for using CLIP in a discriminative manner across localized boxes. This optimization guides the text generation of an LLM at inference time. As such, it does not require any further training of learned parameters. Importantly, we avoid training the listener and speaker models jointly, because such training can lead to a "runaway" drift into a specialized language that is less natural for human interpretation. Furthermore, CLIP was trained on large text corpora, which allows for open-vocabulary generation that generalizes effectively to new vision and language domains.
Referring expressions have been traditionally separated into two types: relational ("the person on the left") and attribute-based ("the person with the hat"). This paper focuses on attribute-based REG because current VL models represent attributes much better than spatial relations.
This paper makes the following contributions. **(1)** We introduce the first approach to open-vocabulary visual referring expression generation, named _DisCLIP_ for discriminative CLIP. It generalizes to new data distributions and concepts, making it more versatile and adaptable to various applications. Notably, these results are achieved without any additional training or fine-tuning. **(2)** We put forward a method that utilizes foundation models trained on image-level descriptions for the generation of _contextual descriptions_. Such descriptions are costly to curate and are rarely available, even in large VL corpora. **(3)** We show through extensive experiments that DisCLIP outperforms supervised methods on out-of-domain datasets in varying learning setups. Importantly, our model produces descriptions that are more natural and accurate, according to human raters.
## 2 Related work
The REG task is often considered a proxy for pragmatic reasoning and naturally falls under the paradigm of a dialog. Effective communication and contextual language are further explored in the Rational
Speech Act (RSA) framework [8]. Accordingly, a common architecture is a speaker and a listener performing complementary tasks: REG and REC. A broad class of REG methods [9; 10; 11; 12; 13] rely on joint optimization of the speaker and the listener. The risk in such a pipeline is creating a "secret" language [5]: the speaker-listener architecture tends to overfit and struggles to generalize across domains. In contrast, our method does not require any training and depends entirely on inference-time decoding. Other notable works in that field include [6], which designs a loss optimized to steer image captioning towards describing the differences between two images. [4] suggest an emitter-suppressor (ES) architecture. [3; 4; 14] generate pragmatically informative captions by decoding general captioning models at test time to produce captions that discriminate target images from a given set of distractor images.
**Zero-Shot Image Captioning.** RE methods can be viewed as a special case of image captioning methods. Recent work on open-world image captioning combines the abilities of two large pre-trained models: CLIP and GPT. [15] suggests regularizing sequences produced by GPT2 to be semantically related to a given image with a CLIP-induced score. Concurrently, [16] proposed a similar technique that relies on gradient updates and optimization over a context cache. This improves accuracy but dramatically slows down inference. In later work, [17] improve efficiency and inference speed by updating the context of a pseudo-token over different iterations in which the model generates full sentences. [18] suggests producing meaningful captions by initializing GPT2 with visual prefix embeddings, which are learned by employing a simple mapping network from CLIP embedding space to GPT2 space. [19] proposed to formalize the CLIP score as a new standard metric for image captioning.
**Referring Expressions Generation (REG).** As far as we know, the current state-of-the-art approach in Referring Expression Generation (REG) is presented in [20]. Their method involves incorporating pragmatic reasoning into context-agnostic generation models during inference. To generate pragmatically informative captions, they decode general captioning models at test time, producing captions that discriminate the target image from a set of distractor images. Decoding methods include criteria such as likelihood (Beam Search) and informativity (RSA decoding) [14]. Although some models achieve state-of-the-art (SotA) results on certain subsets of RefCOCO/+/g, there is no single model that consistently outperforms others across the board. A key difference is that our approach is designed to perform on objects within the same visual scene, rather than a curated set of distractors.
[21] studies REs from the standpoint of object saliency. It has been observed that salient objects can be referred to using short and simple phrases, whereas less salient objects require more complex descriptions that often involve relationships with other objects within the scene. While our work does not specifically target this aspect, we draw comparisons with this baseline due to its zero-shot REG framework. Another work by [22; 10] explicitly learns visual attributes and uses it as a supervision signal for REG-REC modules. Recent work [23] achieves impressive results, but their approach is supervised and works in domain. Our focus is on out-of-domain generalization.
## 3 Workflows for REs generation and comprehension
Visual referring expression involves two complementary tasks. First, _generation_, where a speaker module is given an image and a bounding box of a target object and has to create a natural language expression that refers to that object. Second, _comprehension_ where a listener module parses the RE, with the goal of selecting the correct object in a given image.
There are two main strategies for training these modules. The first strategy is to pre-train a listener, freeze it, then use it as a frozen evaluator to measure the quality of predicted REs (e.g., [9; 20]). Here, since the listener is fixed, it is used to calculate a static score, like box-selection accuracy, which can be readily used for training the speaker. The second approach is to train both modules jointly [23; 5]. This raises two main difficulties. First, since the generated language is discrete, passing gradients from the listener to the speaker is non-trivial and involves approximated optimization like using a Gumbel softmax or straight-through [5]. Second, unless restricted, the two modules tend to drift away from natural language and pass information that is unintelligible to people [5]. To alleviate this issue, some researchers use language quality metrics like BLEU against a ground truth set. Unfortunately, these measures tend to be highly insufficient [24; 25]. In all these cases, methods are trained on
paired data of images, boxes, and ground-truth referring expressions collected from human raters. A potential issue with these workflows is that they tend to be limited to the distribution of the data they are trained on. Indeed, our experiments below show that when tested on new data, they may collapse and yield very low accuracy.
How can we progress towards open-world referring expression generation? We wish to provide dataset-agnostic models that can provide referring expressions that are both natural and informative even for images outside the training distribution. To this end, we propose to use large pre-trained image captioning models [7]. These models are trained on massive web datasets and, as such, cover the long tail of visual and semantic content.
## 4 Model
The DisCLIP model is composed of two branches (Fig 2): a language branch where a Large Language Model (LLM) generates a sequence of words (Fig. 2, green box), and a visual branch that guides generation to be close to an input image in a visual-semantic space. In an iterative process, we maximize the similarity [19] between the generated sequence \(x_{<t}\) at each timestep and _the target region_ in the image, and minimize the similarity to a set of distractor regions (namely, other objects). Our work is closely related to [15], who put forward a similar process for zero-shot image captioning.
Let \(x_{<t}\) be a sequence generated by an LLM at time \(t\). Given input image \(\mathcal{I}\), and \(V^{(k)}\) (top \(k\)) candidate tokens from the LM, the probability of candidate token \(v\) is computed as
\[f(v|\mathcal{I},x_{<t},V^{(k)})=\frac{e^{CLIP(I,[x_{<t}:v])}}{\sum_{z\in V^{(k )}}e^{CLIP(I,[x_{<t}:z])}}\quad, \tag{1}\]
[\(:\)] denotes the concatenation operation, s.t. \(x_{<t}:v\) represents the generated sequence so far, together with the current token. \(CLIP(I,[x_{<t}:v])\) is the CLIP similarity score of an image \(I\) and text \([x_{<t}:v]\). In our case, given an image \(\mathcal{I}\) containing \(n\) objects \(O=\{o_{1},\dots,o_{n}\}\), we require that the generated sequence maximize the CLIP similarity with a target object \(O^{+}\), while minimizing CLIP similarity to a set of distractors \(O^{-}=\{o_{1}^{-},\dots,o_{n-1}^{-}\}\). The total score is defined as
Figure 2: **DisCLIP architecture for REG.** DisCLIP score encourages the language model (green) to generate text that is semantically related to the target object. The output sequence of the LM is encoded by CLIP’s text encoder (purple). CLIP image encoder (blue) is used to encode representations of the target object (\(v_{c}^{+},v_{b}^{+}\)) as well as the set of other objects in the scene (\(v^{-}\)). At each timestep, we maximize CLIP similarity with the target object and minimize similarity with a set of distractors.
\[\mathcal{L}_{DisCLIP}=\lambda\bigg(\overbrace{CLIP(O^{+},[x_{<t}:v])}^{S^{+}}\bigg)+(1-\lambda)\bigg(\frac{-1}{N}\sum_{i}\overbrace{CLIP(O_{i}^{-},[x_{<t}:v])}^{S^{-}_{i}}\bigg). \tag{2}\]
The hyper-parameter \(\lambda\in[0,1]\) controls how strongly the negative set affects generation. When \(\lambda=1\), negatives have no effect at all, and smaller values are expected to create increasingly discriminative text. The full objective includes terms designed for maintaining language fluency and consistency with the context tokens. For clarity, we refer to these terms as \(\mathcal{L}_{lang}\), and describe them in detail in the Appendix E.
\[v=\operatorname{argmax}\Big\{\mathcal{L}_{lang}+\beta\cdot\mathcal{L}_{DisCLIP}\Big\}. \tag{3}\]
Finally, the hyperparameter \(\beta\) controls the trade-off between the language score and the DisCLIP vision score.
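To make Eqs. (1)-(3) concrete, a minimal sketch of one decoding step is given below; it assumes a generic `clip_score(visual, text)` similarity helper, per-token language-model scores, and illustrative default values of \(\lambda\) and \(\beta\), so the names and settings are placeholders rather than the actual implementation (the region representations are described next).

```python
import numpy as np

def disclip_step(lang_scores, clip_score, image, target, distractors, prefix,
                 lam=0.7, beta=1.0):
    """One decoding step of DisCLIP-style guidance.

    lang_scores: dict mapping each top-k candidate token to its LM score (L_lang).
    clip_score(visual, text) -> float: assumed CLIP image-text similarity helper.
    image: the full input image (Eq. 1); target / distractors: object regions (Eq. 2).
    Returns the selected token (Eq. 3) and the Eq. (1) distribution over candidates.
    """
    tokens = list(lang_scores)
    texts = [" ".join(prefix + [v]) for v in tokens]

    # Eq. (1): softmax over candidates of the CLIP similarity with the image
    s_img = np.array([clip_score(image, t) for t in texts])
    probs = np.exp(s_img - s_img.max())
    probs /= probs.sum()

    # Eq. (2): reward similarity to the target object, penalise the mean
    # similarity to the distractor objects
    s_pos = np.array([clip_score(target, t) for t in texts])
    s_neg = np.array([np.mean([clip_score(o, t) for o in distractors]) if distractors else 0.0
                      for t in texts])
    l_disclip = lam * s_pos - (1.0 - lam) * s_neg

    # Eq. (3): combine with the language score and pick the best candidate
    total = np.array([lang_scores[v] for v in tokens]) + beta * l_disclip
    return tokens[int(total.argmax())], dict(zip(tokens, probs))
```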
**Representing boxes.** In contrast with standard captioning, the RE text-generation task has to: (i) describe a specific object in the scene instead of the entire image, which is challenging since CLIP was trained on image-level descriptions; and (ii) produce text that is contextual, which requires gathering information about the rest of the objects in the scene. This impacts how we experiment with the visual representation of the objects.
To capture both local and global information we create different representations for each object in the scene. The first is simply a crop of the object's box, and the second is a blurred version of the image, except for the target region. We discuss other representations in Appendix B.
Object representations are passed to the CLIP image encoder and used to compute the similarity to the generated text at time \(t\). For the set of negatives, we sum over the similarity scores \(S_{i}\),
\[S_{i}=\delta\cdot Blur(O_{i})+(1-\delta)\cdot Crop(O_{i})\quad, \tag{4}\]
where \(\delta\) controls the trade-off between the two representations, as illustrated in Fig 2.
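The per-object similarity in Eq. (4) can be sketched as a \(\delta\)-weighted mix of the two representations; the helper name, the assumed `clip_similarity(image, text)` function, and the default \(\delta\) below are illustrative assumptions rather than the paper's exact code.

```python
def region_similarity(clip_similarity, crop_img, blur_img, text, delta=0.5):
    """Eq. (4): delta-weighted mix of the CLIP similarities computed from the
    blurred-context representation and the plain crop of an object's box."""
    return (delta * clip_similarity(blur_img, text)
            + (1.0 - delta) * clip_similarity(crop_img, text))
```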
## 5 Experiments
We evaluate DisCLIP and the baselines in several experimental setups. We begin by showing out-of-domain performance measured using a pre-trained listener on three datasets: Flickr30k-Entities, RefCLEF, and RefGTA. Next, we put forward human evaluation results on our generated REs compared to the baselines. We also show that DisCLIP performs reasonably well in in-domain benchmarks compared to supervised methods. To encourage future research and reproducibility, we will make our source code publicly available.
**Data.** We used the following datasets. Since our method does not require training, we only used the validation and test splits in evaluations. **(1) RefCOCO**[2] contains 142,209 referring expressions for 50,000 objects in 19,994 images. **(2) RefCOCO+**[2] contains 141,564 referring expressions for 49,856 objects in 19,992 images. This dataset focuses on objects' appearance, rather than spatial relations. In both RefCOCO and RefCOCO+, Test A contains references to humans, and Test B references to other object types. **(3) RefCOCOg (Google RefExp)**[11] contains 85,474 referring expressions for 54,822 objects in 26,711 images and contains longer and more complex expressions. **(4) RefCLEF** (ReferIt) [26] is a dataset containing _complex_ photographs of real-world cluttered scenes, with 10K test images, 60K references in the train/val set, and 60,105 in the test set. **(5) RefGTA**[21] contains synthetic images from the Grand Theft Auto (GTA) videogame, with 6504 test images. All REs correspond to people, focusing on relational expressions. **(6) Flickr30k-Entities**[27] provides a comprehensive ground-truth correspondence between regions in images and phrases in captions. It contains 244K coreference chains, with 275K corresponding bounding boxes. We excluded "group" references (e.g. _People are outside having flags_), resulting in 1966 images and 4597 references in the validation set and 4601 in the test set.
**Baselines.** We compared our approach with the following baselines, using their models trained on RefCOCO+: **(1) Schutz et al. 2021**[20] adopt the Emitter-Suppressor (ES) framework of [4]. A speaker (E) models a caption for a target image \(I_{t}\) in conjunction with a listener function (S) that rates how discriminative the utterance is with regard to a distractor image; \(\lambda\) is a parameter that weighs the suppressor. We compare with \(\lambda=0.5\), their best model. **(2) Tanaka et al. 2019 [21]** suggested end-to-end training of an encoder-decoder. Based on low-level visual representations as the input, various aspects of the task are modeled jointly, e.g. lexicalization and content selection. **(3) Licheng Yu et al. 2017 [13]** proposed an end-to-end trained listener-speaker for the RE task. They also added a discriminative reward-based module (reinforcer) to guide the sampling of more discriminative expressions and further improve the final model.
**Evaluation metrics.** Standard evaluation metrics for REs like BLEU or CIDEr [20; 15] focus on agreement with ground-truth expressions. In the case of open-text generation, these metrics do not reflect true performance because LLMs produce rich natural sentences whereas GT phrases tend to be terse. To address this, we use two evaluation approaches: human raters and a frozen REC model - a "listener". We follow the protocol in [28; 29] and measure listener accuracy as the percentage of instances for which the predicted box has an IoU of at least 0.5 with the ground-truth box, a standard metric used to evaluate RE methods. For consistency with previous works in the field, we also report standard language metrics, provided in Appendix A.
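The listener-accuracy criterion can be computed as in the sketch below, which assumes boxes in (x1, y1, x2, y2) format; it illustrates the IoU \(\geq 0.5\) rule rather than reproducing the evaluation code used for the experiments.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def listener_accuracy(predicted_boxes, gt_boxes, threshold=0.5):
    """Fraction of instances whose predicted box overlaps the GT box with IoU >= threshold."""
    hits = [iou(p, g) >= threshold for p, g in zip(predicted_boxes, gt_boxes)]
    return sum(hits) / len(hits)
```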
## 6 Results
**Out-of-domain generalization.** We now evaluate all methods in an out-of-domain setup. We trained the baseline methods on RefCOCO+, which captures attribute-based referrals. DisCLIP requires no training, but we tuned its hyperparameters \(\delta\) and \(\lambda\) on a subset of 200 random samples from the validation split (see Figure 5). In the evaluations below, we used the "natural" listener that is "paired" with the speaker, in the sense that the listener was used either when training or evaluating the speaker in their original papers.
Table 1 shows results on the out-of-domain datasets RefCLEF, RefGTA, and Flickr30k-Entities, where DisCLIP significantly outperforms the baseline methods.
**Independent pre-trained listener.** The performance degradation of baselines observed in Table 1 might result from the domain shift that the listener (the REC model) experiences, rather than the speaker - which is our prime interest. We further test a single pre-trained REC model as a common
\begin{table}
\begin{tabular}{l c c c c c c c} & \multicolumn{2}{c}{RefClef} & \multicolumn{2}{c}{RefGTA} & \multicolumn{2}{c}{Flickr30 Entities} \\ \cline{3-8} & trained on & Test A & Test B & Val & Test & Val & Test \\ \hline
**Supervised methods** & & & & & & & \\ Schutz et al.[20] & refCOCO+ & 26.0 & 18.2 & 11.5 & 11.8 & 31.7 & 32.0 \\ Tanaka et al. [21] & refCOCO+ & 27.0 & 33.4 & 52.5 & 53.2 & 34.6 & 39.6 \\ Licheng Yu et al. [13] & refCOCO+ & 38.0 & 41.4 & 31.2 & 31.8 & 50.9 & 49.0 \\ \hline
**Open-Vocabulary** & & & & & & & \\ DisCLIP (ours) + ReCLIP [28] & & 66.2 & 68.6 & 58.0 & 56.9 & 77.9 & 78.8 \\ DisCLIP-HPT (ours) + ReCLIP [28] & & **83.4** & **85.4** & **73.4** & **73.6** & **89.2** & **91.2** \\ \hline \end{tabular}
\end{table}
Table 1: **Out-of-domain generalization.** Listener accuracy on three different datasets, RefClef, RefGTA, and Flickr30k-Entities.
\begin{table}
\begin{tabular}{l c c c c c c c} & \multicolumn{2}{c}{RefClef} & \multicolumn{2}{c}{RefGTA} & \multicolumn{2}{c}{Flickr30 Entities} \\ \cline{3-8} & trained on & Test A & Test B & Val & Test & Val & Test \\ \hline GT RefExp & & 65.5 & 64.4 & 40.3 & 40.6 & 72.6 & 73.9 \\ \hline Schutz et al.[20] & refCOCO+ & 34.8 & 26.4 & **40.8** & **40.9** & **40.7** & **40.6** \\ Tanaka et al. [21] & refCOCO+ & 22.8 & 20.4 & 38.9 & 40.2 & 32.0 & 31.1 \\ Licheng Yu et al. [13] & refCOCO+ & 27.6 & 22.0 & 24.8 & 25.2 & 31.8 & 31.1 \\ \hline DisCLIP (ours) & & 35.0 & 29.8 & 33.0 & 32.6 & 37.0 & 36.7 \\ DisCLIP-HPT (ours) & & **36.2** & **30.8** & 33.9 & 33.3 & 36.3 & 35.9 \\ \hline \end{tabular}
\end{table}
Table 2: Evaluation of OOD with an independent listener module (mDETR)
listener to evaluate all different "speakers", in an identical way. For that listener, we choose mDETR [29]. It is an end-to-end modulated detector that detects objects in an image conditioned on a raw text query. The results are presented in Table 2. DisCLIP outperforms the baselines on the RefClef dataset and is competitive on RefGTA and Flickr30k entities.
To understand this difference, we note that mDETR was fine-tuned on RefCOCO/+/g. Presumably, it became tuned to short sentences and performs worse on rich natural sentences. Indeed, from a qualitative error analysis, we find that mDETR makes more mistakes with long sentences, potentially causing a bias against DisCLIP and favoring the baselines. See qualitative examples in Fig. 3.
**Human evaluations on OOD REs.** Given the above limitations of out-of-the-box listeners as evaluators, as well as traditional language metrics, we follow up with evaluation by human raters. We generated REs for 100 random samples from three out-of-domain datasets, and sent each RE to three unique raters. Given a textual description (generated by us or the baselines), participants are asked to choose one out of \(n\) candidate boxes that best matches the RE (details in Appendix F). Table 3 shows that human raters prefer phrases generated by the DisCLIP model across all out-of-domain datasets by a large margin. Our method generates more diverse and natural phrases compared to baseline methods, as shown by the qualitative examples in Sec. F.1 of the Appendix.
\begin{table}
\begin{tabular}{l c c c c c c c c} & & \multicolumn{3}{c}{In domain} & \multicolumn{3}{c}{GT Label shift} \\ \cline{3-10} & & \multicolumn{3}{c}{RefCOCO+} & \multicolumn{3}{c}{RefCOCO} & \multicolumn{3}{c}{RefCOCOg} \\ \cline{3-10} & trained on & Val & Test A & Test B & Val & Test A & Test B & Val & Test \\ \hline
**Supervised methods** & & & & & & & & & \\ Schutz et al. [20] & refCOCO+ & 58.3 & 68.4 & 48.2 & 58.2 & 68.4 & 48.1 & 62.1 & 62.5 \\ Tanaka et al. [21] & refCOCO+ & 65.8 & 70.9 & 62.5 & 65.8 & 70.9 & 62.2 & 72.0 & 71.4 \\ Licheng Yu et al. [13] & refCOCO+ & **79.2** & **82.9** & 75.0 & **79.1** & **82.9** & **74.6** & **86.1** & 85.7 \\ \hline
**Open-Vocabulary** & & & & & & & & \\ DisCLIP (ours) + ReCLIP [28] & & 67.3 & 70.1 & 64.7 & 67.2 & 70.1 & 64.5 & 72.8 & 74.9 \\ DisCLIP-HPT (ours) + ReCLIP [28] & & 78.6 & 80.2 & **77.2** & 76.5 & 80.2 & 73.7 & 85.1 & **86.5** \\ \hline \end{tabular}
\end{table}
Table 4: **In-domain accuracy** of models tested on three variants of RefCOCO. Each method uses paired (jointly trained) speaker and listener. All datasets have the same distribution of images, but GT labels shift between datasets.
Figure 3: An mDETR listener fails more with long natural sentences. In many cases, mDETR predicts a box (blue) that has a high overlap with GT box (green), even when the captions are completely unrelated to the image like in the two examples on the right. On the other hand, it often misses valid clues in the textual descriptions.
**In-domain referring expression.** DisCLIP is designed for the out-of-domain and open-vocabulary setup. For completeness, we also tested its accuracy on the in-domain datasets RefCOCO/+/g in Table 4. Baseline models had both their listener and speaker trained on RefCOCO+ (attribute-based REs). There are also versions of the baseline models that were trained on RefCOCO, but since it is focused on spatial phrases, this comparison is less relevant to our task, which focuses on attribute-based REs. Table 4 shows that DisCLIP stays competitive with the supervised baselines on all the in-domain datasets. Qualitative examples from both in- and out-of-domain are shown in Fig. 4.
**Limitations.** DisCLIP is successful, but it is also important to address its limitations. First, CLIP has notoriously poor sensitivity to spatial relations. As a result, the expressions generated by our model use attribute-based REs rather than relation-based REs, like "bike on right". Second, our language generation is very simple, generating the expression token by token. It is plausible that smarter models for expression generation may improve performance. Given that DisCLIP does not rely on any training or fine-tuning procedures, using better foundation models in the future is expected to yield better REG using similar DisCLIP inference.
## 7 Conclusion
In this work, we present a novel method, named DisCLIP, to generate discriminative referring expressions in an open-world setting. Instead of training a model for one specific dataset, we leverage large pre-trained foundation models (CLIP, GPT2). DisCLIP achieves significant improvement over baseline models trained on different datasets, showing robustness to the domain shift occurring across datasets.
## Acknowledgements
This study was funded by a grant to GC from the Israel Science Foundation (ISF 737/2018), and by an equipment grant to GC and Bar-Ilan University from the Israel Science Foundation (ISF 2332/18). Lior Bracha is supported by a PhD fellowship in data science from the Israeli national council of higher education.
|
2310.05797 | In-Context Explainers: Harnessing LLMs for Explaining Black Box Models | Recent advancements in Large Language Models (LLMs) have demonstrated
exceptional capabilities in complex tasks like machine translation, commonsense
reasoning, and language understanding. One of the primary reasons for the
adaptability of LLMs in such diverse tasks is their in-context learning (ICL)
capability, which allows them to perform well on new tasks by simply using a
few task samples in the prompt. Despite their effectiveness in enhancing the
performance of LLMs on diverse language and tabular tasks, these methods have
not been thoroughly explored for their potential to generate post hoc
explanations. In this work, we carry out one of the first explorations to
analyze the effectiveness of LLMs in explaining other complex predictive models
using ICL. To this end, we propose a novel framework, In-Context Explainers,
comprising of three novel approaches that exploit the ICL capabilities of LLMs
to explain the predictions made by other predictive models. We conduct
extensive analysis with these approaches on real-world tabular and text
datasets and demonstrate that LLMs are capable of explaining other predictive
models similar to state-of-the-art post hoc explainers, opening up promising
avenues for future research into LLM-based post hoc explanations of complex
predictive models. | Nicholas Kroeger, Dan Ley, Satyapriya Krishna, Chirag Agarwal, Himabindu Lakkaraju | 2023-10-09T15:31:03Z | http://arxiv.org/abs/2310.05797v4 | # Are Large Language Models Post Hoc Explainers?
###### Abstract
Large Language Models (LLMs) are increasingly used as powerful tools for a plethora of natural language processing (NLP) applications. A recent innovation, in-context learning (ICL), enables LLMs to learn new tasks by supplying a few examples in the prompt during inference time, thereby eliminating the need for model fine-tuning. While LLMs have been utilized in several applications, their applicability in explaining the behavior of other models remains relatively unexplored. Despite the growing number of new explanation techniques, many require white-box access to the model and/or are computationally expensive, highlighting a need for next-generation post hoc explainers. In this work, we present the first framework to study the effectiveness of LLMs in explaining other predictive models. More specifically, we propose a novel framework encompassing multiple prompting strategies: i) Perturbation-based ICL, ii) Prediction-based ICL, iii) Instruction-based ICL, and iv) Explanation-based ICL, with varying levels of information about the underlying ML model and the local neighborhood of the test sample. We conduct extensive experiments with real-world benchmark datasets to demonstrate that LLM-generated explanations perform on par with state-of-the-art post hoc explainers using their ability to leverage ICL examples and their internal knowledge in generating model explanations. On average, across four datasets and two ML models, we observe that LLMs identify the most important feature with 72.19% accuracy, opening up new frontiers in explainable artificial intelligence (XAI) to explore LLM-based explanation frameworks.
## 1 Introduction
Over the past decade, machine learning (ML) models have become ubiquitous across various industries and applications. With their increasing use in critical applications (_e.g.,_ healthcare, financial systems, and crime forecasting), it becomes essential to ensure that ML developers and practitioners understand and trust their decisions. To this end, several approaches (Ribeiro et al., 2016, 2018, Smilkov et al., 2017, Sundararajan et al., 2017, Lundberg & Lee, 2017, Shrikumar et al., 2017) have been proposed in the explainable artificial intelligence (XAI) literature to generate explanations for understanding model predictions. However, these explanation methods are highly sensitive to changes in their hyperparameters (Yeh et al., 2019, Bansal et al., 2020), require access to the underlying black-box ML model (Lundberg & Lee, 2017, Ribeiro et al., 2016), and/or are often computationally expensive (Situ et al., 2021), thus impeding reproducibility and undermining the trust of relevant stakeholders.
More recently, generative models such as Large Language Models (LLMs) (Radford et al., 2017) have steered ML research into new directions and shown exceptional capabilities, allowing them to surpass state-of-the-art models at complex tasks like machine translation (Hendy et al., 2023), language understanding (Brown et al., 2020), commonsense reasoning (Wei et al., 2022b, Krishna et al., 2023), and coding tasks (Bubeck et al., 2023). However, there is very little work on systematically analyzing the reliability of LLMs as explanation methods. While recent research has used LLMs to explain what patterns in a text cause a neuron to activate, these approaches simply describe correlations between the network input and specific neurons and do not explain what causes model
behavior at a mechanistic level (Bills et al., 2023). Thus, the ability of LLMs to act as reliable explainers and improve the understanding of ML models lacks sufficient exploration.
**Present work.** In this work, we present the first framework to study the effectiveness of LLMs in explaining other predictive models (see Fig. 1). More specifically, we introduce four broad prompting strategies -- Perturbation-based ICL, Prediction-based ICL, Instruction-based ICL, and Explanation-based ICL -- for generating post hoc explanations using LLMs. Our first three strategies entail providing local neighborhood samples and labels of a given instance whose prediction we want to explain, before asking an LLM to identify features that are key drivers in the model's predictions. In our last approach, we leverage the in-context learning (ICL) (Liu et al., 2023b) behavior of LLMs by providing a small set of instances and their corresponding explanations (output by state-of-the-art post hoc explanation methods) as input to an LLM and ask it to generate feature importance-based explanations for new samples. We also explore different prompting and design choices, such as increasing the level of information in each, to generate more faithful explanations using LLMs.
We conduct extensive experimentation with four benchmark datasets, two black-box models, and two GPT models to analyze the efficacy of our proposed framework. Our empirical studies reveal the following key findings. 1) LLMs, on average, accurately identify the most important feature (top-\(k\)=1) with 72.19% accuracy across different datasets, with performance drop for larger values of top-\(k\) features. 2) LLMs can mimic the behavior of six state-of-the-art post hoc explanation methods using the proposed Explanation-based ICL prompting strategy and only four ICL samples. On average, LLMs behave as post hoc explainers by providing explanations that are on par with existing methods, such as LIME and gradient-based methods, in terms of their faithfulness. 3) LLMs struggle to retrieve relevant information from longer prompts, resulting in a decrease in the faithfulness of the explanations generated using a large set of ICL samples. 4) Our proposed framework paves the way for a new paradigm in XAI research, where LLMs can aid in explaining black-box model predictions.
## 2 Related Works
Our work lies at the intersection of post hoc explanations, large language models, and in-context learning, which we discuss below.
**Post Hoc Explanations.** The task of understanding model predictions has become increasingly intricate with the growing popularity of complex ML models (Doshi-Velez & Kim, 2017) due to their inherent black box nature, which makes it difficult to interpret their internal reasoning. To this end, a plethora of feature attribution methods (commonly referred to as post hoc explanation methods) have been proposed to provide explanations for these models' predictions. These explanations are predominantly presented in the form of feature attributions, which highlight the importance of each input feature on the model's prediction. Broadly, post hoc explainers can be divided into perturbation-based and gradient-based methods. While perturbation-based methods (Ribeiro et al., 2016; Lundberg & Lee, 2017; Zeiler & Fergus, 2014) leverage perturbations of the given instance to construct an interpretable approximation of the black-box model behavior, gradient-based methods (Smilkov et al., 2017; Sundararajan et al., 2017) leverage gradients _w.r.t._ the given instance to explain model predictions. In this work, we primarily focus on state-of-the-art local post hoc explainers, _i.e.,_ methods explaining individual feature importance for model predictions of individual instances.
**Large Language Models.** LLMs have seen exponential growth in recent years, both in terms of their size and the complexity of tasks they can perform (Radford et al., 2017). Recent advances in LLMs like GPT-4 (OpenAI), Bard (Google), Claude-2 (Anthropic) and Llama-2 (Meta) are
Figure 1: **Overview of our framework.** Given a dataset and model to explain, we provide 1) different prompting strategies to generate explanations using LLMs, 2) functions to parse LLM-based explanations, 3) utility functions to support new LLMs, and 4) diverse performance metrics to evaluate the faithfulness of explanations.
changing the paradigm of NLP research and have led to their widespread use across applications spanning machine translation (Vaswani et al., 2017), question-answering (Brown et al., 2020), text generation (Radford et al., 2017), and medical data records (Lee et al., 2020; Alsentzer et al., 2019). In this work, we, for the first time, explore the use of LLMs in explaining other predictive models.
**In-context Learning.** While scaling up language models has led to high performance and generalization across numerous tasks (Wei et al., 2022), it has also increased their parameter sizes and the computational cost of additional fine-tuning on new downstream tasks. To alleviate this, recent works have introduced _in-context learning_ (ICL), which allows an LLM to perform well on new tasks by simply using a few task samples in the prompt (Liu et al., 2023). Despite their effectiveness in enhancing the performance of LLMs, these methods have not been thoroughly explored for their potential to generate post hoc explanations. In this work, we investigate the utility of LLMs in generating post hoc explanations by leveraging their in-context learning abilities.
## 3 Our Framework
Next, we describe our framework that aims to generate explanations using LLMs. To achieve this goal, we outline four distinct prompting strategies -- _Perturbation-based ICL_ (Sec. 3.1), _Prediction-based ICL_ (Sec. 3.2), _Instruction-based ICL_ (Sec. 3.3), and _Explanation-based ICL_ (Sec. 3.4).
**Notation.** Let \(f:\mathbb{R}^{d}\rightarrow[0,1]\) denote a black-box ML model that takes an input \(\mathbf{x}\in\mathbb{R}^{d}\) and returns the probability of \(\mathbf{x}\) belonging to a class \(c\in C\) and the predicted label \(\mathbf{y}\). Following previous XAI works (Ribeiro et al., 2016; Smilkov et al., 2017), we randomly sample points from the local neighborhood \(\mathcal{N}_{\mathbf{c}}\) of the given input \(\mathbf{x}\) to generate explanations, where \(\mathcal{N}_{\mathbf{c}}=\mathcal{N}(\mathbf{x},\sigma^{2})\) denotes the neighborhood of perturbations around \(\mathbf{x}\) using a Normal distribution with mean 0 and variance \(\sigma^{2}\).
### Perturbation-based ICL
In the Perturbation-based ICL prompting strategy, we use an LLM to explain \(f\), trained on tabular data, by querying the LLM to identify the top-\(k\) most important features in determining the output of \(f\) in a rank-ordered manner. To tackle this, we sample input-output pairs from the neighborhood \(\mathcal{N}_{\mathbf{c}}\) of \(\mathbf{x}\) and generate their respective strings following a serialization template; for instance, a perturbed sample's feature vector \(\mathbf{x}^{\prime}=[0.058,0.632,-0.015,1.012,-0.022,-0.108]\), belonging to class 0 in the COMPAS dataset, is converted into a natural-language string as:
```
# Serialization template
Input: A = 0.058, B = 0.632, C = -0.015, D = 1.012, E = -0.022, F = -0.108
Output:
```
While previous post hoc explainers suggest using a large number of neighborhood samples (Ribeiro et al., 2016; Smilkov et al., 2017), it is impractical to provide all samples from \(\mathcal{N}_{\mathbf{c}}\) in the prompt for an LLM due to their constraint on the maximum context length and performance loss when given more information (Liu et al., 2023). Consequently, we select \(n_{\text{ICL}}\) samples from \(\mathcal{N}_{\mathbf{c}}\) to use in the LLM's prompt. In the interest of maintaining a neutral and fundamental approach, we employ two primary sampling strategies, both selecting balanced class representation within the neighborhoods defined by \(\mathcal{N}_{\mathbf{c}}\). The first strategy selects samples randomly, while the second chooses those with the highest confidence levels, aiding the LLM in generating explanations centered on model certainty.
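A minimal sketch of this neighborhood construction is shown below; the noise scale, candidate count, and helper names are illustrative assumptions, `x` is assumed to be a 1-D NumPy array, and the black-box model `f` is assumed to return class labels for a batch of inputs.

```python
import numpy as np

def sample_icl_neighborhood(f, x, n_icl=16, sigma=0.1, n_candidates=1000, seed=0):
    """Draw Gaussian perturbations around x, label them with the black-box model f,
    and keep a class-balanced subset of n_icl samples for the prompt."""
    rng = np.random.default_rng(seed)
    perturbed = x + rng.normal(0.0, sigma, size=(n_candidates, x.shape[0]))
    labels = np.asarray(f(perturbed))
    classes = np.unique(labels)
    keep = []
    for c in classes:                                   # balanced selection per class
        idx = np.where(labels == c)[0]
        keep.extend(idx[: n_icl // len(classes)])
    return perturbed[keep], labels[keep]

def serialize(sample, label, names="ABCDEF"):
    """Render one perturbed sample in the prompt's serialization format."""
    feats = ", ".join(f"{n} = {v:.3f}" for n, v in zip(names, sample))
    return f"Input: {feats}\nOutput: {label}"
```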
Given \(n_{\text{ICL}}\) input-output pairs from \(\mathcal{N}_{\mathbf{c}}\) and the test sample \(\mathbf{x}\) to be explained, we add context with respect to the predictive model, dataset, and task description in our prompt to aid the LLM in behaving like a post hoc explanation method. Motivated by the local neighborhood approximation works in XAI, the Perturbation-based ICL prompting strategy presumes that the local behavior of \(f\) is a simple linear decision boundary, contrasting with the often globally exhibited complex nonlinear decision boundary. Hence, assuming a sufficient number of perturbations in \(\mathcal{N}_{\mathbf{c}}\), the LLM is expected to accurately approximate the black box model's behavior and utilize this information to identify the top-\(k\) most important features. The final prompt structure is given below, where the _"Context"_ provides the LLM with the background of the underlying ML model, the number of features in the dataset, and model predictions, _"Dataset"_ denotes the \(n_{\text{ICL}}\) instances sampled from the
neighborhood \(\mathcal{N}_{\mathbf{c}}\) of **x**, _"Question"_ is the task we want our LLM to perform, and _"Instructions"_ are the guidelines we want the LLM to follow while generating the output explanations.
_# Perturbation-based ICL Prompt Template_
**Context:** _"We have a two-class machine learning model that predicts based on 6 features: [A', 'B', 'C', 'D', 'E', 'F']. The model has been trained on a dataset and has made the following predictions."_
**Dataset:**
_Input: A = -0.158, B = 0.293, C = 0.248, D = 1.130, E = 0.013, F = -0.038_
_Output: 0_
_..._
_Input: A = 0.427, B = 0.016, C = -0.128, D = 0.949, E = 0.035, F = -0.045_
_Output: 1_
_Question:_
_"Based on the model's predictions and the given dataset, what appears to be the top five most important features in determining the model's prediction?"_
_Instructions:_
_"Think about the question. After explaining your reasoning, provide your answer as the top five most important features ranked from most important to least important, in descending order. Only provide the feature names on the last line. Do not provide any further details on the last line."_
### Prediction-based ICL
Here, we devise Prediction-based ICL, a strategy closer to the traditional ICL prompting style, where the primary objective remains the same -- understanding the workings of the black-box model \(f\) by identifying the top-\(k\) most important features. This strategy positions the LLM to first emulate the role of the black-box model by making predictions, staging it to extract important features that influenced its decision. We follow the perturbation strategy of Sec. 3.1 and construct the Prediction-based ICL prompt using \(n_{\text{ICL}}\) input-output pairs from \(\mathcal{N}_{\mathbf{c}}\). The main difference in the Prediction-based ICL prompting strategy lies in the structuring of the prompt, which is described below:
_# Prediction-based ICL Prompt Template_
**Context:** _"We have a two-class machine learning model that predicts based on 6 features: [A', 'B', 'C', 'D', 'E', 'F']. The model has been trained on a dataset and has made the following predictions."_
**Dataset:**
_Input: A = 0.192, B = 0.240, C = 0.118, D = 1.007, E = 0.091, F = 0.025_
_Output: 0_
_..._
_Input: A = 0.709, B = -0.102, C = -0.177, D = 1.056, E = -0.056, F = 0.015_
_Output: 1_
_Input: A = 0.565, B = -0.184, C = -0.386, D = 1.003, E = -0.123, F = -0.068_
_Output:_
_Question:_
_"Based on the model's predictions and the given dataset, estimate the output for the final input. What appears to be the top five most important features in determining the model's prediction?"_
_Instructions:_
_"Think about the question. After explaining your reasoning, provide your answer as the top five most important features ranked from most important to least important, in descending order. Only provide the feature names on the last line. Do not provide any further details on the last line."_
Here, we construct the prompt using the task description followed by the \(n_{\text{ICL}}\) ICL samples and then ask the LLM to provide the predicted label for the test sample **x** and explain how it generated that label. The primary motivation behind the Prediction-based ICL prompting strategy is to investigate whether the LLM can learn the classification task using the ICL set and, if successful, identify the important features in the process. This approach aligns more closely with the traditional ICL prompting style, offering a different perspective on the problem.
### Instruction-based ICL
The Instruction-based prompting transitions from specifying task objectives to providing detailed guidance on the strategy for task execution. Rather than solely instructing the LLM on what the task entails, this strategy delineates how to conduct the given task. The objective remains to understand the workings of the black-box model and identify the top-\(k\) most important features. However, in
using step-by-step directives, we aim to induce a more structured and consistent analytical process within the LLM to target more faithful explanations. The final prompt structure is as follows:
```
# Instruction-based ICL Prompt Template
Context: "We are analyzing a fixed set of perturbations around a specific input to understand the influence of each feature on the model's output. The dataset below contains the change in features 'A' through 'F' (with negative values denoting a decrease in a feature's value) and the corresponding outputs."
Dataset:
Change in Input: A: -0.217, B: 0.240, C: 0.114, D: 0.007, E: 0.091, F: 0.025
Change in Output: -1
Change in Input: A: 0.185, B: -0.185, C: -0.232, D: -0.130, E: -0.020, F: 0.015
Change in Output: 0
Instructions: "For each feature, starting with 'A' and continuing to 'F':
1. Analyze the feature in question:
a. Compare instances where its changes are positive to where its changes are negative and explain how this difference correlates with the change in output.
b. Rate the importance of the feature in determining the output on a scale of 0-100, considering both positive and negative correlations. Ensure to give equal emphasis to both positive and negative correlations and avoid focusing only on absolute values.
2. After analyzing the feature, position it in a running rank compared to the features already analyzed. For instance, after analyzing feature 'B', determine its relative importance compared to 'A' and position it accordingly in the rank (e.g., BA or AB). Continue this process until all features from 'A' to 'F' are ranked.
Upon completion of all analyses, provide the final rank of features from 'A' to 'F' on the last line. Avoid providing general methodologies or suggesting tools. Justify your findings as you go."
```
Here, we provide some general instructions to the LLM for understanding the notion of important features and how to interpret them through the lens of correlation analysis. To achieve this, we instruct LLMs to analyze each feature sequentially and ensure that both positive and negative correlations are equally emphasized. The LLM assigns an importance score for each feature in the given dataset and then positions it in a running rank. This rank is necessary to differentiate features and avoid ties in the LLM's evaluations. The final line ensures that the LLM's responses are strictly analytical, minimizing non-responsiveness or digressions into tool or methodology recommendations.
### Explanation-based ICL
Recent studies show that LLMs can learn new tasks through ICL, enabling them to excel in new downstream tasks by merely observing a few instances of the task in the prompt. In the Explanation-based ICL prompting strategy, we leverage the ICL capability of LLMs to alleviate the computational complexity of some post hoc explanation methods. In particular, we investigate whether an LLM can mimic the behavior of a post hoc explainer by looking at a few input, output, and explanation examples. We generate explanations for a given test sample **x** using LLMs by utilizing the ICL framework and supplying \(n_{\text{ICL}}\) input, output, and explanation examples to the LLM, where the explanations in the ICL set can be generated using any post hoc explanation method. For constructing the ICL set, we randomly select \(n_{\text{ICL}}\) input instances \(\textbf{X}_{\text{ICL}}\) from the ICL split of the dataset and generate their predicted labels \(\textbf{y}_{\text{ICL}}\) using model \(f\). Next, we generate explanations \(\textbf{E}_{\text{ICL}}\) for samples (\(\textbf{X}_{\text{ICL}}\), \(\textbf{y}_{\text{ICL}}\)) using any post hoc explainer. Using the above input, output, and explanation samples, we construct a prompt by concatenating each example as follows:
```
_# Explanation-based ICL Prompt Template Input: A = 0.172, B = 0.000, C = 0.000, D = 1.000, E = 0.000, F = 0.000 Output: 1 Explanation: A,C,B,F,D,E... Input: A = 0.052, B = 0.053, C = 0.073, D = 0.000, E = 0.000, F = 1.000 Output: 0 Explanation: A,B,C,E,F,D Input: A = 0.180, B = 0.222, C = 0.002, D = 0.000, E = 0.000, F = 1.000 Output: 0 Explanation:
```
Using the Explanation-based ICL prompting strategy, we aim to investigate the learning capability of LLMs such that they can generate faithful explanations by examining the \(n_{\text{ICL}}\) demonstration pairs of inputs, outputs, and explanations generated by a state-of-the-art post hoc explainer.
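For concreteness, the snippet below sketches how such a prompt could be assembled from the ICL split; the feature names, number formatting, and helper names are illustrative assumptions rather than the authors' released code.

```python
def format_features(x, names=("A", "B", "C", "D", "E", "F")):
    # Serialize a feature vector as "A = 0.172, B = 0.000, ..."
    return ", ".join(f"{n} = {v:.3f}" for n, v in zip(names, x))

def build_explanation_icl_prompt(icl_inputs, icl_labels, icl_explanations,
                                 test_input, test_label):
    """Concatenate n_ICL (input, output, explanation) demonstrations, then the query.

    icl_explanations are comma-separated feature rankings produced by any
    post hoc explainer (e.g., LIME) on the ICL split; the final 'Explanation:'
    line is left blank for the LLM to complete.
    """
    lines = []
    for x, y, expl in zip(icl_inputs, icl_labels, icl_explanations):
        lines += [f"Input: {format_features(x)}", f"Output: {y}", f"Explanation: {expl}"]
    lines += [f"Input: {format_features(test_input)}", f"Output: {test_label}", "Explanation:"]
    return "\n".join(lines)
```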
## 4 Experiments
Next, we evaluate the effectiveness of LLMs as post hoc explainers. More specifically, our experimental analysis focuses on the following questions: Q1) Can LLMs generate faithful post hoc explanations? Q2) Do LLM-Augmented post hoc explainers achieve similar faithfulness vs. their vanilla counterpart? Q3) Are LLMs better than state-of-the-art post hoc explainers at identifying the most important feature? Q4) Is Gpt-4 a better explainer than Gpt-3.5? Q5) Are changes to the LLM's prompting strategy necessary for generating faithful explanations?
### Datasets and Experimental Setup
We first describe the datasets and models used to study the reliability of LLMs as post hoc explainers and then outline the experimental setup.
**Datasets.** Following previous LLM works (Hegselmann et al., 2023), we performed analysis on four real-world tabular datasets: **Blood**(Yeh et al., 2009) having four features, **Recidivism**(ProPublica) having six features, **Adult**(Kaggle) having 13 features, and **Default Credit**(UCI) having 10 features. The datasets come with a random train-test split, and we further subdivide the train set, allocating 80% for training and the remaining 20% for ICL sample selection, as detailed in Sec. 3.4. We use a random set of 100 samples from the test split to generate explanations for all of our experiments.
**Predictive Models.** We consider two ML models with varying complexity in our experiments: i) Logistic Regression (LR) and ii) Artificial Neural Networks (ANN). We use PyTorch (Paszke et al., 2019) to implement the ANNs with the following combination of hidden layers: one layer of size 16 for the LR model; and three layers of size 64, 32, and 16 for the ANN, using ReLU for the hidden layers and Softmax for the output (see Table 1 for predictive performances of these models).
**Large Language Model.** We consider Gpt-3.5 and Gpt-4 as language models for all experiments.
**Baseline Explanation Methods.** We use six post hoc explainers as baselines to investigate the effectiveness of explanations generated using LLMs: LIME (Ribeiro et al., 2016), SHAP (Lundberg and Lee, 2017), Vanilla Gradients (Zeiler and Fergus, 2014), SmoothGrad (Smilkov et al., 2017), Integrated Gradients (Sundararajan et al., 2017), and Gradient x Input (ITG) (Shrikumar et al., 2017).
**Performance Metrics.** We employ four distinct metrics to measure the faithfulness of an explanation. To quantify the faithfulness of an explanation where there exists a ground-truth top-\(k\) explanation for each test input (_i.e._, LR model coefficients), we use the Feature Agreement (FA) and Rank Agreement (RA) metrics introduced in Krishna et al. (2022), which compares the LLM's top-\(k\) directly with the model's ground-truth. The FA and RA metrics range from \([0,1]\), where 0 means no agreement and 1 means full agreement. However, in the absence of a top-\(k\) ground-truth explanation (as is the case with ANNs), we use the Prediction Gap on Important feature perturbation (PGI) and the Prediction Gap on Unimportant feature perturbation (PGU) metrics from OpenXAI (Agarwal et al., 2022). While PGI measures the change in prediction probability that results from perturbing the features deemed as influential, PGU examines the impact of perturbing unimportant features. Here, the perturbations are generated using Gaussian noise \(\mathcal{N}(0,\sigma^{2})\).
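As a reference for how these agreement metrics behave, the following minimal sketch computes FA and RA for top-\(k\) rankings; it follows the metric definitions rather than the OpenXAI implementation, and the example rankings are made up.

```python
def feature_agreement(pred_rank, true_rank, k):
    # Fraction of the ground-truth top-k features that also appear in the predicted top-k.
    return len(set(pred_rank[:k]) & set(true_rank[:k])) / k

def rank_agreement(pred_rank, true_rank, k):
    # Fraction of top-k positions where predicted and ground-truth features match exactly.
    return sum(p == t for p, t in zip(pred_rank[:k], true_rank[:k])) / k

# Toy example: ground truth from |LR coefficients|, prediction parsed from an LLM reply.
print(feature_agreement(["D", "A", "C", "B"], ["D", "C", "A", "B"], k=2))  # 0.5
print(rank_agreement(["D", "A", "C", "B"], ["D", "C", "A", "B"], k=2))     # 0.5
```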
**Implementation Details.** To generate perturbations for each ICL prompt, we use a neighborhood size of \(\sigma=0.1\) and generate local perturbation neighborhoods \(\lambda_{\kappa}\) for each test sample \(\mathbf{x}\). We sample \(\pi_{\kappa}=10,000\) points for each neighborhood, where the values for \(\sigma\) and \(\pi_{\kappa}\) were chosen to give an equal number of samples for each class, whenever possible. We present perturbations in two main formats: as the raw perturbed inputs alongside their corresponding outputs (shown in the Sec. 3.1 and 3.2 templates); or as the change between each perturbed input and the test sample, and the corresponding change in output (shown in Sec. 3.3). The second approach significantly aids the LLM in discerning the most important features, providing only the changes relative to the test sample and bypassing the LLM's need to internally compute these differences. As a result, the constant value of the original test point becomes irrelevant, and this clearer, relational view allows
the LLM to focus directly on variations in input and output. Note that both of these formats are absent from Sec. 3.4, which uses test samples directly and does not compute perturbations.
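A minimal sketch of this perturbation-and-difference formatting is given below; the class balancing described above is omitted, and the serialization details are assumptions for illustration.

```python
import numpy as np

def perturbation_lines(predict_fn, x_test, sigma=0.1, n_samples=16, seed=0,
                       names=("A", "B", "C", "D", "E", "F")):
    """Sample a Gaussian neighborhood around x_test and serialize each point as the
    change in input and the corresponding change in the model's predicted label."""
    rng = np.random.default_rng(seed)
    y_test = int(predict_fn(x_test[None])[0])
    blocks = []
    for _ in range(n_samples):
        x_pert = x_test + rng.normal(0.0, sigma, size=x_test.shape)
        delta_x = x_pert - x_test
        delta_y = int(predict_fn(x_pert[None])[0]) - y_test
        feats = ", ".join(f"{n}: {d:.3f}" for n, d in zip(names, delta_x))
        blocks.append(f"Change in Input: {feats}\nChange in Output: {delta_y}")
    return "\n\n".join(blocks)
```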
For the LLMs, we use OpenAI's text generation API with a temperature of \(\tau=0\) for our main experiments. To evaluate the LLM explanations, we extract and process its answers to identify the top-\(k\) most important features. We first save each LLM query's reply to a text file and use a script to extract the features. We added explicit instructions like "_...provide your answer as a feature name on the last line. Do not provide any further details on the last line._" to ensure reliable parsing of LLM outputs. In rare cases, the LLM won't follow our requested response format or it replies with "_I don't have enough information to determine the most important features._" See Appendix 6.1 for further details. Codes for our framework are available here1.
Footnote 1: GitHub link: [https://github.com/AI4LIFE-GROUP/LLM_Explainer](https://github.com/AI4LIFE-GROUP/LLM_Explainer)
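The post-processing described above can be as simple as the sketch below; the regular expression, feature list, and refusal handling are illustrative assumptions, not the released parsing script.

```python
import re

VALID_FEATURES = {"A", "B", "C", "D", "E", "F"}

def parse_top_k_features(llm_reply, k):
    """Extract a ranked feature list from the last non-empty line of an LLM reply.

    Returns None when the reply is empty, refuses to answer, or contains no valid
    feature names, so such queries can be logged and excluded.
    """
    lines = [line.strip() for line in llm_reply.splitlines() if line.strip()]
    if not lines:
        return None
    tokens = [t.upper() for t in re.findall(r"[A-Za-z]+", lines[-1])]
    ranked, seen = [], set()
    for t in tokens:
        if t in VALID_FEATURES and t not in seen:  # keep order, drop duplicates
            ranked.append(t)
            seen.add(t)
    return ranked[:k] if ranked else None
```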
### Results
Next, we discuss experimental results that answer key questions highlighted at the beginning of this section about LLMs as post hoc explainers (Q1-Q5).
**1) LLMs can generate faithful explanations.** We compare our proposed prompting-based LLM explanation strategies to existing post hoc explainers on the task of identifying important features for understanding ANN (Fig. 2) and LR (Fig. 3) model predictions across four real-world datasets (see Table 2). For the ANN model, the LLM-based explanations perform on par with the gradient-based methods (despite having white-box access to the underlying black-box model) and LIME (that approximates model behavior using a surrogate linear model). In particular, our proposed prompting strategies perform better than ITG, SHAP, a Random baseline, and a 16-sample version of LIME, namely LIME\({}_{16}\), which is analogous to the number of ICL samples used in the LLM prompts. We observe that LLM explanations, on average, achieve 51.74% lower PGU and 163.40% higher PGI than ITG, SHAP, and Random baseline for larger datasets (more number of features) like Adult and Credit compared to 25.56% lower PGU and 22.86% higher PGI for Blood and Recidivism datasets. While our prompting strategies achieve competitive PGU and PGI scores among themselves across different datasets for ANN models, the Instruction-based ICL strategy, on average across datasets, achieves higher FA and RA scores for the LR model (Fig. 3). We find that gradient-based methods and LIME achieve almost perfect scores on FA and RA metrics as they are able to get accurate model gradients and approximate the model behavior with high precision. Interestingly, the LLM-based explanations perform better than ITG, SHAP, and Random baseline methods, even for a linear model.
**2) LLM-augmented explainers achieve similar faithfulness to their vanilla counterparts.** We evaluate the faithfulness of the explanations generated using the Explanation-based ICL prompting strategy. Our results show that LLMs generate explanations that achieve faithfulness performance on par with those generated using state-of-the-art post hoc explanation methods for LR and large ANN predictive models across all four datasets (Fig. 4; see Table 3 for complete results) and four evaluation metrics. We demonstrate that very few in-context examples (here, \(n_{\text{ICL}}=4\)) are sufficient to make the LLM mimic the behavior of any post hoc explainer and generate faithful explanations, suggesting the effectiveness of LLMs as an explanation method. Interestingly, for low-performing explanation methods like ITG and SHAP, we find that explanations generated using their LLM
Figure 2: PGU and PGI scores of explanations generated using post hoc methods and LLMs (Instruction-based, Prediction-based, and Perturbation-based ICL prompting strategies) for an ANN model. On average, across four datasets, we find that LLM-based explanations perform on par with gradient-based and LIME methods and outperform LIME\({}_{16}\), ITG, and SHAP methods.
counterparts achieve higher feature and rank agreement (Fig. 4) scores in the case of LR models, hinting that LLMs can use their internal knowledge to improve the faithfulness of explanations.
**3) LLMs accurately identify the most important feature.** To demonstrate the LLM's capability in identifying the most important feature, we show the faithfulness performance of generated explanations across four datasets. Our results in Fig. 5 demonstrate the impact of different top-\(k\) feature values on the faithfulness of explanations generated using our prompting strategies. We observe a steady decrease in RA scores (0.722 for top-\(k=1\) vs. 0.446 for top-\(k=2\) vs. 0.376 for top-\(k=4\)) across three datasets (Blood, Credit, and Adult) as the top-\(k\) value increases. Interestingly, the RA value for top-\(k=1\) for the Recidivism dataset is almost zero, though this can be attributed to the LLM's handling of the two primary features, whose LR coefficients have nearly identical magnitudes; the LLM generally places them both within the top two but, due to their similar importance, defaults to alphabetical order. However, when employing our Instruction-based ICL running-rank strategy, we find that the RA value rises from 0 to 0.5, highlighting the influence of nuanced prompts on the LLM's ranking mechanism. Further, we observe that LLMs, on average across four datasets and three prompting strategies, faithfully identify top-\(k=1\) features with 72.19% FA score (see Fig. 12), and their faithfulness performance takes a hit for higher top-\(k\) values.
**4) Gpt-3.5 vs. Gpt-4.** An interesting question is how the reasoning capability of an LLM affects the faithfulness of the generated explanations. Hence, we compare the output explanations from Gpt-3.5 and Gpt-4 models to understand black-box model predictions. Results in Fig. 6-8 show that explanations generated using Gpt-4, on average across four datasets, achieve higher faithfulness scores than explanations generated using the Gpt-3.5 model. Across four prompting strategies, Gpt-4, on average, obtains 4.53% higher FA and 48.01% higher RA scores than Gpt-3.5 on explanations generated for the Adult dataset. We attribute this increase in performance of Gpt-4 to its superior reasoning capabilities compared to the Gpt-3.5 model (OpenAI, 2023). In Figure 6, we find that Instruction-based ICL, on average across four datasets, outperforms the Perturbation-based ICL and Prediction-based ICL strategies on the RA metric. Further, our results in Fig. 8 show that the faithfulness performance of Gpt-4 and Gpt-3.5 are on par with each other when evaluated using our Explanation-based ICL strategy, which highlights that both models are capable of emulating the behavior of a post hoc explainer by looking at a few input, output, and explanation examples.
Figure 4: Faithfulness metrics on the Recidivism dataset for six post hoc explainers and their LLM-augmented counterparts for a given LR (left) and ANN (right) model. LLM-augmented explanations achieve on-par performance _w.r.t._ post hoc methods across all four metrics (see Table 3 for complete results on all other datasets).
Figure 3: FA and RA scores of explanations generated using post hoc methods and LLMs (Instruction-based, Prediction-based, and Perturbation-based ICL prompting strategies) for an LR model. On average, across four datasets, we find that gradient-based methods and the LIME method (with 1000 samples) outperform all other methods and Instruction-based ICL explanations outperform other two prompting strategies across all datasets.
**5) Ablation Study.** We conduct ablations on several components of the prompting strategies, namely the number of ICL samples, perturbation format, and temperature values. Results show that our choice of hyperparameter values is important for the prompting techniques to generate faithful post hoc explanations (Figs. 7,10). Our ablation on the number of ICL samples (Fig. 7) shows that neither too few nor too many ICL samples are beneficial for LLMs to generate post hoc explanations. While too few ICL samples provide insufficient information for the LLM to approximate the predictive behavior of the underlying ML model, a large number of ICL samples lengthens the input context, and the LLM struggles to retrieve relevant information from longer prompts, which decreases the faithfulness of the generated explanations. In contrast to LIME, the faithfulness of LLM explanations deteriorates upon increasing the number of ICL samples (analogous to the neighborhood of a given test sample). Across all four prompting strategies, we observe a drop in FA, RA, and PGI scores as we increase the number of ICL samples to 64. Further, our ablation on the temperature parameter of the LLMs shows that the faithfulness performance of the explanations does not change much across different values of temperature (see Appendix Fig. 10). Finally, results in Fig. 11 show that our prompting strategies achieve higher faithfulness when using the difference between the perturbed and test sample as input in the ICL sample.
## 5 Conclusion
We introduce and explore the potential of using state-of-the-art LLMs as post hoc explainers. To this end, we propose four prompting strategies -- Perturbation-based ICL, Prediction-based ICL, Instruction-based ICL, and Explanation-based ICL-- with varying levels of information about the local neighborhood of a test sample to generate explanations using LLMs for black-box model predictions. We conducted several experiments to evaluate LLM-generated explanations using four benchmark datasets. Our results across different prompting strategies highlight that LLMs can generate faithful explanations and consistently outperform methods like ITG and SHAP. Our work paves the way for several exciting future directions in explainable artificial intelligence (XAI) to explore LLM-based explanation frameworks.
|
2310.14842 | Joint Non-Linear MRI Inversion with Diffusion Priors | Magnetic resonance imaging (MRI) is a potent diagnostic tool, but suffers
from long examination times. To accelerate the process, modern MRI machines
typically utilize multiple coils that acquire sub-sampled data in parallel.
Data-driven reconstruction approaches, in particular diffusion models, recently
achieved remarkable success in reconstructing these data, but typically rely on
estimating the coil sensitivities in an off-line step. This suffers from
potential movement and misalignment artifacts and limits the application to
Cartesian sampling trajectories. To obviate the need for off-line sensitivity
estimation, we propose to jointly estimate the sensitivity maps with the image.
In particular, we utilize a diffusion model -- trained on magnitude images only
-- to generate high-fidelity images while imposing spatial smoothness of the
sensitivity maps in the reverse diffusion. The proposed approach demonstrates
consistent qualitative and quantitative performance across different
sub-sampling patterns. In addition, experiments indicate a good fit of the
estimated coil sensitivities. | Moritz Erlacher, Martin Zach | 2023-10-23T12:08:02Z | http://arxiv.org/abs/2310.14842v1 | # Joint Non-Linear MRI Inversion with Diffusion Priors
###### Abstract
Magnetic resonance imaging (MRI) is a potent diagnostic tool, but suffers from long examination times. To accelerate the process, modern MRI machines typically utilize multiple coils that acquire sub-sampled data in parallel. Data-driven reconstruction approaches, in particular diffusion models, recently achieved remarkable success in reconstructing these data, but typically rely on estimating the coil sensitivities in an off-line step. This suffers from potential movement and misalignment artifacts and limits the application to Cartesian sampling trajectories. To obviate the need for off-line sensitivity estimation, we propose to jointly estimate the sensitivity maps with the image. In particular, we utilize a diffusion model -- trained on magnitude images only -- to generate high-fidelity images while imposing spatial smoothness of the sensitivity maps in the reverse diffusion. The proposed approach demonstrates consistent qualitative and quantitative performance across different sub-sampling patterns. In addition, experiments indicate a good fit of the estimated coil sensitivities.
## 1 Introduction
Magnetic resonance imaging (MRI) provides detailed images of the human anatomy with excellent soft-tissue contrast non-invasively. However, patient throughput is limited by long examination times, which can be reduced by acquiring less data. In recent years, reconstruction methods for sub-sampled MRI have seen a lot of progress. Classical variational approaches impose prior knowledge -- such as gradient- or wavelet-sparsity [10, 13] -- onto the reconstruction. In general, such hand-crafted priors fail to accurately model the underlying data distribution [11] and purely data-driven approaches now represent state-of-the-art in MRI reconstruction [5, 6, 7, 8, 12, 20, 22, 24, 26]. Methods following a discriminative approach directly map k-space to image-space. This necessitates data-image pairs, which are scarcely available [7, 22, 24]. Moreover, such methods do not generalize well to different acquisition modalities without retraining. In contrast, generative approaches learn the underlying data distribution, relying only on much more abundantly available DICOM data. In addition, they are able to generalize to different acquisition modalities by adapting the forward model [5, 6, 8, 12, 20, 26].
As a particular instantiation of generative models, diffusion models have recently gained a lot of interest [4, 18, 5]. They achieve high sample quality without requiring adversarial training [19]. On a high level, diffusion models generate samples by gradually transforming a "simple" distribution into the complex data distribution. This is typically modelled by stochastic differential equations (SDEs), where sampling from the prior distribution amounts to reversing the SDE using the gradient of the log perturbed data distribution learned by a deep neural network. This gradient is also known as the score function, hence such models are also commonly known as score-based generative models.
In this work, we propose to use diffusion models as an implicit prior during joint reconstruction of MRI images and coil sensitivities. Our approach is trained on broadly available DICOM data [28], resulting in a model that can be used for parallel imaging and different sub-sampling patterns without retraining. A sketch of our proposed approach is shown in Fig. 1.
### Related work
Diffusion models for MRI were proposed by different authors in recent years [5, 6, 8, 12, 20]. To combine the implicit diffusion-prior with the data-likelihood, the authors of [8] use annealed Langevin dynamics [18]. Notably, their work required complex-valued MRI images for training and relies on off-line sensitivity estimation, e.g. using ESPIRiT [23].
Off-line sensitivity estimation is prone to motion and misalignment artifacts, and not trivially applicable for non-Cartesian sampling trajectories [10, 25, 26]. To avoid off-line sensitivity estimation, [5] propose to apply a single score function -- trained on reference root sum of squares
(RSS) reconstructions -- to the real and imaginary parts of the individual coil images. Thus, the number of gradient evaluations needed in their algorithm is proportional to the number of acquisition coils. The authors also propose an alternative that relies on off-line sensitivity estimation, which suffers from the same shortcomings mentioned before. Joint image reconstruction and coil sensitivity estimation were first proposed by [25], who explicitly parametrized the sensitivities with low-order polynomials and used alternating minimization to solve the resulting optimization problem. [26] instead enforce spatial smoothness on the coil sensitivities during the optimization with inertial proximal alternating linearized minimization (iPALM) [15], and utilize an energy-based model (EBM) that models the data distribution to obtain high-fidelity reconstructions. However, EBM training is known to be unstable and requires hand tuning of many parameters [14, 26].
In this work, we propose a joint reconstruction algorithm that leverages an implicit prior given by a diffusion model. In contrast to [5], our algorithm requires only one gradient evaluation of the diffusion model in one iteration of the reverse diffusion. In addition, we propose a novel way to utilize a diffusion model for reconstruction problems of arbitrary image size, where only a cropped region follows the data distribution learned by the diffusion model.
## 2 Background
This paper is built on two main pillars: Diffusion models and joint non-linear MRI inversion. In this section, we will briefly introduce these concepts, but refer the reader to the provided references for more details.
### Diffusion models
Diffusion models circumvent the computation of the (typically intractable) partition function arising in maximum-likelihood density estimation by instead estimating the gradient of the log-prior, \(\nabla\log p_{X_{0}}\), which is referred to as the _score_. To facilitate efficient sampling and to accurately model low-density regions, the authors of [21] propose to construct an SDE
\[\mathrm{d}X=f(X,t)\,\mathrm{d}t+g(t)\,\mathrm{d}w \tag{1}\]
where \(w\) is the standard Wiener process, \(f:\mathbb{R}^{n}\times[0,\infty)\rightarrow\mathbb{R}^{n}\) is the drift and \(g:[0,\infty)\rightarrow\mathbb{R}\) is the diffusion coefficient. In this work we choose \(f\equiv 0\) and define \(g(t)=\sqrt{\tfrac{\mathrm{d}\sigma^{2}(t)}{\mathrm{d}t}}\) (the choice of \(\sigma:[0,T]\rightarrow\mathbb{R}\) is detailed in Sec. 3.2), which is known as the _variance exploding_ SDE and has close connections to classical heat diffusion [27].
\[\min_{\theta}\tilde{\mathbb{E}}\big{[}\gamma(t)\|\nabla_{1}\log p_{X_{t}|X_{ 0}}(x_{t},x_{0})-s_{\theta}(x_{t},t)\|_{2}^{2}/2\big{]}. \tag{2}\]
Here, \(\tilde{\mathbb{E}}[\,\cdot\,]\) denotes \(\int_{0}^{T}\mathbb{E}_{x_{0}\sim p_{X_{0}},x_{t}\sim p_{X_{t}|X_{0}}(\,\cdot \,,x_{0})}[\,\cdot\,]\,\mathrm{d}t\), \(s_{\theta}:\mathbb{R}^{n}\times[0,T]\rightarrow\mathbb{R}^{n}\) is the diffusion model and \(\gamma:[0,T]\rightarrow\mathbb{R}_{+}\) is a weighting function. \(T>0\) is an artificial time horizon that can be set to \(T=1\) without loss of generality. In our setup, \(\nabla_{1}\log p_{X_{t}|X_{0}}(x_{t},x_{0})=\frac{x_{t}-x_{0}}{\sigma^{2}(t)}\) and we choose \(\gamma(t)=\sigma^{2}(t)\). For more detail about the training procedure, we refer to [21].
To generate samples from the data distribution, we run the reverse time SDE
\[\mathrm{d}X=[f(X,t)-g^{2}(t)\nabla\log p_{X_{t}}(X)]\,\mathrm{d}t+g(t)\, \mathrm{d}\bar{w} \tag{3}\]
Figure 1: Proposed joint MRI image reconstruction and coil sensitivity estimation approach. In the reverse diffusion, the image and the coil sensitivities are jointly estimated.
starting from \(x_{T}\sim p_{X_{T}}\) until \(t=0\), where we use the learnt score model \(s_{\theta}(\,\cdot\,,t)\) in place of \(\nabla\log p_{X_{t}}\). With our choice of \(f\) and \(g\), a straightforward time discretization of this process yields
\[x_{i}\gets x_{i+1}+(\sigma_{i+1}^{2}-\sigma_{i}^{2})s_{\theta}(x_{i+1}, \sigma_{i+1})+\sqrt{\sigma_{i+1}^{2}-\sigma_{i}^{2}}z \tag{4}\]
where \(z\sim\mathcal{N}(0,I)\).
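In code, one discretized reverse step of Eq. (4) amounts to a few tensor operations; the sketch below assumes a trained score network `score_model(x, sigma)` and scalar noise levels, and is not tied to any particular implementation.

```python
import math
import torch

@torch.no_grad()
def reverse_ve_sde_step(x, score_model, sigma_hi, sigma_lo):
    # x_i <- x_{i+1} + (sigma_{i+1}^2 - sigma_i^2) * s_theta(x_{i+1}, sigma_{i+1})
    #              + sqrt(sigma_{i+1}^2 - sigma_i^2) * z,   with z ~ N(0, I)
    z = torch.randn_like(x)
    step = sigma_hi ** 2 - sigma_lo ** 2
    return x + step * score_model(x, sigma_hi) + math.sqrt(step) * z
```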
### Non-linear MRI inversion
In this work we assume the acquisition model
\[y=\mathcal{A}(x,\Sigma)+\epsilon \tag{5}\]
where the data \(y\in\mathbb{C}^{nC}\) are acquired through the non-linear measurement operator
\[\begin{split}\mathcal{A}:\mathbb{R}^{n}\times\mathbb{C}^{nC}&\rightarrow\mathbb{C}^{nC}\\ (x,\Sigma)&\mapsto\begin{pmatrix}\mathcal{F}_{\Omega}(c_{1}\odot x\oslash|\Sigma|_{\mathcal{C}})\\ \mathcal{F}_{\Omega}(c_{2}\odot x\oslash|\Sigma|_{\mathcal{C}})\\ \vdots\\ \mathcal{F}_{\Omega}(c_{C}\odot x\oslash|\Sigma|_{\mathcal{C}})\end{pmatrix}\end{split} \tag{6}\]
acting on the underlying image \(x\in\mathbb{R}^{n}\) with \(\epsilon\in\mathbb{C}^{nC}\) summarizing the additive acquisition noise. In the above, the shorthand \(\Sigma\coloneqq(c_{j})_{j=1}^{C}\in\mathbb{C}^{nC}\) denotes the sensitivity maps of the \(C\in\mathbb{N}\) coils and \(|\,\cdot\,|_{\mathcal{C}}:\mathbb{C}^{nC}\rightarrow\mathbb{R}_{+}^{n}\) denotes the RSS map \((c_{j})_{j=1}^{C}\mapsto\sqrt{\sum_{j=1}^{C}|c_{j}|^{2}}\) where \(|\,\cdot\,|\) is the complex modulus acting element-wise on its argument (see [26] on why the division with \(|\Sigma|_{\mathcal{C}}\) is necessary). Further, \(\mathcal{F}_{\Omega}:\mathbb{C}^{n}\rightarrow\mathbb{C}^{n}\) is the (possibly non-uniform) Fourier transform acquiring the spectrum at locations specified by the trajectory \(\Omega\). For the sake of simplicity, we only consider the case where \(\mathcal{F}_{\Omega}=MF\), where \(F:\mathbb{C}^{n}\rightarrow\mathbb{C}^{n}\) is the standard Fourier transform on the Cartesian grid and \(M\) is a binary diagonal matrix specifying the acquired frequencies (hence also \(n=\bar{n}\)).
Motivated by recent advances in non-linear inversion, in this work we tackle the reconstruction by jointly estimating the image with the coil sensitivities. In detail, let \(D:\mathbb{R}^{n}\times\mathbb{C}^{nC}\rightarrow\mathbb{R}_{+}\) denote the least-squares objective of (5), i.e.
\[D:(x,\Sigma)\mapsto\|\mathcal{A}(x,\Sigma)-y\|_{2}^{2}/2 \tag{7}\]
The optimization problem \(\arg\min_{(x,\Sigma)}D(x,\Sigma)\) is highly underspecified due to ambiguities between \(x\) and \(\Sigma\) in (6). In addition, reconstructed images would exhibit strong subsampling artifacts. We resolve the ambiguities by imposing a hand-crafted smoothness prior on the coil sensitivities and utilize the implicit prior provided by a diffusion model to generate high fidelity reconstructions. We discuss the details in the next section.
## 3 Methods
For the reconstruction of the MRI image we follow the predictor-corrector sampling introduced by [5, 21]. To ensure data consistency during the reverse diffusion, similar to [5], we utilize gradient updates of the form
\[x_{i}\gets x_{i+1}-\lambda_{i+1}\nabla_{1}D(x_{i+1},\Sigma_{i+1}). \tag{8}\]
In detail, let \(\mathcal{A}|_{\Sigma}:\mathbb{R}^{n}\rightarrow\mathbb{C}^{nC}:x\mapsto \mathcal{A}(x,\Sigma)\) denote the linearization of \(\mathcal{A}\) in the first argument around \(\Sigma\). Then,
\[\nabla_{1}D(x,\Sigma)=(\mathcal{A}|_{\Sigma})^{*}(\mathcal{A}(x,\Sigma)-y) \tag{9}\]
with
\[\begin{split}(\mathcal{A}|_{\Sigma})^{*}:\mathbb{C}^{nC}& \rightarrow\mathbb{R}^{n}\\ (y_{j})_{j=1}^{C}&\mapsto\mathrm{Re}\biggl{(}\sum_{ j=1}^{C}\mathcal{F}_{\Omega}^{-1}(y_{j})\odot\bar{c}_{j}\oslash|\Sigma|_{C} \biggr{)}.\end{split} \tag{10}\]
denoting the adjoint of \(\mathcal{A}|_{\Sigma}\) and \(\lambda_{i}\in[0,1]\) is the step size (see [5] on why it is restricted to \([0,1]\)).
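For the masked Cartesian case \(\mathcal{F}_{\Omega}=MF\), this gradient can be written compactly with FFTs. The sketch below is a simplified PyTorch rendering of Eqs. (9) and (10); the centered, orthonormal FFT convention and the small \(\epsilon\) added to the RSS map are assumptions made for illustration.

```python
import torch

def fft2c(img):
    # Centered 2-D FFT over the last two dimensions (orthonormal scaling assumed).
    return torch.fft.fftshift(
        torch.fft.fft2(torch.fft.ifftshift(img, dim=(-2, -1)), norm="ortho"),
        dim=(-2, -1))

def ifft2c(ksp):
    return torch.fft.fftshift(
        torch.fft.ifft2(torch.fft.ifftshift(ksp, dim=(-2, -1)), norm="ortho"),
        dim=(-2, -1))

def data_consistency_grad(x, coils, y, mask, eps=1e-8):
    """Gradient of 0.5 * || M F(c_j * x / |Sigma|) - y_j ||_2^2 w.r.t. x.

    x: (H, W) real image, coils: (C, H, W) complex sensitivities,
    y: (C, H, W) masked k-space data, mask: (H, W) binary sampling mask.
    """
    rss = coils.abs().pow(2).sum(0).sqrt().clamp_min(eps)     # |Sigma|_C
    residual = mask * fft2c(coils * x / rss) - y              # A(x, Sigma) - y
    backproj = ifft2c(mask * residual) * coils.conj() / rss   # per-coil adjoint
    return backproj.sum(0).real                               # Eq. (10)
```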
To apply the diffusion model trained on \(\tilde{n}=320\times 320\) images to the data of resolution \(n=640\times w,w\in\{368,372\}\), we propose the following. For the input of the diffusion model, we center-crop the image to \(\tilde{n}=320\times 320\) (denoted by a \(\mathfrak{q}_{1}\) in the superscript in Algorithm 1). After the reverse diffusion update steps, we have found it beneficial to pad the result with the image
\[x_{i}^{\text{fade}}=|F_{\Omega}^{-1}(y)|_{\mathcal{C}}+\sigma_{i}^{2}z \tag{11}\]
satisfying the forward SDE instead of the result of the gradient step on the data fidelity. The operator \(\mathrm{pad}:\mathbb{R}^{\tilde{n}}\times\mathbb{R}^{n}\rightarrow\mathbb{R} ^{n}\) in Algorithm 1 implements this padding.
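A possible implementation of this padding is sketched below: the reverse-diffusion result on the center crop is written back into a full-size background image that follows the forward SDE, as in Eq. (11). The center-crop placement is an assumption for illustration.

```python
import torch

def pad_with_background(x_crop, x_background):
    """Place the cropped reverse-diffusion result back into the full-size image.

    x_crop:       (320, 320) result of the reverse-diffusion update on the center crop
    x_background: (H, W) full-size image consistent with the forward SDE (Eq. 11)
    """
    out = x_background.clone()
    H, W = x_background.shape
    h, w = x_crop.shape
    top, left = (H - h) // 2, (W - w) // 2
    out[top:top + h, left:left + w] = x_crop
    return out
```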
### Estimating coil sensitivities during reverse diffusion
In addition to regularizing the image, we also estimate the coil sensitivities during the reverse diffusion process. In particular, we assume that the sensitivity maps are smoother than the imaged anatomy. To enforce smoothness, we closely follow [26]. In detail, during the iterations of their proposed algorithm, they smooth the individual coil sensitivities by
\[\mathrm{prox}_{\mu\tilde{B}}:c_{j}\mapsto(Q_{\mu}\circ\mathrm{Re})(c_{j})+ \imath(Q_{\mu}\circ\mathrm{Im})(c_{j}) \tag{12}\]
where \(Q_{\mu}:x\mapsto\mathcal{S}^{-1}\bigl{(}\mathrm{diag}(\xi_{i}+\mu)^{-1} \mathcal{S}(\mu x)\bigr{)}\) utilizes the discrete sine transform \(\mathcal{S}\) and \(\xi_{i}=2-2\cos\phi_{i}\) are the eigenvalues of the discrete Laplace operator for equally spaced angles \(\phi_{i}=\frac{\pi i}{n}\) for \(i=0,\ldots,n-1\) (see [16, Chap. 19.4] for more detail). In the above, \(\mu>0\) defines the strength of smoothing and \(\imath\) is the imaginary unit. Notice that this
can be interpreted as the proximal operator of a quadratic gradient penalization
\[\tilde{B}:c_{j}\mapsto\frac{1}{2}\big{(}\|\mathrm{D}\operatorname{Re}(c_{j})\|_{ 2}^{2}+\|\mathrm{D}\operatorname{Im}(c_{j})\|_{2}^{2}\big{)} \tag{13}\]
where \(\mathrm{D}:\mathbb{R}^{n}\to\mathbb{R}^{2n}\) is the discrete gradient operator (see e.g. [2]). Let \(B:(c_{j})_{j=1}^{C}\mapsto\sum_{j=1}^{C}\tilde{B}(c_{j})\), then by proximal calculus rules
\[\operatorname{prox}_{\mu B}(\Sigma)=(\operatorname{prox}_{\mu\tilde{B}}(c_{1} ),\ldots,\operatorname{prox}_{\mu\tilde{B}}(c_{C}))^{\top}. \tag{14}\]
The update step for the coil sensitivities can thus be summarized as
\[\Sigma_{i}\leftarrow\operatorname{prox}_{\mu_{i+1}B}(\Sigma_{i+1}-\mu_{i+1} \nabla_{2}D(x_{i+1},\Sigma_{i+1})) \tag{15}\]
where the gradient step on \(D\)
\[(\nabla_{2}D(x,\Sigma))_{j}=\left(\frac{\kappa_{j}}{|\Sigma|_{\mathcal{C}}}- \frac{\alpha_{j}}{|\Sigma|_{\mathcal{C}}^{3}}\right)\odot x \tag{16}\]
ensures data consistency and the proximal step enforces smoothness. In the above, \(\kappa_{j}=\mathcal{F}_{\Omega}^{-1}(s_{j})\), \(\alpha_{j}=c_{j}\odot\left(\sum_{E\in\{\operatorname{Re},\operatorname{Im}\}}E(c_{j})\odot E(s_{j})\right)\) with \(s_{j}=\mathcal{F}_{\Omega}(x\odot c_{j}\oslash|\Sigma|_{\mathcal{C}})-y_{j}\) denoting the residual of the \(j\)-th channel. We initialize the coil sensitivities with the zero-filled (ZF) estimate
\[c_{j}=\frac{\mathcal{F}_{\Omega}^{-1}(y_{j})}{|\mathcal{F}_{\Omega}^{-1}(y)|_{ \mathcal{C}}}. \tag{17}\]
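This initialization is straightforward to implement; a minimal sketch, assuming an orthonormal inverse FFT and zero-filled k-space input, is shown below.

```python
import torch

def zero_filled_sensitivities(y, eps=1e-8):
    """Initial coil sensitivities from the zero-filled reconstruction, Eq. (17).

    y: (C, H, W) complex zero-filled k-space data, one channel per coil.
    """
    coil_images = torch.fft.ifft2(y, norm="ortho")               # F_Omega^{-1}(y_j)
    rss = coil_images.abs().pow(2).sum(0).sqrt().clamp_min(eps)  # |F_Omega^{-1}(y)|_C
    return coil_images / rss                                     # c_j
```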
The algorithm is summarized in Algorithm 1.
```
Require: \(s_{\theta},\,M,\,N,\,\{\sigma_{i}\},\,\{\lambda_{i}\},\,\{x_{i}^{\text{fsc}}\},\,\{\mu_{i}\}\)
Result: \(x_{0},\,\Sigma_{0}\)
\(\Sigma_{N}=\frac{\mathcal{F}_{\Omega}^{-1}(y)}{|\mathcal{F}_{\Omega}^{-1}(y)|_{\mathcal{C}}}\)
\(\Sigma_{N}=\frac{\Sigma_{N}}{\|\Sigma_{N}\|_{\mathcal{C}}^{2}}\)
for \(i\gets N-1,\ldots,0\) do
    \(z\sim\mathcal{N}(0,I)\)
    \(x_{i}\leftarrow\operatorname{pad}(x_{i+1}^{\mathbf{R}}+(\sigma_{i+1}^{2}-\sigma_{i}^{2})s_{\theta}(x_{i+1}^{\mathbf{R}},\sigma_{i+1})+\sqrt{\sigma_{i+1}^{2}-\sigma_{i}^{2}}\,z,\;x_{i+1}^{\text{fsc}})\)
    \(x_{i}\gets x_{i}-\lambda_{i+1}\nabla_{1}D(x_{i+1},\Sigma_{i+1})\)
    for \(j\gets 1,\ldots,M\) do
        \(z\sim\mathcal{N}(0,I)\)
        \(\epsilon_{i}\gets 2r^{2}\|z\|_{2}^{2}/\|s_{\theta}(x_{i+1}^{j-1,\,\mathbf{R}},\sigma_{i+1})\|_{2}^{2}\)
        \(x_{i}^{j,\,\mathbf{R}}\leftarrow x_{i}^{j-1,\,\mathbf{R}}+\epsilon_{i}s_{\theta}(x_{i+1}^{j-1,\,\mathbf{R}},\sigma_{i+1})+\sqrt{2\epsilon_{i}}\,z\)
    end for
    \(x_{i}\leftarrow\operatorname{pad}(x_{i+1}^{M,\,\mathbf{R}},\;x_{i+1}^{\text{fsc}})\)
    \(x_{i}\gets x_{i+1}-\lambda_{i+1}\nabla_{1}D(x_{i+1},\Sigma_{i+1})\)
    \(\Sigma_{i}\leftarrow\operatorname{prox}_{\mu_{i+1}B}(\Sigma_{i+1}-\mu_{i+1}\nabla_{2}D(x_{i+1},\Sigma_{i+1}))\)
end for
```
**Algorithm 1** Diffusion-based joint MRI image reconstruction and coil sensitivity estimation.
### Implementation details
The model we use in this work follows the model of [21] and [5], both following a U-Net model architecture [17]. We use four BigGAN [1] residual blocks with additional skip connections for the latent vector and a self-attention block at the smallest scale. For each block a bias is added conditioned with Gaussian Fourier projections of the current time step \(t\in[0,T]\), resulting in an embedding of size \(128\times 1\times 1\). We employ \(\{64,64,128,128\}\) feature maps in the corresponding up and down sampling blocks. Our network has \(11\,951\,041\) trainable parameters, in comparison to \(61\,433\,601\) in the work of [5].
For the training, we closely follow [5] and [21]. We optimize the objective Eq. (2) for \(10\,000\) epochs using a batch size of \(3\) with Adam [9] (\(\beta_{1}=0.9\), \(\beta_{2}=0.999\), learning rate \(10^{-4}\)). As proposed by [19], exponential moving average is used during training with a momentum of \(0.999\). For the noise variance schedule, we use the geometric series \(\sigma(t)=\sigma_{\text{min}}(\frac{\sigma_{\text{max}}}{\sigma_{\text{min}}})^{t}\), with \(\sigma_{\text{min}}=0.01\) and \(\sigma_{\text{max}}=378\). Training was done on an NVIDIA TITAN V with 12 GB of memory, resulting in 20 days of training.
For sampling, we set \(N=1000\), \(M=1\). The choice of \(r\) follows [5, 21] with \(r=0.0075\). For \(\lambda\) and \(\mu\), an exponential schedule of the form \(\chi_{i}=e^{\zeta_{i}}\) is used, where \(\zeta_{i}\) is equispaced between \(\log\chi_{N}\) and \(\log\chi_{1}\) for \(\chi\in\{\lambda,\mu\}\). We choose an exponentially decreasing schedule for \(\lambda\) to prioritize a stronger data consistency influence at the beginning of the diffusion process, when noise dominates, and to lower its impact towards the end to mitigate the introduction of sub-sampling artifacts. Similarly, an increasing exponential schedule is chosen for \(\mu\) to ensure that the coils are not initially influenced by noise during reconstruction. The parameters were found by grid search and are shown in Tab. 1.
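The noise and step-size schedules described above reduce to a few lines; the sketch below mirrors the stated formulas, with the endpoint values for \(\lambda\) and \(\mu\) left as parameters since they come from Tab. 1.

```python
import numpy as np

def geometric_sigma_schedule(n_steps=1000, sigma_min=0.01, sigma_max=378.0):
    # sigma(t) = sigma_min * (sigma_max / sigma_min)^t on an equispaced grid in [0, 1]
    t = np.linspace(0.0, 1.0, n_steps)
    return sigma_min * (sigma_max / sigma_min) ** t

def exponential_schedule(chi_first, chi_last, n_steps=1000):
    # chi_i = exp(zeta_i) with zeta_i equispaced between log(chi_N) and log(chi_1)
    zeta = np.linspace(np.log(chi_last), np.log(chi_first), n_steps)
    return np.exp(zeta)
```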
### Experimental data
The training and test data is taken from the fastMRI dataset [28]. For training our model we used the RSS reconstructions of size \(320\times 320\) available in the dataset from both the coronal proton density (PD) and proton den
sity fat-suppressed (PDFS) contrasts. To avoid training with noise, we focus on the central 10 slices, resulting in \(973\times 10=9730\) images. For consistent intensity ranges during training, the intensities in each image were normalized to lie in \([0,1]\). Due to computational limitations, we restrict the test set to \(15\) PD and \(15\) PDFS randomly chosen central slices from the fastMRI validation set (see Appendix).
### Comparison and evaluation
We compare our approach to the joint non-linear inversion method presented in [26] using a Charbonnier-smoothed total variation (TV) [3] regularizer and the end-to-end variational network (VN) from [22]. Because the approach of [5] does not work with the full-size data, we could not include it in this comparison. The VN was trained on the CORPD training split of the fastMRI dataset with random 4-fold Cartesian sub-sampling patterns using \(8\,\%\) auto-calibration lines (ACL). We quantitatively compare the reconstructions using the peak signal-to-noise ratio (PSNR): since the reconstructed images vary strongly in magnitude, we define the PSNR as \(10\log_{10}\frac{\tilde{n}\,\|x^{\mathrm{ref}}\|_{\infty}^{2}}{\|x-x^{\mathrm{ref}}\|_{2}^{2}}\), where \(x^{\mathrm{ref}}\) denotes the reference reconstruction.
## 5 Conclusion
We propose a joint non-linear inversion algorithm for MRI that leverages the implicit prior defined by a diffusion model. Our approach is capable of reconstructing MRI images satisfactorily for different sub-sampling patterns, maintaining qualitative and quantitative performance. In contrast to the method presented in [5], our approach only requires one evaluation of the score function in each step of the reverse diffusion, irrespective of the number of coils, leading to faster reconstructions. Additionally, we propose a novel way of applying diffusion models to data of different size, where only a sub-region follows the data distribution implicit in the diffusion model. In addition to reconstructing the image, our approach also estimates the coil sensitivities. Experiments show that the estimation is as good as, or sometimes superior to, classical off-line estimation methods, but does not suffer from their drawbacks.
Expanding upon this research, one potential direction is to find a method that uses the full-size data directly, without padding based on the forward SDE. Further reducing the reconstruction time is also crucial; by applying a different sampling approach or incorporating the ideas of [4], the speed gap could be narrowed.
Figure 4: RSS null-space residual of our coil sensitivities versus ESPIRiT [23]. These are the results for the image in the first row of Fig. 2 (PD contrast) using \(8\%\) and \(4\%\) ACL.
Figure 3: Reference coil sensitivities computed from the fully-sampled data (top), ESPIRiT [23] estimation (middle), and the result of our joint estimation (bottom). These are the results for the image in the first row of Fig. 2 (PD contrast).
Figure 2: Reconstruction results for the different methods: \(1^{\text{st}}\) row: \(4\)-fold Cartesian sub-sampling using \(8\,\%\) ACL. \(2^{\text{nd}}\) row: Swapped phase encoding direction. \(3^{\text{rd}}\) row: \(4\)-fold Gaussian sub-sampling. \(4^{\text{th}}\) row: Radial sub-sampling with \(45\) spokes (acceleration factor \(\approx 11\)). The inlays show a zoom of the reconstruction (right) and the magnitude of its difference to the reference (left). |
2305.15732 | CLIP3Dstyler: Language Guided 3D Arbitrary Neural Style Transfer | In this paper, we propose a novel language-guided 3D arbitrary neural style
transfer method (CLIP3Dstyler). We aim at stylizing any 3D scene with an
arbitrary style from a text description, and synthesizing the novel stylized
view, which is more flexible than the image-conditioned style transfer.
Compared with the previous 2D method CLIPStyler, we are able to stylize a 3D
scene and generalize to novel scenes without re-train our model. A
straightforward solution is to combine previous image-conditioned 3D style
transfer and text-conditioned 2D style transfer methods. However, such
a solution cannot achieve our goal due to two main challenges. First, there is
no multi-modal model matching point clouds and language at different feature
scales (low-level, high-level). Second, we observe a style mixing issue when we
stylize the content with different style conditions from text prompts. To
address the first issue, we propose a 3D stylization framework to match the
point cloud features with text features in local and global views. For the
second issue, we propose an improved directional divergence loss to make
arbitrary text styles more distinguishable as a complement to our framework. We
conduct extensive experiments to show the effectiveness of our model on
text-guided 3D scene style transfer. | Ming Gao, YanWu Xu, Yang Zhao, Tingbo Hou, Chenkai Zhao, Mingming Gong | 2023-05-25T05:30:13Z | http://arxiv.org/abs/2305.15732v2 | # CLIP3Dstyler: Language Guided 3D Arbitrary Neural Style Transfer
###### Abstract
In this paper, we propose a novel language-guided 3D arbitrary neural style transfer method (CLIP3Dstyler). We aim at stylizing any 3D scene with an arbitrary style from a text description, and synthesizing the novel stylized view, which is more flexible than the image-conditioned style transfer. Compared with the previous 2D method CLIP-Styler, we are able to stylize a 3D scene and generalize to novel scenes without re-train our model. A straightforward solution is to combine previous image-conditioned 3D style transfer and text-conditioned 2D style transfer methods. However, such a solution cannot achieve our goal due to two main challenges. First, there is no multi-modal model matching point clouds and language at different feature scales (e.g. low-level, high-level). Second, we observe a style mixing issue when we stylize the content with different style conditions from text prompts. To address the first issue, we propose a 3D stylization framework to match the point cloud features with text features in local and global views. For the second issue, we propose an improved directional divergence loss to make arbitrary text styles more distinguishable as a complement to our framework. We conduct extensive experiments to show the effectiveness of our model on text-guided 3D scene style transfer.
## 1 Introduction
Vision-language models [11, 22, 19, 4, 30] have shown clear advantages on many current tasks, such as semantic segmentation, object detection, and action recognition. However, stylizing 3D scenes under vision-language guidance remains rarely explored.
In this paper, we propose a 3D stylization framework that stylizes the 3D scenes via a given text description of the style, which can be applied to stylize novel scenes without further training. The proposed method will have multiple potential applications in the rising VR, AR, MetaVerse, etc., with more flexible user-defined features.
The most closely related work is arbitrary 3D scene stylization from a given style image [5, 6, 18] and text-driven 2D image stylization [11, 4, 19]. Current 3D stylization work is built upon point cloud 3D representations [18, 6] or the recently popular Neural Radiance Field (NeRF) [5]. Even though NeRF has several advantages over point clouds, such as easy training from only multi-view inputs and smoother interpolation between views, NeRF-based stylization models can only be applied to a single scene [5], which makes them inapplicable for stylizing multiple scenes or generalizing to novel scenes, as shown in Figure 2. In CLIPNeRF [31], although the proposed text-conditioned 3D stylization method maintains stable 3D consistency before and after stylization, it produces only a barely noticeable style effect for the given text condition (Figure 9), so text-driven stylization remains under-explored for NeRF-based methods. Thus, in this paper, we build our 3D CLIPstyler upon the point cloud representation. One of the key components of arbitrary image stylization is to match the content and the style of the stylized images with the input content and style images. To match the 3D feature with the given 2D style feature, [18, 6] project the 3D point descriptors onto the 2D plane and turn the task into a 2D image feature matching problem with a pre-trained image encoder. For text-driven 2D image stylization, CLIPstyler [11] utilizes the image encoder and the text encoder of CLIP [22] to match image and text features in the same metric space.
In the under-explored text-driven 3D point cloud stylization task, we need to match the 3D point cloud descriptors with the text features. However, no such multi-modal model exists that matches point cloud features with text captions. The natural solution is to bridge the ideas in [6] and [11]: project the 3D point cloud back to the 2D image space and match the text-image features using CLIP. However, this straightforward solution faces two major challenges. First, the previous works refer to a style image to stylize the content images, where the content and the style are of the same modality. Thus, the multi-scale features extracted from different layers of a pre-trained VGG are all in the same feature space for the content, style, and stylized images, which is crucial for balancing faithful global style and local content details. However, in the pre-trained CLIP network, there is no such concept of a multi-scale feature for image-text matching; there is only a deep feature from the text encoder. In general, the deep style feature should transfer the deep content feature, but the point cloud descriptors are shallow-layer features, which causes blurred content and an unfaithful style effect when stylizing novel views. Second, we observe that directly adopting the style matching loss of CLIPstyler [11] for transferring multiple text styles leads to a mixing of style effects, as shown in Figure 3, which is undesirable for general arbitrary stylization. This style mixing effect is the key factor preventing the model from learning different text styles.
To address the above issues, we propose a more general framework for language-guided 3D point cloud stylization (CLIP3Dstyler). Our proposed CLIP3Dstyler enables the model to be trained with multiple scenes with arbitrary text styles and achieve content consistency and faithful style effect without mixing. To achieve content consistency and faithful style effect, we propose complimenting the local point cloud descriptor with a feature from the global view
Figure 2: In this figure, we show that our model can be generalized to the hold-out test scenes without retraining our model on them.
of the entire 3D scene to match the global text style feature.
Furthermore, we design a lightweight module to extract the global feature, and its additional overhead is negligible compared to our full model, which makes the approach efficient yet effective. To fix the style mixing problem when multiple styles are present, we propose an improved directional divergence loss that separates the style effects from one another, enhancing stylization significantly. Our model can also generalize to stylize novel scenes with arbitrary text styles without retraining, as shown in Figure 2. For more details, please refer to Section 3. Our contributions are summarized in three points: 1) we present a new framework to address the language-guided 3D style transfer learning task; 2) we rethink the text style as global information and generate associated global features from the point cloud to achieve better performance with a higher CLIP score; 3) we introduce a new directional divergence loss to solve the style mixing problem.
## 2 Related Work
**Language-driven model.** Recently, OpenAI introduced the pre-trained CLIP [22] model to bridge the text and image modalities. Through contrastive learning on 400 million image-text pairs, CLIP shows great potential for language-conditioned models and has inspired much exciting research on the interaction between text and images. For example, StyleCLIP [19] performs different levels of attribute manipulation on StyleGAN [9] from text information. DALL-E 2 [3] integrates CLIP and a diffusion model to generate high-resolution and vivid images. CLIPstyler [11] performs text-driven style transfer by minimizing the difference of directional distances between text and image features extracted by the CLIP encoders. Our task of text-driven 3D stylization extends 2D style transfer to 3D space, as it requires synthesizing novel views while keeping consistency between different views.
**Novel view synthesis.** Novel view synthesis from multiple images or a single image can be achieved by projection [16, 1] or volume-based differentiable rendering. In volume rendering [28, 17, 29, 14], each pixel of a view image emits a ray, and the pixel value is obtained by integrating color and opacity along that ray. A neural network is used as an implicit representation of the 3D space and must be evaluated thousands of times in the rendering process, which is the most severe bottleneck for rendering speed. Inspired by the traditional rendering pipeline, SynSin [33] proposed a projection-based differentiable rendering pipeline that softly projects points onto a plane with a z-buffer to generate the feature map at the associated viewpoint; a decoder is then attached to produce the rendered result. This approach is very efficient and capable of generalizing to arbitrary scenes. Hence, our model is based on projection-based differentiable rendering.
**3D stylization.** 3D content stylization [2, 5, 6, 18] has attracted growing interest in recent years. StyleScene [6] constructs a point cloud from a set of images and performs a linear style transformation on each point's associated feature. 3D Photo Stylization [18] generates the point cloud by depth estimation from a single image and applies a GCN to extract geometric features of the 3D scene; with Adaptive Attention Normalization (AdaAttN) [15], styles and contents are matched and combined by an attention mechanism, and novel views are synthesized by a subsequent back-projection operation. CLIPNeRF [31] uses the neural radiance field as the 3D representation and a text prompt as the style information to perform stylization, but it can only change the color of the scene. In contrast, our approach achieves a much more pronounced style transfer effect.
**Deep learning for point cloud processing.** Deep neural networks that take point clouds as input have been widely studied for classification [20, 21, 32, 34], semantic segmentation [36, 8, 25], and object detection [12, 35]. Voxel-based methods [37, 12, 35] rasterize the 3D space and obtain feature maps for subsequent operations, but as the resolution increases, memory consumption grows exponentially. Point-based methods [20, 21, 36] feed the point cloud directly into the network without complex pre-processing and are more memory friendly. Our approach uses a point-based method to extract features from the point cloud, which allows it to handle millions of points in a scene.
## 3 Method
Given a set of 3D scenes of point clouds \(\{P_{n}\}_{n=1}^{N}\) and text style descriptions \(\{S_{m}\}_{m=1}^{M}\), our goal is to stylize the given 3D point clouds with an arbitrary text style and synthesize novel stylized views of the 3D scenes. To briefly introduce our training framework, we decompose the proposed
Figure 3: Our proposed style divergence loss prevents the model from mixing styles
method into three components, as shown in Figure 4; the first component generates a point cloud of features by projecting image features onto the corresponding 3D coordinates (depth map). The second component comprises a lightweight global point cloud feature extractor and an off-the-shelf prompt feature extractor (e.g., the CLIP text encoder). In the last component, we generate the stylized point cloud features for a specific view, mixing the content features from that view of the point cloud with the text style feature and the complementary global point cloud feature. After all the steps, we project the stylized point cloud features into a 2D stylized view.
### Point Cloud Construction
Given a group of images from a specific scene, we can estimate the relative image poses via COLMAP[26]. After this, we can calculate the full-depth map with MVS[27] to construct a 3D point cloud for this scene. Similar to [6], rather than building a point cloud from the image-level features, we downsample it into a 3D point cloud of features given the 2D feature maps extracted by a pre-trained VGG model.
### Point Cloud Stylization
Our method inserts the style into the point cloud descriptors by changing the distribution of content features. Specifically, a linear transformation module [13] predicts a transformation matrix \(T\) by matching the covariance statistics of the content features and the text style features. Given the feature vectors of the point cloud and the text style embedding, the modulated point cloud is computed by the equation below.
\[f_{p}^{d}=T(f_{p}^{c}-\bar{f}_{p}^{c})+\bar{f}^{s} \tag{1}\]
where \(\bar{f}_{p}^{c}\) is the mean of the point cloud features \(f_{p}^{c}\), \(\bar{f}^{s}\) is the mean of the text style features \(f^{s}\), and \(f_{p}^{d}\) is the transferred point cloud feature. In previous work, the reference images provide multi-scale features extracted from different layers of a pre-trained VGG. The point cloud features are extracted from multi-view images by a three-layer pre-trained VGG encoder, so each point's feature represents the surrounding local receptive field in an image. The multi-scale features from reference images provide a matching representation scale for point cloud stylization.
However, the connection across the modalities of CLIP is limited to the last layer of the encoders, which only provides the deep feature of the text style. Therefore, there is a representation-scale mismatch between content and style. To resolve this mismatch, we extract a global feature from the point cloud via point-wise convolution and a max pooling operation to match the global text style feature. Then, we use the same transformation matrix \(T\) to transfer the global feature representation. The global feature is attached to each point's feature in preparation for view projection. The modulated global feature is calculated below.
\[f_{g}^{c}=MaxPool(conv(f^{c}))\] \[f_{g}^{d}=T(f_{g}^{c}-\bar{f}^{c})+\bar{f}^{s} \tag{2}\]
The transformation matrix is calculated from the text style covariance matrix \(T^{s}\) and the point cloud content covariance matrix \(T^{c}\). The text features are obtained by feeding the text encoder with different prompt sentences that integrate a single text style. Once the style and content features have been calculated, the subsequent convolution layers and a fully connected layer compute the covariance matrices \(T^{s}\) and \(T^{c}\). Finally, we obtain the transformation matrix \(T=T^{s}T^{c}\).
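The application of the predicted matrix \(T\) in Eqs. (1) and (2) is a simple affine map over feature channels. The sketch below assumes per-point features of shape (N, C) and omits the convolutional branches that predict \(T^{s}\) and \(T^{c}\); the tensor layout is illustrative, not the authors' exact implementation.

```python
import torch

def linear_style_transfer(f_points, f_global, T, f_style_mean):
    """Apply the predicted transformation T to per-point and global content features.

    f_points:     (N, C) per-point content features f^c
    f_global:     (C,)   max-pooled global content feature f_g^c
    T:            (C, C) transformation matrix from the covariance branches
    f_style_mean: (C,)   mean of the CLIP text-style features, i.e. \bar{f}^s
    """
    mean_c = f_points.mean(dim=0)                            # \bar{f}^c
    f_points_d = (f_points - mean_c) @ T.t() + f_style_mean  # Eq. (1)
    f_global_d = T @ (f_global - mean_c) + f_style_mean      # Eq. (2)
    return f_points_d, f_global_d
```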
### Stylized Novel View Synthesis
After the point cloud has been stylized, the next step is to generate the stylized images with a specified view. View synthesis can be achieved by projecting the point's features
Figure 4: **Method Overview. Our method starts from the point cloud reconstruction from a set of images and generate feature for each point. Point-wise and global feature representation stylized by given a text style. Integrating the per-points and global transferred features are projected to novel view and decoded to the RGB images.**
to an image plane with a z-buffer, camera pose and intrinsic parameters. Finally, the projected features are mapped to an image by a decoder.
**Projection.** Our projector follows Wiles et al. [33] and projects the point cloud features onto a 2D plane to generate a feature map. In the projection process, a z-buffer accumulates the K sorted closest points for each pixel. A point affects pixels within a radius of \(r\) on the image plane for better back-propagation and optimization of the decoder.
**Decoder.** The decoder maps the projected feature map to an RGB image. It is implemented as a convolutional network following the design of U-Net [24] and includes down-sampling and up-sampling operations.
### Loss Function
#### 3.4.1 Style Loss
To guide the content scene to follow the semantics of the text style, StyleGAN-NADA [4] proposed a directional loss that aligns the directional distance of CLIP features between the source and target text-image pairs. The directional loss is given by:
\[\Delta T=E_{T}(text_{s})-E_{T}(text_{c}),\] \[\Delta I=E_{I}(f(\{P_{n}^{s}\}_{n=1}^{N}))-E_{I}(image_{c}),\] \[\mathcal{L}_{dir}=1-\frac{\Delta I\cdot\Delta T}{|\Delta I|| \Delta T|} \tag{3}\]
where \(\{P_{n}^{s}\}_{n=1}^{N}\) is the stylized point cloud, \(E_{T}\) is the CLIP text encoder, \(E_{I}\) is the CLIP image encoder, \(text_{s}\) is the target text style, and \(text_{c}\) is "a Photo". \(f\) is an operation that projects the points to the view associated with the ground-truth image and renders the transferred image with a decoder.
To improve the local texture of transferred images, CLIPstyler [11] proposed a patch-based CLIP directional loss. Specifically, the model randomly crops several patches from the rendered \(image_{s}\). The size of the cropped patches is fixed, and a random perspective augmentation is applied to the N cropped patches \(image_{s}^{i}\). To down-weight patches on which the CLIP loss is easier to minimize, the model rejects patches whose \(l_{patch}^{i}\) value falls below the threshold \(\tau\). The PatchCLIP loss is defined as below:
\[image_{s}=f(\{P_{n}^{s}\}_{n=1}^{N}),\quad\Delta T=E_{T}(text_{s})-E_{T}(text_{c}),\] \[\Delta I=E_{I}(aug(image_{s}^{i}))-E_{I}(image_{c}),\] \[l_{patch}^{i}=1-\frac{\Delta I\cdot\Delta T}{|\Delta I||\Delta T|},\quad\mathcal{L}_{patch}=\frac{1}{N}\sum_{i}^{N}R(l_{patch}^{i},\tau),\] \[\text{where}\;R(s,\tau)=\left\{\begin{array}{ll}0,&\text{if }s\leq\tau\\ s,&\text{otherwise}\end{array}\right. \tag{4}\]
When the style matching loss of CLIPstyler [11] is directly adopted for multi-style transfer, different styles easily mix, because the PatchCLIP loss only constrains the directional distance from source to target and imposes no constraint between different styles. To solve this issue, we propose a directional divergence loss. In a batch, we randomly sample N pairs of data and minimize the \(\mathcal{L}_{dir}\) loss between pairs with different styles. The following equation describes this loss:
\[\Delta T=E_{T}(text_{s,i})-E_{T}(text_{s,j}),\] \[image_{s\_i}=f(\{P_{n}^{si}\}_{n=1}^{N}),image_{s\_j}=f(\{P_{n} ^{sj}\}_{n=1}^{N})\] \[\Delta I=E_{I}(image_{s\_i})-E_{I}(image_{s\_j}),\] \[\mathcal{L}_{dir}=1-\frac{\Delta I\cdot\Delta T}{|\Delta I|| \Delta T|}, \tag{5}\]
where \(text_{s,i}\) and \(text_{s,j}\) are different text styles from the dataset, and \(\{P_{n}^{si}\}_{n=1}^{N}\) and \(\{P_{n}^{sj}\}_{n=1}^{N}\) are point clouds transferred with those different styles. Further, we project different stylized views within a batch to help the model converge faster and more robustly. If we use the equation above to
Figure 5: **Loss function.** When one model supports multiple scenes and styles, constraining only the source-to-target direction causes styles to mix. Adding a constraint between styles helps the model distinguish different styles effectively.
measure the similarity of the text-image directional distance between different styles, the content disparity between the different views is also included. By explicitly computing the similarity between the content of the two views, we remove this noise from the loss function:
\[f_{c}^{i}=E_{I}(image_{c\_i}),f_{c}^{j}=E_{I}(image_{c\_j})\]
\[\mathcal{L}_{cd}=1-\frac{f_{c}^{i}\cdot f_{c}^{j}}{|f_{c}^{i}||f_{c}^{j}|}, \tag{6}\]
where \(image_{c\_i}\) and \(image_{c\_j}\) are the content image pair previously used for point cloud construction. Altogether, our style loss is \(\mathcal{L}_{s}=\mathcal{L}_{patch}+\mathcal{L}_{dir}-\mathcal{L}_{cd}\).
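The following sketch shows one possible implementation of the proposed directional divergence loss (Eq. 5) together with the content-disparity correction (Eq. 6), again operating on pre-computed CLIP embeddings; all names are illustrative.

```python
# Directional divergence between two styles (Eq. 5) minus the content
# disparity of the two underlying views (Eq. 6).
import torch.nn.functional as F

def style_divergence_loss(img_feat_style_i, img_feat_style_j,
                          txt_feat_style_i, txt_feat_style_j,
                          img_feat_content_i, img_feat_content_j):
    # direction between two different styles, in text and image space (Eq. 5)
    delta_t = txt_feat_style_i - txt_feat_style_j
    delta_i = img_feat_style_i - img_feat_style_j
    l_dir = 1.0 - F.cosine_similarity(delta_i, delta_t, dim=-1)

    # content disparity between the two unstylized views (Eq. 6), subtracted
    # so that differences caused by the viewpoint are not counted as style error
    l_cd = 1.0 - F.cosine_similarity(img_feat_content_i, img_feat_content_j, dim=-1)
    return l_dir - l_cd
```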
#### 3.4.2 Content Loss
The preservation of content information is ensured by a VGG perceptual loss \(\mathcal{L}_{feat}\) between the synthesized image and the ground-truth image. A pixel-level RGB L1 loss \(\mathcal{L}_{rgb}\) is added to stabilize training. The content loss is therefore \(\mathcal{L}_{c}=\lambda_{feat}\mathcal{L}_{feat}+\lambda_{rgb}\mathcal{L}_{rgb}\).
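A hedged sketch of such a content loss is given below. The specific VGG16 layer cut and the use of L1 distances are assumptions made for illustration; the default weights follow the values reported in the implementation details.

```python
# Content loss: VGG-based perceptual term plus a pixel-level RGB L1 term.
import torch.nn as nn
import torchvision

class ContentLoss(nn.Module):
    def __init__(self, lambda_feat=1.0, lambda_rgb=5e-3):
        super().__init__()
        weights = torchvision.models.VGG16_Weights.IMAGENET1K_V1
        vgg = torchvision.models.vgg16(weights=weights).features[:16].eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg = vgg
        self.lambda_feat, self.lambda_rgb = lambda_feat, lambda_rgb

    def forward(self, pred, target):          # (B, 3, H, W) images in [0, 1]
        l_feat = nn.functional.l1_loss(self.vgg(pred), self.vgg(target))
        l_rgb = nn.functional.l1_loss(pred, target)
        return self.lambda_feat * l_feat + self.lambda_rgb * l_rgb
```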
## 4 Experiments
To align our experimental settings with the previous method [23], we conduct our experiments on the Tanks and Temples dataset [10]. We split its scenes into training and testing sets: the model is trained on sixteen scenes and its generalization is tested on the four held-out scenes M60, Truck, Train, and Playground. The text prompts used for stylization are kept consistent between the training and test sets.
### Qualitative results
To compare the stylized novel views produced by different methods, we stylize the whole 3D scene with each method and sample stylized views from the same camera poses for comparison.
Because StyleScene is conditioned on style images rather than on text prompts as in our model, we search for the style images that best match our text prompts so that StyleScene can generate comparable stylized images. For CLIPstyler [11], we replace the original AdaIN [7] with the more advanced linear style transfer [13] for a fair comparison, which is kept consistent for StyleScene and our method. Because CLIPstyler [11] only supports a single scene and cannot prevent the style-mixing issue mentioned in Figure 3, we need to train a separate model for every combination of scene and text style, i.e., one model for stylizing a single scene with a specific text style. In contrast, our method supports multiple scenes and multiple styles and generalizes to novel scenes with a single trained model. Figure 6 shows the qualitative comparison of our novel-view stylization against the 2D text stylization method and StyleScene [6]. With geometry information augmenting our 3D stylization, our model generates more faithful stylization results and better consistency across views, and is more stable than the 2D method.
### Quantitative results
**User Study.** In this qualitative experiment, we create an anonymous poll to compare the different methods. We generate 21 different stylized scenes and convert each of them to a GIF following the order of camera poses. In the poll, users are asked to choose the preferable stylized scenes with respect to a more faithful style effect and better view consistency. In total, 60 participants successfully completed the questionnaire.
As shown in Fig. 8, users judge that our approach achieves better consistency and conforms more closely to the target style. **Stylization Quality.** To quantify stylization quality, we calculate the cosine similarity between the CLIP embedding of the output stylized images and that of the associated target style text, as defined by CLIPstyler [11]. To better measure the quality of local texture stylization, we randomly crop 64 patches before computing the image CLIP embeddings.
The comparison with other 2D methods is shown in Table 1. With the advantage of 3D geometry, we achieve better stylization results with higher CLIP scores than the 2D
| **Dataset** | StyleScene | SVS\(\rightarrow\)CLIP+LT | **Ours** |
| --- | --- | --- | --- |
| Truck | 0.2371 | 0.2651 | 0.2849 |
| M60 | 0.2362 | 0.2625 | 0.2874 |
| Playground | 0.2360 | 0.2659 | 0.2822 |
| Train | 0.2344 | 0.2600 | 0.2859 |
| Average | 0.2359 | 0.2633 | 0.2851 |

Table 1: **CLIP Score.** We compare the stylized images of our method with those of other 2D methods to determine whether our approach better matches the semantic text style.
| **Dataset** | StyleScene | SVS\(\rightarrow\)CLIP+LT | **Ours** |
| --- | --- | --- | --- |
| Truck | 0.0835 | 0.0978 | 0.0827 |
| M60 | 0.0939 | 0.1037 | 0.0644 |
| Playground | 0.0762 | 0.0933 | 0.0441 |
| Train | 0.0827 | 0.1130 | 0.0818 |
| Average | 0.0841 | 0.1019 | 0.0683 |

Table 2: **Short-range consistency.** We use the (t-1)-th and t-th frames of a video and measure the color variance by RMSE.
| **Dataset** | StyleScene | SVS\(\rightarrow\)CLIP+LT | **Ours** |
| --- | --- | --- | --- |
| Truck | 0.1065 | 0.1109 | 0.1007 |
| M60 | 0.1150 | 0.1239 | 0.0754 |
| Playground | 0.1152 | 0.1091 | 0.0631 |
| Train | 0.1095 | 0.1285 | 0.1066 |
| Average | 0.1116 | 0.1181 | 0.0864 |

Table 3: **Long-range consistency.** We use the (t-7)-th and t-th frames of a video and measure the color variance by RMSE.
method and maintain the content of the scene. **View Consistency.** To measure the inconsistency between a pair of stylized views, we reproject each pixel into 3D space and project it back onto the other image plane using the camera intrinsics and extrinsics of the view pair. This lets us measure the color change between pairs of pixels in different views that are projected from the same point, and we compute the RMSE of these color differences as a consistency metric. Similar to Huang _et al_. [6], we calculate the RMSE between adjacent video frames to quantify short-range consistency and between the (t-7)-th and t-th frames to quantify long-range consistency. Finally, we average the results over 21 different styles for all test scenes and report the mean values in Table 2 and Table 3. For both short-range and long-range consistency, the proposed method reaches lower RMSE values, which means the color variation of pixels projected from the same point onto different views is smaller than with the 2D method.
### Ablation studies
**Comparison with CLIPNeRF.** To compare against an alternative solution for 3D scene stylization, we run a straightforward comparison between our method and the recent CLIPNeRF [31] on the same scene and the same text style prompts (Figure 9). CLIPNeRF appears to learn only a color shift for the given style, which compares unfavorably with our results.
**Effect of global feature transformation.** We compare stylization results with and without the global operation and calculate the CLIP score to measure whether the global information helps the model better match the text-described style. As Figure 7 shows, with the global information we obtain much higher contrast and fuller texture details on the ground and on object surfaces. Without the transferred global feature, point-wise style transformation alone renders the truck and the tank obscure and indistinct. We also report the CLIP score of the rendered results with global and non-global style transformation in Table 4.
## 5 Conclusions
In this paper, we introduce text-conditioned stylization of 3D scenes, generating novel views from a set of images according to a semantic text style. The model effectively distinguishes different text styles thanks to the new directional distance constraints. Integrating the transferred global feature into the projected feature map further improves performance on fine details. We demonstrate the efficacy of our method through extensive qualitative and quantitative studies.
| **Dataset** | global | non-global |
| --- | --- | --- |
| Truck | 0.2849 | 0.2808 |
| M60 | 0.2874 | 0.2751 |
| Playground | 0.2822 | 0.2678 |
| Train | 0.2859 | 0.2810 |
| Average | 0.2851 | 0.2761 |

Table 4: **CLIP score of global and non-global.** With the global feature transformation, the stylized views achieve a higher CLIP score and better match the text-described style.
Figure 8: User study. We ask users to compare our method with other approaches in terms of consistency and stylization quality.
Figure 7: **Non-global vs. global feature comparison.** We compare stylization results with and without the global feature on the Tanks and Temples dataset [10]. The top row shows results with the global information; the bottom row shows results without it.
Figure 9: Comparison with CLIPNeRF [31]. Note that we do not train our model on the NeRF LLFF dataset; we only run inference on these scenes and styles, whereas CLIPNeRF needs to be optimized for each scene and style.
## Supplementary Material
## 6 Implementation Details
**Point Cloud Global Feature Extractor.** A lightweight module extracts the global feature. It is implemented as several 1×1 convolutions that increase the channel dimension of the point cloud features from 256 to 1024. A subsequent max pooling operation aggregates all point features into a unified global representation. To reuse the same transformation matrix \(T\) calculated from the point cloud features and the style embedding, we compress the global feature back to 256 channels with 1×1 convolutions.
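A minimal PyTorch rendering of this extractor is shown below. The 256→1024→256 channel sizes follow the text; the intermediate 512-channel layer and everything else are assumptions for illustration.

```python
# Global feature extractor: 1x1 convolutions lift per-point features to 1024
# channels, max pooling aggregates them, and a final 1x1 convolution
# compresses the result back to 256 channels.
import torch
import torch.nn as nn

class GlobalFeatureExtractor(nn.Module):
    def __init__(self, in_ch=256, mid_ch=1024, out_ch=256):
        super().__init__()
        self.expand = nn.Sequential(
            nn.Conv1d(in_ch, 512, 1), nn.ReLU(inplace=True),
            nn.Conv1d(512, mid_ch, 1), nn.ReLU(inplace=True),
        )
        self.compress = nn.Conv1d(mid_ch, out_ch, 1)

    def forward(self, point_feats):                    # (B, 256, N)
        x = self.expand(point_feats)                   # (B, 1024, N)
        x = torch.max(x, dim=2, keepdim=True).values   # global max pooling
        return self.compress(x)                        # (B, 256, 1)
```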
**Neural Decoder.** The decoder follows the design of U-Net [24]. It first receives the 256-channel projected feature map, and the encoder part of the U-Net down-samples it through a series of convolution and average-pooling layers, which discourages the model from simply memorizing the content of a scene. Subsequent transposed convolution layers up-sample the feature maps to a three-channel image. All layers in the U-Net use a 3×3 kernel with a ReLU non-linearity, except for the skip convolutions.
**Training.** We first train the decoder without inserting the style into the point cloud features, using a batch size of \(4\) and a learning rate of \(0.0001\). The weight \(\lambda_{rgb}\) of the pixel reconstruction loss \(\mathcal{L}_{rgb}\) is 0.005, and the weight \(\lambda_{feat}\) of the feature perception loss \(\mathcal{L}_{feat}\) is 1.0. We then train the style transformation module with a batch size of 4 and a learning rate of 0.0001. To help the model converge faster, we select different views of a scene within a batch. We also include the \(\mathcal{L}_{tv}\) and \(\mathcal{L}_{gs}\) losses from [11] to alleviate side artifacts and maintain the global content of an image. For the hyperparameters, we set \(\lambda_{rgb},\ \lambda_{feat},\ \lambda_{s}\), and \(\lambda_{tv}\) to \(5\times 10^{-3},1,1.5\times 10,1.3\times 10^{-6}\), respectively. We use the Adam optimizer with \(\beta_{1}=0.9\) and \(\beta_{2}=0.9999\) for training all networks.
**CLIP Style Implementation Details.** The input to the CLIP model is an image at 224 \(\times\) 224 resolution, so we resize images before feeding them into CLIP's image encoder. Following [11], we randomly crop 64 patches of size 96 and apply random perspective augmentation to each patch. For threshold rejection, we set \(\tau\) to 0.7. To measure the directional divergence between different styles, we randomly sample 80% of all pairs of different styles to reduce the computational cost.
**Style Transformation Module.** Following the linear transformation module of [13], we compress the point cloud features from 256 to 64 dimensions with an MLP layer. Reducing the feature dimension accelerates the computation of the point cloud covariance matrix and of the transformation itself. For the text style, we insert the text prompt into 79 template sentences to obtain representations of the style description from different perspectives, and we compress the resulting features from 512 to 64 dimensions to calculate the covariance matrix.
After the point cloud style has been transferred, we uncompress the features from 64 back to 256 channels. The global representation of the point cloud is also transferred by the same transformation matrix \(T\). The transferred global feature is attached to each point feature, and the combined feature is compressed back to 256 dimensions.
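For illustration, the text-style features described above could be produced roughly as follows. The two example templates, the ViT-B/32 backbone, and the use of the openai `clip` package are assumptions; the paper uses 79 templates and its own prompt set.

```python
# Build text-style features by inserting a style prompt into template
# sentences and encoding them with a CLIP text encoder.
import torch
import clip

TEMPLATES = [
    "a photo in the style of {}.",
    "a rendering in the style of {}.",
]  # placeholder templates; the paper uses 79 of them

@torch.no_grad()
def encode_style_text(style_prompt, device="cpu"):
    model, _ = clip.load("ViT-B/32", device=device)
    tokens = clip.tokenize([t.format(style_prompt) for t in TEMPLATES]).to(device)
    feats = model.encode_text(tokens).float()          # (num_templates, 512)
    feats = feats / feats.norm(dim=-1, keepdim=True)
    return feats, feats.mean(dim=0)                    # per-template features and their mean
```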
**2D Method Experiment.** Since we do not have ground-truth images for the synthesized views, we first generate the associated views without style, as shown in Figure 10, and then use the 2D method to stylize all of them. In its training stage, each 2D model supports only a single style and image; we therefore train each model with the multiple images of a single scene and a single style.
## 7 Limitation and Future Direction
First, our model relies on structure from motion (SfM) to generate the point cloud of a scene, which requires many images with overlapping views. If the provided images do not overlap or are too few, the SfM algorithm cannot generate a point cloud that represents the scene well. Second, we extract the point cloud features from the 2D images with a pre-trained VGG encoder; no geometry information is injected into these features, which causes consistency problems across views. Third, we train the model with a batch size of 4, which means we have to cache four different stylized point clouds and thus consume a lot of memory. With better memory optimization we could enlarge the batch size and make the model converge faster; for example, we could pre-compute the projected features and record the point indices, so that only the projected stylized features need to be cached, reducing GPU memory consumption.
## 8 Additional Results
Figure 10: Synthesized views without style
Figure 11: additional results
Figure 12: additional results
Figure 13: additional results
Figure 14: additional results |
2303.09095 | SLOPER4D: A Scene-Aware Dataset for Global 4D Human Pose Estimation in
Urban Environments | We present SLOPER4D, a novel scene-aware dataset collected in large urban
environments to facilitate the research of global human pose estimation (GHPE)
with human-scene interaction in the wild. Employing a head-mounted device
integrated with a LiDAR and camera, we record 12 human subjects' activities
over 10 diverse urban scenes from an egocentric view. Frame-wise annotations
for 2D key points, 3D pose parameters, and global translations are provided,
together with reconstructed scene point clouds. To obtain accurate 3D ground
truth in such large dynamic scenes, we propose a joint optimization method to
fit local SMPL meshes to the scene and fine-tune the camera calibration during
dynamic motions frame by frame, resulting in plausible and scene-natural 3D
human poses. Eventually, SLOPER4D consists of 15 sequences of human motions,
each of which has a trajectory length of more than 200 meters (up to 1,300
meters) and covers an area of more than 2,000 $m^2$ (up to 13,000 $m^2$),
including more than 100K LiDAR frames, 300k video frames, and 500K IMU-based
motion frames. With SLOPER4D, we provide a detailed and thorough analysis of
two critical tasks, including camera-based 3D HPE and LiDAR-based 3D HPE in
urban environments, and benchmark a new task, GHPE. The in-depth analysis
demonstrates SLOPER4D poses significant challenges to existing methods and
produces great research opportunities. The dataset and code are released at
\url{http://www.lidarhumanmotion.net/sloper4d/} | Yudi Dai, Yitai Lin, Xiping Lin, Chenglu Wen, Lan Xu, Hongwei Yi, Siqi Shen, Yuexin Ma, Cheng Wang | 2023-03-16T05:54:15Z | http://arxiv.org/abs/2303.09095v2 | # SLOPER4D: A Scene-Aware Dataset for Global 4D Human Pose Estimation in Urban Environments
###### Abstract
We present SLOPER4D, a novel scene-aware dataset collected in large urban environments to facilitate the research of global human pose estimation (GHPE) with human-scene interaction in the wild. Employing a head-mounted device integrated with a LiDAR and camera, we record 12 human subjects' activities over 10 diverse urban scenes from an egocentric view. Frame-wise annotations for 2D key points, 3D pose parameters, and global translations are provided, together with reconstructed scene point clouds. To obtain accurate 3D ground truth in such large dynamic scenes, we propose a joint optimization method to fit local SMPL meshes to the scene and fine-tune the camera calibration during dynamic motions frame by frame, resulting in plausible and scene-natural 3D human poses. Eventually, SLOPER4D consists of 15 sequences of human motions, each of which has a trajectory length of more than 200 meters (up to 1,300 meters) and covers an area of more than 2,000 \(m^{2}\) (up to 13,000 \(m^{2}\)), including more than 100K LiDAR frames, 300k video frames, and 500K IMU-based motion frames. With SLOPER4D, we provide a detailed and thorough analysis of two critical tasks, including camera-based 3D HPE and LiDAR-based 3D HPE in urban environments, and benchmark a new task, GHPE. The in-depth analysis demonstrates SLOPER4D poses significant challenges to existing methods and produces great research opportunities. The dataset and code are released at [http://www.lidarhumanmotion.net/sloperf4d/](http://www.lidarhumanmotion.net/sloperf4d/).
## 1 Introduction
Urban-level human motion capture is attracting more and more attention. It aims to acquire consecutive, fine-grained human pose representations, such as 3D skeletons and parametric mesh models, with accurate global locations in the physical world. It is essential for human action recognition, social-behavioral analysis, and scene perception, and further benefits many downstream applications, including Augmented/Virtual Reality, simulation, autonomous driving, smart cities, and sociology. However, capturing extremely large-scale dynamic scenes and annotating detailed 3D representations of humans with diverse poses is not trivial.
Over the past decades, a large number of datasets and benchmarks have been proposed and have greatly promoted research in 3D human pose estimation (HPE). They can be divided into two main categories according to the capture environment. The first class usually leverages marker-based systems [16, 33, 45], cameras [14, 59, 60], or RGB-D sensors [13, 64] to capture local human poses in constrained environments. However, optical systems are sensitive to lighting and lack depth information, making them unstable in outdoor scenes and unable to provide reliable global translations, while RGB-D sensors have limited range and cannot work outdoors. The second class [39, 49] takes advantage of body-mounted IMUs to capture occlusion-free 3D poses in unconstrained environments. However, IMUs suffer from severe drift during long-term capture, resulting in misalignments with the human body. Some methods therefore exploit additional sensors, such as an RGB camera [17], an RGB-D camera [46, 57, 67], or a LiDAR [27], to alleviate this problem and achieve clear improvements. However, they all focus on HPE without considering scene constraints, which limits them for reconstructing human-scene integrated digital cities and natural human-scene interactions.
To capture human pose and related static scenes simultaneously, some studies use wearable IMUs and body-mounted camera [12] or LiDAR [5] to register the human in large real scenarios and they are promising for capturing human-involved real-world scenes. However, human pose and scene are decoupled in these works due to the ego view, where auxiliary visual sensors are used for collecting the scene data while IMUs are utilized for obtaining the 3D pose. Different from them, we propose a novel setting for human-scene capture with wearable IMUs and global-view LiDAR and camera, which can provide multi-modal data for more accurate 3D HPE.
In this paper, we propose a huge scene-aware dataset for sequential human pose estimation in urban environments, named SLOPER4D. To our knowledge, it is the first urban-level 3D HPE dataset with multi-modal capture data, including calibrated and synchronized IMU measurements, LiDAR point clouds, and images for each subject. Moreover, the dataset provides rich annotations, including 3D poses, SMPL [32] models and locations in the world coordinate system, 2D poses and bounding boxes in the image coordinate system, and reconstructed 3D scene mesh. In particular, we propose a joint optimization method for obtaining accurate and natural human motion representations by utilizing multi-sensor complementation and scene constraints, which also benefit global localization and camera calibration in the dynamic acquisition process. Furthermore, SLOPER4D consists of over 15 sequences in 10 scenes, including library, commercial street, coastal run-way, football field, landscape garden, etc., with 2k\(\sim\)13k \(m^{2}\) area size and \(200\sim 1,300m\) trajectory length for each sequence. By providing multi-modal capture data and diverse human-scene-related annotations, SLOPER4D opens a new door to benchmark urban-level HPE.
We conduct extensive experiments to show the superiority of our joint optimization approach for acquiring high-quality 3D pose annotations. Additionally, based on our proposed new dataset, we benchmark two critical tasks, camera-based 3D HPE and LiDAR-based 3D HPE, as well as provide a benchmark for GHPE.
Our contributions are summarized as follows:
* We propose the first large-scale urban-level human pose dataset with multi-modal capture data and rich human-scene annotations.
* We propose an effective joint optimization method for acquiring accurate human motions in both local and global by integrating LiDAR SLAM results, IMU poses, and scene constraints.
* We benchmark two HPE tasks as well as a GHPE task on SLOPER4D, demonstrating its potential to promote urban-level 3D HPE research.
## 2 Related Work
### 3D Human Motion Datasets
Many datasets have been proposed with different sensors and setups to facilitate the research on 3D human pose estimation. The H3.6M [16] is a large-size dataset providing synchronized video with optical-based MoCap in studio environments. To perform markerless capture in different indoor scenes, PROX [13] uses an RGB-D sensor to scan a single person. EgoBody [64] uses multiple RGB-D sensors to pre-scan the room and scan the interacting persons. LiDARHuman26M [27] can capture long-range human motions with static LiDAR and IMUs. However, they are limited to static environments, human activities, and interactions. 3DPW [49] is the first dataset providing 3D annotations in the wild which uses a single hand-held RGB camera to optimize human pose from IMUs for a certain period of frames. It doesn't provide accurate global translation and 3D scenes. HPS [12] reconstructs the human body pose using IMUs and self-localizes it with a head-mounted camera in large 3D scenes, but it heavily relies on the pre-built map. HSC4D [5] removes the reliance on the pre-built map and achieves global human motion capture in large scenes. However, the camera in HPS and the LiDAR in HSC4D are only used to perceive the environment rather than capture
human data. With the scene-aware dataset we proposed for global human pose estimation, we can benchmark the 3D HPE in the wild with the LiDAR or camera modalities.
### Human Localization and Scene Mapping
Human self-localization aims at estimating the 6-DoF of the human subject in global coordinates. The image-based methods [21, 37, 52] regress locations directly from a single image with a pre-built map. The scene-specific property makes them hard to generalize to unseen scenes. LiDAR is widely used in Simultaneous Localization and Mapping (SLAM) [4, 28, 41, 62] due to its robustness and low drift. To address the drift problem and improve robustness in dynamic motions, RGB cameras [40, 44, 63], IMU [10, 36, 42], or both [6, 43, 68], have been integrated with the mapping task. Most attention has been paid to autonomous driving [9][23] or robotics from the third-person view and they usually do not focus on humans. To achieve self-localization, LiDAR is designed as backpacked [20, 31, 54] and handheld [2]. To efficiently capture human motions and reconstruct urban scenes, we utilize LiDAR with a built-in IMU (different from the IMUs for motion capture) and propose a pipeline for constructing multi-modal data. This approach provides accurate information on human motions at both local and global levels, as well as enables mapping in large outdoor environments.
### Global 3D Human Pose Estimation
Most studies recover human meshes in camera coordinate [26, 66] or root-relative poses [24, 18, 25]. Recovering global human motions in unconstrained scenes is a challenging topic in computer vision and has gained more and more research interest in recent years. IMU sensors are widely used in commercial [38, 39] and research activities [51, 48, 15], and are attached to body limbs to capture human motions in studio-environments. But it suffers severe drift in the wild. Some methods rely on additional RGB [49, 34, 8, 50] or pre-scan maps [12], or LiDAR [5] to complement the IMUs in large-scale scenes. Based on human-scene interaction, some work proposed scene-aware solutions using static cameras [14, 60] to obtain accurate and scene-natural human motions. 4DCapture [30] uses a dynamic head-mounted camera to self-localize and reconstruct the scene with the Struct From Motion method. However, it often fails when the illumination changes in the wild. MOVER [60] uses a single camera to optimize the 3D objects in a static scene, resulting in better 3D scene reconstruction and human motions. GLAMR [61] uses global trajectory predictions to constrain both human motions and dynamic camera poses, achieving state-of-the-art results on in-the-wild videos. However, it lacks a benchmark for quantitatively comparing different HPE methods on a global level. To deal with this limitation, we propose SLOPER4D, the first large-scale urban-level human pose dataset with rich 2D/3D annotations.
## 3 SLOPER4D Dataset
SLOPER4D collects scene-aware 4D human data in urban scenes with our body-worn capturing system. In this section, we first introduce the data acquisition (Sec. 3.1), then detail the data construction and annotation process (Sec. 3.2), next introduce the global optimization-based method used to obtain high-quality 3D and 2D data (Sec. 3.3), and finally compare our dataset with existing datasets and highlight our novelty (Sec. 3.4).
### Data Acquisition
**Hardware setup.** As shown in Fig. 1, during the data collection procedure, the scanning person follows the performer (the IMU wearer) and scans him with a LiDAR and a camera mounted on a helmet. Fig. 2 shows the hardware details of our capturing system. Regarding the sensor module, the 128-beam Ouster OS1 LiDAR and the DJI Action 2 wide-angle camera are rigidly installed on the helmet. To capture raw human motions, we use Noitom's inertial MoCap product, PN Studio, attaching 17 wireless IMUs to the IMU wearer's body limbs, torso, and head. The camera's field of view (FOV) is 116\({}^{\circ}\)\(\times\)84\({}^{\circ}\) and the LiDAR's FOV is 360\({}^{\circ}\)\(\times\)45\({}^{\circ}\). To keep the performer within the LiDAR's FOV as much as possible, we tilt the LiDAR down by around 45\({}^{\circ}\).
| Dataset | In the wild | Global | 3D Scene | Point cloud | Video | IMU | # Scene | # Area size (\(m^{2}\)) | # Subject | # Frame |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| H3.6M [16] | ✗ | ✓ | ✗ | ✗ | ✓ | ✗ | - | 12 | 11 | 3.6M |
| 3DPW [49] | ✓ | ✗ | ✗ | ✗ | ✓ | ✓ | - | \(<\) 300 | 7 | 51k |
| PROX [13] | ✗ | ✓ | ✓ | ✗ | ✓ | ✗ | 12 | \(<\) 30 | 20 | 20k |
| HPS [12] | ✓ | ✓ | ✓ | ✗ | ✓ * | ✓ | 8 | 300 \(\sim\) 1k | 7 | 7k |
| HSC4D [5] | ✓ | ✓ | ✓ | ✓ * | ✗ | ✓ | 5 | 1k \(\sim\) 5k | 2 | 10k |
| LH26M [27] | ✗ | ✗ | ✗ | ✓ | ✓ | ✓ | - | \(<\) 200 | 13 | 184k |
| EgoBody [64] | ✗ | ✓ | ✓ | ✗ | ✓ | ✗ | 15 | \(<\) 50 | 20 | 153k |
| SLOPER4D | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | 10 | 2k \(\sim\) 13k | 12 | 100k |

Table 1: **Comparisons with existing datasets.** “Global” denotes human poses with global translation. The “area size” is estimated from the published data. The * indicates that the data modality is only used for human self-localization rather than for human-related data.
Regarding the storage module, the scanning person's backpack holds a wireless IMU data receiver, a 24V battery, and an Intel NUC11. The NUC11 mini-computer stores the IMU data from the wireless receiver and the point clouds from the LiDAR in real time. Videos are stored locally in the camera. The LiDAR and the NUC11 are both powered by the battery.
**Coordinate systems.** We define three coordinate systems: 1) the IMU coordinate system \(\{I\}\): the origin is at the LiDAR wearer's spine base at the starting time, and the \(X/Y/Z\) axes point to the left/upward/forward of the human; 2) the LiDAR coordinate system \(\{L\}\): the origin is at the center of the LiDAR, and the \(X/Y/Z\) axes point to the right/forward/upward of the LiDAR; 3) the global/world coordinate system \(\{W\}\): the origin is on the floor at the LiDAR wearer's starting position, and the \(X/Y/Z\) axes point to the right/forward/upward of the LiDAR wearer.
**Calibration.** Following the setup in [53], we use a chessboard to calibrate the camera intrinsics \(K_{in}\) and introduce a terrestrial laser scanner (TLS) to obtain accurate camera extrinsics \(K_{ex}\). Because the LiDAR point cloud is too sparse, we manually choose corresponding points on the 2D image and on the TLS map registered to the point cloud, and then solve the perspective-n-point (PnP) problem to obtain \(K_{ex}\). For every 3D scene, the calibration \(R_{WL}\), which transforms \(\{L\}\) to \(\{W\}\), is manually set so that the ground's \(z\)-axis points upward and the height is zero at the starting position. Using singular value decomposition, the calibration \(R_{WI}\), which transforms \(\{I\}\) to \(\{W\}\), is computed from the similarity between the IMU trajectory and the LiDAR trajectory on the XY plane.
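A hedged OpenCV sketch of the PnP step is shown below: given manually picked 2D-3D correspondences and the chessboard-calibrated intrinsics, it recovers a 4×4 extrinsic matrix. Function and variable names are illustrative, not the authors' code.

```python
# Recover camera extrinsics from manually picked 2D-3D correspondences.
import cv2
import numpy as np

def solve_extrinsics(pts_3d, pts_2d, K_in, dist_coeffs=None):
    """
    pts_3d : (N, 3) points picked in the TLS-registered map
    pts_2d : (N, 2) corresponding image pixels
    K_in   : (3, 3) camera intrinsic matrix
    returns: (4, 4) extrinsic matrix mapping map coordinates to the camera
    """
    dist = np.zeros(5) if dist_coeffs is None else dist_coeffs
    ok, rvec, tvec = cv2.solvePnP(pts_3d.astype(np.float32),
                                  pts_2d.astype(np.float32),
                                  K_in, dist)
    assert ok, "PnP failed"
    R, _ = cv2.Rodrigues(rvec)
    K_ex = np.eye(4)
    K_ex[:3, :3], K_ex[:3, 3] = R, tvec.ravel()
    return K_ex
```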
**Synchronization.** The data from the multiple sensors are synchronized through peak detection. Before and after the capture, the subject is asked to perform jumps. The time of the peak height is then automatically detected in the IMU data, and the peak times in the LiDAR and camera data are manually identified. Finally, all modalities are aligned by these peaks and downsampled to match the LiDAR frame rate of 20 Hz.
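The jump-based synchronization can be sketched as follows, using scipy only for the peak detection; the IMU rate and all variable names are assumptions made for illustration.

```python
# Detect the jump apex in the IMU pelvis height and re-express each stream's
# timestamps relative to its own peak so all modalities share a common clock.
import numpy as np
from scipy.signal import find_peaks

def imu_peak_time(pelvis_height, imu_rate_hz=100.0):
    peaks, _ = find_peaks(pelvis_height, prominence=0.1)   # jump apex
    return peaks[0] / imu_rate_hz                          # seconds since stream start

def align_timestamps(timestamps, peak_time):
    return np.asarray(timestamps) - peak_time
```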
### Data Processing
**2D pose detection.** We use Detectron [56] to detect and DeepSORT [55] to track humans in videos. However, tracking often fails when the IMU wearer enters/exits the field of view or is occluded. To solve this problem, we manually assign the same ID to the tracked person throughout a video sequence. For 3D point cloud reference, we project the points onto the images according to \(K_{ex}\). However, because of the jitter caused by dynamic motions, the camera and the LiDAR are not perfectly rigidly connected; thus, \(K_{ex}\) is further optimized in Sec. 3.3.
**LiDAR-inertial localization and mapping.** LiDAR-only methods often fail at mapping because of dynamic head rotations and crowded urban environments. Incorporating an IMU can compensate for the motion distortion in a LiDAR scan \(p^{L}\) and provide an accurate initial pose. Using a LiDAR with an integrated IMU, and combining Kalman-filter-based LiDAR-inertial odometry [58] with factor-graph-based loop closure optimization [7, 22], we estimate the ego-motion of the LiDAR and build a globally consistent 3D scene map from the n frames of point clouds \(P^{L}_{1:n}=\{p^{L}_{1},\ldots,p^{L}_{n}\}\). To provide accurate scene constraints for Sec. 3.3, we use VDB-Fusion [47] to generate a clean scene mesh \(\mathbf{S}\) that excludes moving objects.
**IMU pose estimation.** We use SMPL [32] to represent the human body motion \(M^{I}=\varPhi(\theta^{I},t^{I},\beta)\in\mathbb{R}^{6890}\) in the IMU coordinate space \(\{I\}\), where the pose parameters \(\Theta^{I}_{1:n}=\{\theta^{I}_{1},\ldots,\theta^{I}_{n}\}\in\mathbb{R}^{72\times n}\) are composed of the pelvis joint's orientation \(R^{I,m}_{1:n}=\{r^{I}_{1},\ldots,r^{I}_{n}\}\in\mathbb{R}^{3\times n}\) and the other 23 joints' rotations relative to their parent joints. \(T^{I}_{1:n}=\{t^{I}_{1},\ldots,t^{I}_{n}\}\in\mathbb{R}^{3\times n}\) is the pelvis joint's translation, and \(\beta\in\mathbb{R}^{10}\) is a constant vector representing a person's body shape. \(T\) and \(\Theta\) are estimated by the commercial MoCap product, while \(\beta\) is obtained by using IPNet [3] to fit a scanned model captured with an iPhone 13 Pro Max. Since the IMUs are accurate locally but drift globally, \(T^{I}\) is only used for a raw calibration from \(\{I\}\) to \(\{W\}\), and the initial global motion \(M=M^{W}=R^{WI}M^{I}\) is further optimized.
### Data Optimization
To obtain precise and scene-plausible human motion \(M\) in the world coordinate system, we use scene geometry \(\mathbf{S}\) with several physic-based terms to perform joint optimizations to find the optimal motion \(M^{*}\) that minimize \(\mathcal{L}\). In a k-frame segment, the optimization is written as:
Figure 2: **Our capturing system’s hardware details.** The sensor module includes a LiDAR, a camera, and 17 body-attached IMU sensors. The storage module consists of a NUC11, a receiver, and a battery in the backpack.
\[M^{*}_{1:k}=\arg\min_{M_{1:k}}\mathcal{L}(M_{1:k},\mathbf{S}), \tag{1}\] \[\mathcal{L}=\mathcal{L}_{smt}+\lambda_{sc}\mathcal{L}_{sc}+\lambda_{pri}\mathcal{L}_{pri}+\lambda_{m2p}\mathcal{L}_{m2p},\] \[\mathcal{L}_{smt}=\lambda_{trans}\mathcal{L}_{trans}+\lambda_{ori}\mathcal{L}_{ori}+\lambda_{jts}\mathcal{L}_{jts},\]
where \(\mathcal{L}_{smt}\) is a smoothness term consisting of a translation loss \(\mathcal{L}_{trans}\), an orientation loss \(\mathcal{L}_{ori}\), and a joints loss \(\mathcal{L}_{jts}\); \(\mathcal{L}_{sc}\) is a scene-aware contact term; \(\mathcal{L}_{pri}\) is a pose prior term; and \(\mathcal{L}_{m2p}\) is a mesh-to-points term. \(\lambda_{sc}\), \(\lambda_{pri}\), \(\lambda_{trans}\), \(\lambda_{ori}\), \(\lambda_{jts}\), and \(\lambda_{m2p}\) are the loss coefficients. \(\mathcal{L}\) is minimized with a gradient descent algorithm.
**Smoothness term.** The objective of this term is to minimize the acceleration of the pelvis joint, the accelerations of the other 23 pelvis-relative joints, which is denoted as \(J_{1:k}=\{J_{1},\ldots,J_{k}\}\in\mathbb{R}^{69\times k}\), and the angular velocity of all joints to smooth human movements.
**Scene-aware contact term.** We track the movement of the foot vertices in the IMU motions \(M^{I}_{k}\) and label a foot as stable if its velocity is less than 0.1 \(m/s\). The Chamfer Distance (CD) between a stable foot and its closest scene surface is then used as the scene contact loss \(\mathcal{L}_{sc}\).
**Pose prior term.** The poses estimated by the IMUs are roughly accurate but tend to show misalignments at the ends of the limbs due to accumulated error. Hence, \(\mathcal{L}_{pri}\) is used to keep \(\Theta\) close to its initial value at the beginning of the optimization.
**Mesh-to-points term.** The point cloud \(p^{L}\) from the moving LiDAR provides a strong depth prior. However, although the SMPL mesh is watertight and complete, the captured human points are sparse and partial, which makes registration methods such as ICP less effective than expected. To address this issue, we propose a viewpoint-based mesh-to-points loss \(\mathcal{L}_{m2p}\). First, we remove the SMPL mesh faces that are hidden from the LiDAR's viewpoint. Then we sample points, denoted as \({P^{\prime}}_{1:k}=\{{p^{\prime}}_{1},\ldots,{p^{\prime}}_{k}\}\), from the remaining faces at the LiDAR's resolution. The loss is defined as the Chamfer Distance from \({P^{\prime}}_{1:k}\) to \(P_{1:k}\).
All loss terms functions are detailed as follows:
\[\mathcal{L}_{trans}=\frac{1}{k-2}\sum_{i=1}^{k-2}\|t_{i+2}-2t_{i+1}+t_{i}\|_{2}^{2}, \tag{2}\] \[\mathcal{L}_{jts}=\frac{1}{k-2}\sum_{i=1}^{k-2}\|J_{i+2}-2J_{i+1}+J_{i}\|_{2}^{2},\] \[\mathcal{L}_{ori}=\frac{1}{k-1}\sum_{i=1}^{k-1}\|r_{i+1}-r_{i}\|_{2}^{2},\] \[\mathcal{L}_{pri}=\frac{1}{k}\sum_{i=1}^{k}\|\theta_{i}-R^{WI}\theta_{i}^{I}\|_{2}^{2},\] \[\mathcal{L}_{m2p}=\frac{1}{k}\sum_{i=1}^{k}\left(\frac{1}{|{p^{\prime}}_{i}|}\sum_{\hat{p^{\prime}}\in{p^{\prime}}_{i}}\min_{\hat{p}\in p_{i}}\|\hat{p}-\hat{p^{\prime}}\|_{2}^{2}\right).\]
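As an illustration, the smoothness terms above can be written in a few lines of PyTorch using finite differences. The tensor shapes, the use of axis-angle differences as a stand-in for angular velocity, and the equal default weights are assumptions rather than the authors' configuration.

```python
# Smoothness terms: second differences of the translation and joint positions,
# first differences of the orientations, averaged over the k-frame segment.
import torch

def smoothness_loss(trans, joints, rots, w_trans=1.0, w_jts=1.0, w_ori=1.0):
    """
    trans : (k, 3) pelvis translations
    joints: (k, 23, 3) pelvis-relative joint positions
    rots  : (k, 3) pelvis orientations in axis-angle (simplification)
    """
    acc_t = trans[2:] - 2 * trans[1:-1] + trans[:-2]
    acc_j = joints[2:] - 2 * joints[1:-1] + joints[:-2]
    vel_r = rots[1:] - rots[:-1]
    l_trans = acc_t.pow(2).sum(-1).mean()
    l_jts = acc_j.pow(2).sum(-1).mean()
    l_ori = vel_r.pow(2).sum(-1).mean()
    return w_trans * l_trans + w_jts * l_jts + w_ori * l_ori
```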
Figure 3: **The pipeline of the dataset construction.** The capturing system simultaneously collects multimodal data, including LiDAR, camera, and IMU data. Then they are further processed. A joint optimization approach with multiple loss terms is then employed to optimize motion locally and globally. As a result, we obtain rich 2D/3D annotations with accurate global motion and scene information.
**Camera extrinsic optimization.** We optimize the extrinsic parameters \(K_{ex}\) for every frame by minimizing \(\mathcal{L}_{cam}\), which comprises a keypoint loss \(\mathcal{L}_{kpt}\) and a bounding box loss \(\mathcal{L}_{box}\). \(\mathcal{L}_{kpt}\) measures the mean squared error (MSE) between the 2D human keypoints \(kpt^{2d}\) in the image and the 3D keypoints \(kpt^{3d}\) of the optimized SMPL model projected onto the image with \(K_{ex}\); \(\mathcal{L}_{box}\) computes the Intersection over Union (IoU) loss between the 2D human bounding box \(box^{2d}\) in the image and the 3D human bounding box \(box^{3d}\) projected onto the image with \(K_{ex}\).
\[\begin{split} K_{ex}^{*}=&\arg\min_{K_{ex}\in \mathcal{S}E(3)}\mathcal{L}_{cam}(K_{ex}),\\ \mathcal{L}_{cam}=&\lambda_{kpt}\mathcal{L}_{kpt}( kpt^{3d},kpt^{2d},K_{ex})+\\ &\lambda_{box}\mathcal{L}_{box}(box^{3d},box^{2d},K_{ex}),\end{split} \tag{3}\]
where \(\lambda_{kpt}\) and \(\lambda_{box}\) are constant coefficients.
### Dataset Comparison
SLOPER4D is the first large-scale urban-level human pose dataset with multi-modal capture data and rich human-scene annotations for GHPE. The head-mounted LiDAR and camera are utilized to simultaneously record the IMU-wearer's activities, including running outside, playing football, visiting, reading, climbing/descending stairs, discussing, borrowing a book, greeting, etc.
The dataset consists of 15 sequences from 12 human subjects in 10 locations. There are a total of 100k LiDAR frames, 300k video frames, and 500k IMU-based motion frames captured over a total distance of more than 8 \(km\) and an area of up to 13,000 \(m^{2}\). The results of our dataset are shown in Fig. 4. For the captured person, we provide the segmentation of 3D points from LiDAR frames and 2D bounding boxes from images synchronized with LiDAR. We also provide 3D pose annotations with SMPL format. Compared to other datasets Tab. 1, it is worth mentioning that SLOPER4D provides the 3D scene reconstructions and accurate global translation annotations, allowing us to quantitatively study the scene-aware global pose estimation from both LiDAR and monocular videos. In addition to the dense 3D point cloud map reconstructed from the LiDAR, SLOPER4D provides the high-precision colorful point cloud map from a Terrestrial Laser Scanner (Trimble TX5) for better visualization and map comparison.
## 4 Experiments
In this section, we first evaluate the SLOPER4D dataset qualitatively, indicating that it is solid enough to benchmark new tasks. Then we perform a cross-dataset evaluation to further assess our dataset's novelty on two tasks: LiDAR-based 3D HPE and camera-based 3D HPE. Finally, we introduce the new benchmark, GHPE, and perform experiments on GLAMR.
Figure 4: The diverse scenes and activities of our dataset. The images in the left column are our reconstructed scenes with human trajectories overlaid on them. The right images are the SMPL meshes overlaid on images / point clouds / scenes.
More quantitative evaluations and experiments are provided in the supplementary material.
**Training/Test splits.** We split our data into training and test sets for LiDAR/camera-based pose estimation. The training set of SLOPER4D contains eleven sequences with a total of 80k LiDAR frames and the corresponding RGB frames. The test set contains four sequences with around 20k LiDAR frames and the corresponding RGB frames.
For global pose estimation, we select three challenging scenarios for evaluation. The first one is a single-person football training scenario with highly dynamic motions. The second one is running along a coastal runway. The third one is a garden tour involving daily motions.
**Evaluation metrics.** For 3D HPE, we employ the Mean Per Joint Position Error (MPJPE) and the Procrustes-aligned MPJPE (PA-MPJPE). MPJPE is the mean Euclidean distance between the ground-truth and predicted joints. PA-MPJPE first aligns the predicted joints to the ground-truth joints with a rigid transformation obtained by Procrustes analysis and then computes MPJPE. For global trajectory evaluation, we use the Absolute Trajectory Error (ATE) and the Relative Pose Error (RPE, where pose refers to orientation) from visual SLAM systems [11]: ATE is well suited for measuring global localization, whereas RPE measures the system's drift, for example the drift per second. The Global MPJPE (G-MPJPE) is the MPJPE computed with the SMPL model placed in global coordinates.
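For reference, the two pose metrics can be sketched in NumPy as below, with PA-MPJPE computed after a Procrustes similarity fit; this is a generic formulation, not the dataset's official evaluation code.

```python
# MPJPE and PA-MPJPE on (J, 3) arrays of joint positions in the same unit.
import numpy as np

def similarity_align(pred, gt):
    """Procrustes fit: map pred onto gt by the best scale/rotation/translation."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    Xp, Xg = pred - mu_p, gt - mu_g
    U, S, Vt = np.linalg.svd(Xp.T @ Xg)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))        # guard against reflections
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / (Xp ** 2).sum()
    return scale * Xp @ R + mu_g

def mpjpe(pred, gt):
    return np.linalg.norm(pred - gt, axis=-1).mean()

def pa_mpjpe(pred, gt):
    return mpjpe(similarity_align(pred, gt), gt)
```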
**Qualitative evaluation.** For the qualitative evaluation of human poses, we project the SMPL model onto the image and visualize the 3D human with the corresponding LiDAR points in 3D space (Fig. 4). The results demonstrate that the 3D human mesh aligns well with the 3D environments and the 2D images. As a large-scale urban-level human pose dataset, SLOPER4D provides multi-modal capture data and rich human-scene annotations, as well as diverse and challenging human activities in large scenes. To evaluate our optimization method, we compare it with the results from ICP. As shown in Fig. 5, the scene-aware constraint and the mesh-to-points constraint effectively correct the local poses, the global translation, and even the orientation error from the IMUs. To show the effectiveness of the camera extrinsic optimization, we report the results in Fig. 6: the 2D projection error is visibly reduced after optimization.
### Cross-Dataset Evaluation
We evaluate root-relative 3D human pose estimation with different modalities, namely LiDAR and camera. 3DPW is the in-the-wild human motion dataset most closely related to ours; with VIBE, we cross-evaluate the camera modality of our dataset against 3DPW. LiDARHuman26M is a LiDAR-based dataset for long-range human pose estimation, against which we cross-evaluate our dataset's LiDAR modality. Tab. 2(a) shows the evaluation results for the LiDAR-based 3D pose estimation task and Tab. 2(b) the results for camera-based 3D pose estimation. Taking the results in Tab. 2(a) as an example, when the model is trained on another dataset only, the errors are the largest.
Table 2: Cross-dataset evaluation results with different modalities. LH26M in (a) refers to the LiDARHuman26M dataset from LiDARcap. VIBE is pre-trained on AMASS [33], MPI-INF-3DHP [35], InstaVariety [19], PoseTrack [1], and PennAction [65]. HybrIK is pre-trained on H36M, MPI-INF-3DHP, and MSCOCO [29].
Figure 5: Comparison between our optimization results (red SMPL) and the ICP results (green SMPL). It shows the red SMPL aligns better with the cyan human points than the green SMPL.
Figure 6: Comparison before (**left**) and after (**right**) extrinsic optimization, with the point clouds (upper) and the SMPL model (lower) projected onto the image.
However, the error is further reduced by around 60% when training on LiDARHuman26M and our dataset together. This suggests that a domain gap exists between different LiDAR sensors and that the two datasets complement each other. The results of the other task show that the pre-trained VIBE model generalizes better to 3DPW than to our dataset, but the error on 3DPW increases after fine-tuning on our dataset while the error on our dataset decreases. This suggests that the pre-trained model complements SLOPER4D better than the other way around. Comparing the results across modalities, the error on our dataset for the method trained on the mixed LiDAR point cloud datasets is 13% lower than for the method trained on images.
### Benchmark on Global Human Pose Estimation
In this subsection, we benchmark GLAMR [61] on the GHPE task of SLOPER4D. GLAMR is a global occlusion-aware method for 3D human mesh recovery from dynamic monocular cameras. To handle the scale ambiguity of the monocular camera, we fit a transform (rotation, translation, and scale) from the estimated trajectory to the ground-truth trajectory and apply it to the estimated trajectory before error computation.
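A minimal sketch of this trajectory-evaluation protocol is given below; `align_fn` stands for any similarity alignment (for example, the `similarity_align` helper sketched above for PA-MPJPE, applied to (T, 3) trajectories) and is an illustrative assumption rather than part of the original pipeline.

```python
# Align the estimated trajectory to the ground truth, then report ATE as the
# RMSE of the residual position errors.
import numpy as np

def ate_rmse(pred_traj, gt_traj, align_fn):
    aligned = align_fn(np.asarray(pred_traj), np.asarray(gt_traj))
    err = np.linalg.norm(aligned - np.asarray(gt_traj), axis=-1)
    return float(np.sqrt((err ** 2).mean()))
```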
Tab. 3 reports the global trajectory errors with ATE and RPE, Tab. 4 reports the global human pose metrics, and Fig. 7 shows the ATE mapped onto the GT trajectories. Comparing the three scenes, _Football_ and _Garden001_ have significantly lower RPE. GLAMR performs worst on the running scene, with an ATE RMSE of 29.48 m; this scene has the largest area and the highest human pace. GLAMR achieves a low PA-MPJPE of 86.3 mm on _Garden001_, a sequence with daily walking and visiting motions. This is the first time GHPE has been tested on such large outdoor scenes. GLAMR achieves relatively better results on daily human motions while performing worse on highly dynamic activities in the wild. Interestingly, the estimated trajectory tendency is quite similar to the reference, even for dynamic football training motions, which shows that GLAMR can serve as a baseline. We expect more research to focus on GHPE in real-world interactive scenarios, and these experiments show SLOPER4D's potential to promote urban-level GHPE research.
## 5 Discussions
**Limitations.** Firstly, SLOPER4D is limited to single-person capture even though it perceives multi-person data. Secondly, the camera and LiDAR are not synchronized online, which causes tedious offline work if the camera loses frames, even with a low time offset (\(<\)50 \(ms\)). Finally, the texture information from the camera is not fully exploited for the color and texture reconstruction of scenes and humans. In future work, we will propose an online synchronization algorithm and extend our system to multi-person capture.
**Conclusions.** We propose the first large-scale urban-level human pose dataset with multi-modal capture data and rich human-scene annotations. Based on our proposed new dataset, we benchmark two critical tasks, camera-based 3D HPE and LiDAR-based 3D HPE. SLOPER4D also benchmarks the GHPE task. The results demonstrate the potential of SLOPER4D in boosting the development of these areas.
Our work contributes to extending motion capture to large global scenes based on the current methods and datasets. We hope this work will foster future creation and interaction in urban environments.
Acknowledgements. We thank Zhiyong Wang for helping us incorporate FAST-LIO2 into our mapping system. This work was supported in part by the National Natural Science Foundation of China (No.62171393, No.62206173), the Fundamental Research Funds for the Central Universities (No.20720220064), the open fund of PDL (WDZC20215250113, 2022-KJWPDL-12), and FuXiaQuan National Independent Innovation Demonstration Zone Collaborative Innovation Platform (No.3502ZCQXT2021003). We also acknowledge support from Shanghai Frontiers Science Center of Human-centered Artificial Intelligence (ShanghaiAI).
| Scene | Scale | MPJPE \(\downarrow\) | PA-MPJPE \(\downarrow\) | G-MPJPE \(\downarrow\) |
| --- | --- | --- | --- | --- |
| Football | 11.83 | 264.6 | 118.5 | 5268.7 |
| Running001 | 56.07 | 652.1 | 119.6 | 32329.3 |
| Garden001 | 6.55 | **139.4** | **86.3** | **4407.0** |

Table 4: GHPE results from GLAMR. Unit: \(mm\).
| Scene | Metric | RMSE \(\downarrow\) | \(mean\) | \(std.\) | \(max\) |
| --- | --- | --- | --- | --- | --- |
| Football | ATE | 3.26 | 2.85 | 1.58 | 11.83 |
| Running001 | ATE | 29.48 | 25.55 | 14.72 | 56.07 |
| Garden001 | ATE | **2.86** | 2.57 | 1.26 | 6.55 |
| Football | RPE | 0.08 | 0.06 | 0.05 | 1.34 |
| Running001 | RPE | 0.40 | 0.35 | 0.19 | 1.04 |
| Garden001 | RPE | **0.06** | 0.04 | 0.04 | 0.71 |

Table 3: Global trajectory evaluation of GLAMR. Unit: \(m\).
Figure 7: The ATE error mapped on the GT trajectory. The color represents the error according to the color bar. |
2306.03400 | G-CAME: Gaussian-Class Activation Mapping Explainer for Object Detectors | Nowadays, deep neural networks for object detection in images are very
prevalent. However, due to the complexity of these networks, users find it hard
to understand why these objects are detected by models. We proposed Gaussian
Class Activation Mapping Explainer (G-CAME), which generates a saliency map as
the explanation for object detection models. G-CAME can be considered a
CAM-based method that uses the activation maps of selected layers combined with
the Gaussian kernel to highlight the important regions in the image for the
predicted box. Compared with other Region-based methods, G-CAME can transcend
time constraints as it takes a very short time to explain an object. We also
evaluated our method qualitatively and quantitatively with YOLOX on the MS-COCO
2017 dataset and guided to apply G-CAME into the two-stage Faster-RCNN model. | Quoc Khanh Nguyen, Truong Thanh Hung Nguyen, Vo Thanh Khang Nguyen, Van Binh Truong, Quoc Hung Cao | 2023-06-06T04:30:18Z | http://arxiv.org/abs/2306.03400v1 | # G-CAME: Gaussian-Class Activation Mapping Explainer for Object Detectors
###### Abstract
Nowadays, deep neural networks for object detection in images are very prevalent. However, due to the complexity of these networks, users find it hard to understand why these objects are detected by models. We propose the Gaussian Class Activation Mapping Explainer (G-CAME), which generates a saliency map as the explanation for object detection models. G-CAME can be considered a CAM-based method that uses the activation maps of selected layers combined with a Gaussian kernel to highlight the important regions in the image for the predicted box. Compared with other Region-based methods, G-CAME overcomes their time constraints, as it takes a very short time to explain an object. We also evaluate our method qualitatively and quantitatively with YOLOX [7] on the MS-COCO 2017 dataset [12] and provide a guide for applying G-CAME to the two-stage Faster-RCNN [20] model.
## 1 Introduction
In object detection, deep neural networks (DNNs) [8] have improved significantly with the adoption of convolutional neural networks. However, the deeper the network, the more complex and opaque it becomes to understand, debug, or improve. To help humans better understand a model's decisions, several eXplainable Artificial Intelligence (XAI) methods have been introduced that use saliency maps to highlight the important regions of input images.
One simple and common way to explain an object detector is to ignore the model architecture and consider only its input and output. This approach determines the importance of each region in the input image based on the change in the model's output. For example, D-RISE [16], an improvement of RISE [15], estimates each region's effect by creating thousands of perturbed images, feeding them into the model, and obtaining a score for each perturbed mask. Another method is SODEx [22], an extension of LIME [21], which uses the same overall technique to explain object detectors; in contrast to D-RISE, SODEx assigns a score to each super-pixel of the input image. Although the results of both SODEx and D-RISE are compelling, generating a large number of perturbations slows these methods down considerably.
Other approaches, such as CAM [28] and GradCAM [23], use the activation maps of a specific layer in the model's architecture as the main component of the explanation. These methods are faster than the region-based methods mentioned above, but they still include meaningless information when the feature maps are not related to the target object [27]. Such methods can give satisfactory results for classification. However, they cannot be applied directly to object detection because they highlight every region containing the target class and fail to focus on one specific instance.
In this paper, we propose the _Gaussian Class Activation Mapping Explainer_ (G-CAME), which can explain the classification and localization of the target objects. Our method improves previous CAM-based XAI methods since it is possibly applied to object detectors. By adding the Gaussian kernel as the weight for each pixel in the feature map, G-CAME's final saliency map can explain each specific object.
Our contributions can be summarized as follows:
1. We propose a novel CAM-based method, G-CAME, to explain object detectors as a saliency map. Our method can give an explanation in a reasonably short time, which overcomes the existing methods' time constraints like D-RISE [16] and SODEx [22].
2. We propose a simple guide in applying G-CAME to explain two types of commonly used models: YOLOX [7] (one-stage detector) and Faster-RCNN [20] (two-stage detector).
3. We qualitatively and quantitatively evaluate our method with D-RISE and prove that our method can give a less noise and more accurate saliency map than D-RISE.
## 2 Related Work
### Object Detection
Object detection is one of the core tasks in computer vision (CV). Object detection models fall into two types: one-stage and two-stage models. One-stage models detect objects directly over a dense sampling of locations; examples include the YOLO series [18], SSD [13], and RetinaNet [11]. Two-stage models detect objects in two phases: in the first phase, the region proposal stage, the model selects a set of Regions of Interest (ROIs) from the extracted features; in the second stage, the model classifies each proposed ROI. Some of the most popular two-stage detection models are the R-CNN family [8], FPN [10], and R-FCN [4].
### Explainable AI
In CV, several XAI methods are used to analyze deep CNN models for classification, whereas for object detection the number of applicable XAI methods is limited. In general, there are two ways of analyzing a model's prediction: _Region-based saliency methods_ build the saliency map from input regions, while _CAM-based saliency methods_ use feature maps to create the saliency map for the input.
#### 2.2.1 Region-based saliency methods
The first type of XAI, region-based saliency methods, uses masks that keep specific regions of the input image and measures each region's effect on the output by passing the masked input through the model and computing a weight for that region. For classification, LIME [21] uses several random masks and weights them with a simple, interpretable model such as Linear Regression or Lasso Regression. An improvement of LIME is RISE [15], in which the authors first generate thousands of masks, apply them to the input, and then linearly combine them with their corresponding weight scores to create the final saliency map. Several such methods have been adapted to object detection. Surrogate Object Detection Explainer (SODEx) [22] employs LIME to explain object detectors: instead of scoring each region for a target class as LIME does, the authors propose a metric that scores each region for the target bounding box. Detector Randomized Input Sampling for Explanation (D-RISE) [16] was proposed as an improvement over RISE; it defines a different metric to compute the weight of each random mask and then linearly combines the masks to explain the target bounding box. All of these methods are intuitive because users do not need to understand the model's architecture. They also share another property: the explanation is sensitive to the choice of hyperparameters, which is a weakness, since one object can receive multiple different explanations; a clear and satisfactory explanation therefore requires careful hyperparameter selection. Another weakness of these methods is that producing an explanation takes a long time.
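To make the perturbation-and-weighting idea concrete, the sketch below generates random coarse masks, scores each masked image by the best IoU-weighted class probability against the target box, and accumulates mask-weighted scores. It is a simplified illustration under stated assumptions, not the authors' implementation: the `detector` callable and its output format (a list of box/class-probability pairs) are placeholders, and the coarse masks are upsampled with nearest-neighbor interpolation rather than the smoothed masks used in RISE/D-RISE.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-8)

def perturbation_saliency(image, detector, target_box, target_cls,
                          n_masks=4000, grid=16, p_keep=0.5, seed=0):
    """D-RISE-style saliency sketch for one detection.

    `detector(img)` is assumed to return a list of (box, class_probs) pairs.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    saliency = np.zeros((h, w))
    for _ in range(n_masks):
        coarse = (rng.random((grid, grid)) < p_keep).astype(float)
        # Nearest-neighbor upsampling of the coarse mask to image size.
        mask = np.kron(coarse, np.ones((h // grid + 1, w // grid + 1)))[:h, :w]
        masked = image * mask[..., None]
        # Score: best IoU-weighted class probability over all detections.
        score = max((iou(box, target_box) * probs[target_cls]
                     for box, probs in detector(masked)), default=0.0)
        saliency += score * mask
    return saliency / n_masks
```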
#### 2.2.2 CAM-based methods
The other approach in XAI is the _CAM-based_ methods. In this approach, we must access and explicitly understand the model's architecture. Class Activation Mapping (CAM) [28] was the first method to combine the weighted activation maps of one or more selected convolution layers to form the explanation. Subsequently, GradCAM [23], GradCAM++ [3], and XGradCAM [6] extended CAM to obtain saliency maps with finer-grained detail. These methods use the partial derivatives of the target class score with respect to the feature maps of the selected layers to obtain a weight for each activation map. CAM-based methods are usually faster than region-based methods because they only require one or a few of the model's layers to form the explanation and need only a single forward and backward pass. However, their saliency maps usually contain meaningless features and depend entirely on the feature maps. Moreover, previous CAM-based XAI methods target the classification problem, and none has yet been proposed for object detection.
In this paper, we propose G-CAME, a CAM-based method that can explain both one-stage and two-stage object detection models.
## 3 Methods
For a given image \(I\) of size \(h\) by \(w\), an object detector \(f\) produces a prediction \(d\) consisting of a bounding box and a predicted class. We aim to provide a saliency map \(S\) that explains why the model made that prediction. The saliency map \(S\) has the same size as the input \(I\), and each value \(S_{(i,j)}\) indicates how strongly pixel \((i,j)\) of \(I\) influences \(f\) to produce the prediction \(d\).
Figure 1: G-CAME can highlight the regions that affect the target prediction, shown on sample images from the MS-COCO 2017 dataset.
We propose a new method that produces this saliency map in a white-box manner. Our method is inspired by GradCAM [23], which uses the class activation mapping technique to generate an explanation of the model's prediction. The main idea of our method is to combine a normal distribution with the CAM-based method to measure how a region of the input image affects the predicted output. Fig. 2 shows an overview of our method.
XAI methods designed for classification models cannot be applied directly to object detection models because their outputs differ. In the classification task, the model gives a single prediction indicating the image's label, whereas in object detection the model outputs multiple boxes with corresponding labels and object probabilities.
Most object detectors, like YOLO [18] and R-CNN [8], usually produce \(N\) predicted bounding boxes in the format:
\[d_{i}=(x_{1}^{i},y_{1}^{i},x_{2}^{i},y_{2}^{i},p_{obj}^{i},p_{1}^{i},\dots,p_{ C}^{i}) \tag{1}\]
The prediction is encoded as a vector \(d_{i}\) that consists of:
* Bounding box information: \((x_{1}^{i},y_{1}^{i},x_{2}^{i},y_{2}^{i})\) denotes the top-left and bottom-right corners of the predicted box.
* Objectness probability score: \(p_{obj}^{i}\in[0,1]\) denotes the probability that an object occurs in the predicted box.
* Class score information: \((p_{1}^{i},\dots,p_{C}^{i})\) denotes the probabilities of the \(C\) classes in the predicted box.
In most object detectors, such as Faster-RCNN [20], YOLOv3 [19], and YOLOX [7], the anchor-box technique is widely used to predict bounding boxes. G-CAME utilizes this technique to find and estimate the region related to the predicted box. Our method can be divided into three phases (Fig. 2): 1) Object Locating, 2) Weighting Feature Map, and 3) Masking Target Region.
### Object Locating with Gradient
The anchor-box technique is used in most detection models, such as Faster-RCNN [20], YOLOX [7], TOOD [5], and PAFNet [25], to predict bounding boxes. In the final feature map, each pixel predicts \(N\) bounding boxes (usually \(N=3\)), or a single bounding box in the _anchor-free_ setting. To find the pixel representing the box we aim to explain, we take the derivative of the target box score with respect to the final feature map and obtain the location map \(G_{k}^{l(c)}\) as follows:
\[G_{k}^{l(c)}=\frac{\partial S^{c}}{\partial A_{k}^{l}} \tag{2}\]
where \(G_{k}^{l(c)}\) denotes the gradient map of layer \(l\) for feature map \(k\), and \(\frac{\partial S^{c}}{\partial A_{k}^{l}}\) is the derivative of the target class score \(S^{c}\) with respect to the feature map \(A_{k}\). In the regression branch of most one-stage object detectors, a \(1\times 1\) convolution is used to predict the bounding box, so in the backward pass the gradient map \(G\) has a value at only one pixel. In two-stage object detectors, the regression and classification tasks lie in two separate branches, so we provide a simple guide for applying G-CAME to two-stage models in Sec. 4.5.
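As a rough PyTorch sketch of this locating step (not the actual YOLOX or Faster-RCNN code), one can capture the chosen layer's feature maps with a forward hook, backpropagate the target score, and read off the pixel carrying the strongest gradient; the `score_fn` that selects the scalar score of the target box is a placeholder assumption.

```python
import torch

def locate_target(model, layer, image, score_fn):
    """Gradient map of a target score w.r.t. one layer's feature maps,
    plus the (row, col) of the strongest-gradient pixel.

    `layer` is an nn.Module inside `model`; `score_fn(outputs)` must return
    the scalar score S^c of the box/class being explained (placeholder).
    """
    feats = {}

    def hook(_module, _inputs, output):
        feats["A"] = output          # assumed shape (1, K, h, w)
        output.retain_grad()

    handle = layer.register_forward_hook(hook)
    outputs = model(image)           # forward pass
    model.zero_grad()
    score_fn(outputs).backward()     # d S^c / d A_k^l
    handle.remove()

    grad_map = feats["A"].grad[0]    # (K, h, w), the maps G_k^{l(c)}
    energy = grad_map.abs().sum(dim=0)
    center = divmod(int(energy.argmax()), energy.shape[1])   # (row, col)
    return grad_map.detach(), center
```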
### Weighting feature map via Gradient-based method
We adopt a gradient-based method, as in GradCAM [23] for classification, to obtain the weight of each feature map. The GradCAM method can be represented as:
\[L_{GradCAM}^{c}=ReLU\bigg{(}\sum_{k}\alpha_{k}^{c}A_{k}^{c}\bigg{)} \tag{3}\]
\[\alpha_{k}^{c}=\frac{1}{Z}\sum_{i}\sum_{j}\frac{\partial S^{c}}{\partial A_{ij}^{k}} \tag{4}\]
where \(\alpha_{k}^{c}\) is the weight of feature map \(k\) of the target layer \(l\), computed by taking the mean value of the gradient map \(G_{k}^{l(c)}\) over its \(Z\) pixels. GradCAM produces the saliency map by linearly combining all the weighted feature maps \(A_{k}^{l}\) and then applying the \(ReLU\) function to remove pixels that do not contribute to the prediction.
Because the values in the gradient map can be either positive or negative, we divide the \(k\) feature maps into two parts (\(k_{1}\) and \(k_{2}\), with \(k_{1}+k_{2}=k\)): those with positive weights, \(A_{k}^{c(+)}\), and those with negative weights, \(A_{k}^{c(-)}\). A negative \(\alpha\) is considered to reduce the target score, so we sum the two parts separately and subtract the negative part from the positive one (Eq. 7) to obtain a smoother saliency map.
\[A_{k_{2}}^{c(-)}=\alpha_{k_{2}}^{c(-)}A_{k_{2}}^{c} \tag{5}\]
\[A_{k_{1}}^{c(+)}=\alpha_{k_{1}}^{c(+)}A_{k_{1}}^{c} \tag{6}\]
\[L_{CAM}^{c}=ReLU\bigg{(}\sum_{k_{1}}A_{k_{1}}^{c(+)}-\sum_{k_{2}}A_{k_{2}}^{c(- )}\bigg{)} \tag{7}\]
Because GradCAM can only explain classification models, it highlights all objects of the same class \(c\). By detecting the target object's location, we can restrict the saliency map to a single object and make the method applicable to the object detection problem.
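The weighting of Eqs. 4-7 reduces to a few tensor operations once the feature maps and their gradients are available (for instance from the earlier locating sketch); the snippet below is a minimal transcription in which `A` and `G` are assumed to be the (K, h, w) feature maps and gradient maps of the chosen layer.

```python
import torch
import torch.nn.functional as F

def signed_cam(A, G):
    """Combine feature maps weighted by their mean gradients, treating
    positively and negatively weighted channels separately (Eqs. 4-7)."""
    alpha = G.mean(dim=(1, 2))              # Eq. 4: one weight per channel
    weighted = alpha[:, None, None] * A     # alpha_k * A_k
    pos = weighted[alpha > 0].sum(dim=0)    # sum over the k1 channels
    neg = weighted[alpha < 0].sum(dim=0)    # sum over the k2 channels
    return F.relu(pos - neg)                # Eq. 7
```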
### Masking target region with normal distribution
To deal with the localization issue, we propose to use a normal distribution to estimate the region around the object's center. Because the gradient map reveals the target object's location, we estimate the object region around the pixel representing the object's center by applying a Gaussian mask as the weight of each pixel in the weighted feature map \(k\). The Gaussian kernel is defined as:
\[G_{\sigma}=\frac{1}{2\pi\sigma^{2}}\exp\left(-\frac{x^{2}+y^{2}}{2\sigma^{2}}\right) \tag{8}\]
where \(\sigma\) is the standard deviation of the Gaussian kernel and controls the kernel size, and \(x\) and \(y\) are two linearly spaced vectors with values in the range \([1,kernel\text{-}size]\), one vertical and one horizontal. The larger \(\sigma\) is, the larger the highlighted region. For each feature map \(k\) in layer \(l\), we apply the Gaussian kernel to isolate the region of the target object and then sum all the weighted feature maps. In short, we slightly adjust the feature-map weighting (Eq. 7) to obtain the final saliency map, as shown in Eq. 9:
\[\begin{split} L_{GCAME}^{c}=ReLU\bigg{(}&\sum_{k_{1 }}G_{\sigma(k_{1})}\odot A_{k_{1}}^{c(+)}-\\ &\sum_{k_{2}}G_{\sigma(k_{2})}\odot A_{k_{2}}^{c(-)}\bigg{)} \end{split} \tag{9}\]
#### 3.3.1 Choosing \(\sigma\) for Gaussian mask
The Gaussian masks are applied to all feature maps, with the kernel size equal to the size of each feature map and \(\sigma\) calculated as in Eq. 12.
\[R=\log\left|\frac{1}{Z}\sum_{i}\sum_{j}G_{k}^{l(c)}\right| \tag{10}\]
\[S=\sqrt{\frac{H\times W}{h\times w}} \tag{11}\]
\[\sigma=R\log S\times\frac{3}{\left\lfloor\frac{\sqrt{h\times w}-1}{2}\right\rfloor} \tag{12}\]
Here, \(\sigma\) is composed of two terms. The first term is an expansion factor in which \(R\) represents the importance of the location map \(G_{k}^{l(c)}\) and \(S\) is the scale between the original image size (\(H\times W\)) and the feature map size (\(h\times w\)); the logarithm adjusts the value of this term so that it matches the size of the gradient map. Because object detectors are usually multi-scale, \(S\) differs for each scale level. For the second term, we follow the rule of thumb for choosing the Gaussian kernel size in Eq. 13 and take its inverse.
\[kernel\text{-}size=2\times\lceil 3\sigma\rceil+1 \tag{13}\]
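A direct transcription of Eqs. 10-12 might look as follows; the variable names mirror the symbols in the text, while treating \(Z\) as the number of pixels of the gradient map and adding a small epsilon inside the logarithm are assumptions made for the sketch.

```python
import numpy as np

def gaussian_sigma(grad_map_k, image_hw, feat_hw, eps=1e-12):
    """Sigma of the Gaussian mask (Eq. 12) for one feature map k."""
    H, W = image_hw                  # original image size
    h, w = feat_hw                   # feature map size
    Z = grad_map_k.size              # assumed normalization: number of pixels
    R = np.log(abs(grad_map_k.sum() / Z) + eps)       # Eq. 10
    S = np.sqrt((H * W) / (h * w))                    # Eq. 11
    half_kernel = np.floor((np.sqrt(h * w) - 1) / 2)  # denominator of Eq. 12
    return R * np.log(S) * 3.0 / half_kernel          # Eq. 12
```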
#### 3.3.2 Gaussian mask generation
We generate each Gaussian mask with the following steps:
Figure 2: Overview of the G-CAME method. We use the gradient-based technique to obtain the target object's location and a weight for each feature map. Each weighted feature map is multiplied element-wise with a Gaussian kernel to remove unrelated regions. The output saliency map is a linear combination of all weighted feature maps after applying the Gaussian kernel.
1. Create a grid with values in the range \([0,w]\) for the width and \([0,h]\) for the height, where \(w\) and \(h\) are the size of the location map \(G_{k}^{l(c)}\).
2. Subtract from the grid the value at position \((i_{t},j_{t})\), where \((i_{t},j_{t})\) is the center pixel of the target object on the location map.
3. Apply the Gaussian formula (Eq. 8) with \(\sigma\) given by Eq. 12 as the expansion factor to obtain the Gaussian distribution over all grid values.
4. Normalize all values to the range \([0,1]\).
By normalizing all values to the range \([0,1]\), the Gaussian mask keeps only the region related to the object we aim to explain and removes unrelated regions from the weighted feature map.
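The four steps above amount to a few lines of NumPy; in this minimal sketch the mask has the same size as the location map, `center` is the pixel \((i_t,j_t)\) found in the locating phase, and `sigma` comes from Eq. 12.

```python
import numpy as np

def gaussian_mask(feat_hw, center, sigma):
    """Normalized Gaussian mask centred on the target pixel (steps 1-4)."""
    h, w = feat_hw
    i_t, j_t = center
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")  # step 1
    dy, dx = ys - i_t, xs - j_t                                      # step 2
    g = np.exp(-(dx ** 2 + dy ** 2) / (2.0 * sigma ** 2))            # step 3 (Eq. 8)
    g /= 2.0 * np.pi * sigma ** 2
    return (g - g.min()) / (g.max() - g.min() + 1e-12)               # step 4
```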
## 4 Experiments and Results
We evaluated our method on the MS-COCO 2017 [12] dataset with 5000 validation images. The models in our experiments are YOLOX-l (one-stage) and Faster-RCNN (two-stage). All experiments are implemented in PyTorch [14] and conducted on an NVIDIA Tesla P100 GPU. G-CAME's inference time depends on the number of feature maps in the selected layer \(l\); on YOLOX-l with 256 feature maps, our experiments take roughly \(0.5s\) per object.
### Saliency map visualization
We performed a qualitative comparison of the saliency maps of G-CAME and D-RISE [16] to validate G-CAME's results. We use D-RISE's default parameters [16]: each grid has size \(16\times 16\), the probability of each grid cell's occurrence is \(0.5\), and the number of samples per image is \(4000\). For G-CAME, we choose the final convolution layer in each branch of YOLOX as the target layer for computing the derivative (Fig. 3).
Fig. 4 shows the results of G-CAME compared with D-RISE. As can be seen, G-CAME significantly reduces random noise and generates smoother saliency maps than D-RISE in a short time.
### Localization Evaluation
To evaluate the new method, we used two standard metrics, Pointing Game [26] and Energy-based Pointing Game [24], to measure the correlation between an object's saliency map and the human-labeled ground truth. The results are shown in Table 1.
#### 4.2.1 Pointing Game (PG)
We used the pointing game metric [26] as a human-evaluation metric. First, we run the model on the dataset and, for each class in each image, keep the bounding boxes that best match the ground truth. A \(hit\) is scored if the highest point of the saliency map lies inside the ground truth; otherwise, a \(miss\) is counted. The pointing game score is \(PG=\frac{\#Hits}{\#Hits+\#Misses}\) for each image. A good explanation should have a high PG score.
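For reference, the per-explanation check can be written as below; `saliency` is the explanation map at image resolution and `gt_box` is the ground-truth box in pixel coordinates, both assumptions about the data layout rather than part of the metric itself.

```python
import numpy as np

def pointing_game_hit(saliency, gt_box):
    """True (a hit) if the saliency maximum lies inside [x1, y1, x2, y2]."""
    row, col = np.unravel_index(np.argmax(saliency), saliency.shape)
    x1, y1, x2, y2 = gt_box
    return (x1 <= col <= x2) and (y1 <= row <= y2)

# PG over a dataset is then #Hits / (#Hits + #Misses), e.g.
# pg = np.mean([pointing_game_hit(s, b) for s, b in saliency_box_pairs])
```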
#### 4.2.2 Energy-Based Pointing Game (EBPG)
EBPG [24] calculates how much of the saliency map's energy falls inside the bounding box. The EBPG formula is defined as follows:
\[Proportion=\frac{\sum L_{(i,j)\in bbox}^{c}}{\sum L_{(i,j)\in bbox}^{c}+\sum L_{(i,j)\notin bbox}^{c}} \tag{14}\]
Similar to the PG score, a good explanation is considered to have a higher EBPG.
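Eq. 14 translates directly into code, assuming a non-negative saliency map at image resolution and the box in pixel coordinates:

```python
import numpy as np

def ebpg(saliency, box):
    """Fraction of saliency energy falling inside [x1, y1, x2, y2] (Eq. 14)."""
    x1, y1, x2, y2 = (int(round(v)) for v in box)
    inside = saliency[y1:y2, x1:x2].sum()
    return float(inside / (saliency.sum() + 1e-12))
```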
PG and EBPG results are reported in Table 1. More than 65% of the energy of G-CAME's saliency map falls inside the ground-truth bounding box, compared with only 18.4% for D-RISE; in other words, G-CAME drastically reduces noise in the saliency map. In the Pointing Game evaluation, G-CAME also outperforms D-RISE: 98% of the highest-valued pixels lie inside the correct bounding box, versus 86% for D-RISE.
#### 4.2.3 Bias in Tiny Object Detection
Explaining tiny objects detected by the model can be a challenge. In particular, the saliency map may be biased toward neighboring regions.
Figure 3: We choose the last convolution layer of the class-prediction phase in each branch of the YOLOX-l head as the target layer for G-CAME.
This issue worsens when multiple tiny objects partially or fully overlap, because the saliency map stays in the same location for every object. In our experiments, we define a tiny object by the ratio of the predicted bounding-box area to the input image area (640\(\times\)640 in YOLOX): an object is considered tiny when this ratio is at most 0.005. In Fig. 5, we compare our method with D-RISE on explaining tiny-object predictions in two cases. In the first case (Fig. 5a), we test the performance of D-RISE and G-CAME on two tiny objects of the same class. The result shows that D-RISE fails to distinguish the two "traffic lights", producing nearly identical saliency maps. In the case of multiple overlapping objects of different classes (Fig. 5b), the saliency maps produced by D-RISE hardly focus on one specific target: the saliency for the "surfboard" even covers the "person", and vice versa. The problem may lie in D-RISE's grid size, but a much smaller grid can prevent the detector from making predictions. In contrast, G-CAME clearly localizes the target object in both cases and reduces the saliency map's bias toward unrelated regions. We also evaluated our method specifically on tiny-object predictions with the EBPG score. The MS-COCO 2017 validation dataset contains more than 8000 tiny objects, and the results are reported in Table 1. Our method outperforms D-RISE, with more than 26% of the saliency-map energy falling inside the predicted box, compared with only 0.9% for D-RISE; indeed, most of the energy in D-RISE's explanations does not focus on the correct target. For the PG score, instead of evaluating a single pixel, we assess all pixels whose value equals the maximum value. The result again shows that G-CAME's explanations are more accurate than D-RISE's.
### Faithfulness Evaluation
A good saliency map for one target object should highlight the regions that most affect the model's decision. We therefore employ the _Average Drop (AD)_ metric to evaluate the change in the model's confidence [3, 6, 17] in the target object when the explanation is used as the input.
Table 1: Localization evaluation (PG and EBPG, Overall \(|\) Tiny object) between D-RISE and G-CAME on the MS-COCO 2017 validation dataset. The best is in bold.
Figure 4: Visualization results of D-RISE and G-CAME on three object sizes: big, normal, and tiny. In all cases, the results show that G-CAME's saliency maps are more precise and contain less noise than D-RISE's.
In other words, when we remove these important regions, the confidence score of the target box should be reduced. The _Average Drop_ can be calculated by the formula:
\[AD=\frac{1}{N}\sum_{i=1}^{N}\frac{max(P_{c}(I_{i})-P_{c}(\tilde{I}_{i}),0)}{P_{c} (I_{i})}\times 100 \tag{15}\]
where:
\[\tilde{I}_{o}=I\odot(1-M_{o})+\mu M_{o} \tag{16}\]
\[P_{c}(\tilde{I})=IOU(L_{i},L_{j})\cdot p_{c(L_{j})} \tag{17}\]
Here, we adapt the original _Average Drop_ formula to the object detection setting. In Eq. 16, we create a new input image masked by the G-CAME explanation \(M\), where \(\mu\) is the mean value of the original image. For \(M\), we keep only the 20% of pixels with the largest values in the original explanation and set the rest to 0. This minimizes the explanation's noise so that the saliency map focuses on the regions that most influence the prediction.
In Eq. 17, to compute the probability \(P_{c}(\tilde{I})\), we first calculate the pairwise \(IOU\) between each box \(L_{j}\) predicted on the perturbed image \(\tilde{I}\) and the box \(L_{i}\) predicted on the original image, and take the pair with the highest value. We then multiply this term by the corresponding class score \(p_{c(L_{j})}\) of the box. When calculating \(P_{c}(I_{i})\), the \(IOU\) equals 1, so the value is simply the original confidence score. Hence, if the explanation is faithful, the confidence drop should be large.
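Putting Eqs. 15-17 together for one detection, a simplified per-image computation could look like the sketch below; the `detector` interface returning (box, class score) pairs is an assumption, while the 20% thresholding and the mean-value fill follow the description above.

```python
import numpy as np

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / (union + 1e-8)

def confidence_drop_term(image, detector, saliency, orig_box, orig_score, keep=0.2):
    """One summand of the Average Drop (Eq. 15) for a single detection."""
    thresh = np.quantile(saliency, 1.0 - keep)        # keep top 20% of pixels
    M = (saliency >= thresh).astype(float)[..., None]
    perturbed = image * (1.0 - M) + image.mean() * M          # Eq. 16
    best = max((iou(box, orig_box) * score                    # Eq. 17
                for box, score in detector(perturbed)), default=0.0)
    return max(orig_score - best, 0.0) / (orig_score + 1e-12)
```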
However, removing pixels can penalize methods that produce saliency maps with connected and coherent regions. Specifically, pixels on an object's edges are more informative than those in its interior [9]; for example, pixels on a dog's tail are easier to recognize than pixels on its body.
To complement the confidence-drop score, we compare the _information level_ of the _bokeh_ image, which is created by removing several pixels from the original image after applying the XAI method. To measure the bokeh image's information, we encode it in the WebP [1] format and compute the information drop from the ratio of the compressed size of the bokeh image to that of the original image [9].
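The information-drop measure can be reproduced with Pillow's WebP encoder, assuming it is built with WebP support; reading the drop as one minus the ratio of compressed sizes is our interpretation of the description above.

```python
import io
import numpy as np
from PIL import Image

def webp_size(rgb_uint8):
    """Compressed size (bytes) of an HxWx3 uint8 array encoded as WebP."""
    buf = io.BytesIO()
    Image.fromarray(rgb_uint8).save(buf, format="WEBP")
    return buf.tell()

def information_drop(original, bokeh):
    """Drop in information, read as 1 - compressed(bokeh)/compressed(original)."""
    return 1.0 - webp_size(bokeh) / webp_size(original)
```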
Table 2 shows the confidence and information drop results.
Figure 5: The saliency maps of D-RISE and G-CAME for tiny-object predictions. We test the two methods in two cases: tiny objects of the same class lying close together (Fig. 5a) and multiple objects of different classes lying close together (Fig. 5b). In both cases, G-CAME can clearly distinguish each object in its explanation.
In detail, D-RISE performs better on the confidence-drop score, with a 42.3% reduction in the predicted class score when the highest-value pixels are removed. On the information-drop score, our method achieves 29.1% compared with 31.58% for D-RISE, meaning that our method preserves the original image's information better. Moreover, since G-CAME inherits the CAM-based strength in running time, it takes under 1 second to produce an explanation, while D-RISE needs roughly 4 minutes on the same benchmark. Because it employs feature maps as part of the explanation, G-CAME can also reflect what the model focuses on when predicting, whereas D-RISE cannot.
### Sanity check
To validate whether the saliency map is a faithful explanation, we perform a sanity check [2] with _Cascading Randomization_ and _Independent Randomization_. In the _Cascading Randomization_ approach, we randomly choose five convolution layers as test layers; then, for each layer between the selected layer and the top layer, we discard the pre-trained weights, reinitialize them from a normal distribution, and run G-CAME to obtain the explanation for the target object. In the _Independent Randomization_ approach, in contrast, we reinitialize only the weights of the selected layer and retain the other pre-trained weights. The results show that G-CAME is sensitive to the model parameters and produces valid explanations, as shown in Fig. 6.
### Approach for two-stage model (Faster-RCNN)
This section extends G-CAME to a two-stage model, namely Faster-RCNN [20]. In Faster-RCNN, the image is first passed through several stacked convolution layers to extract features. The Region Proposal Network (RPN) detects regions that may contain an object, and these regions are fed to the Region of Interest (ROI) Pooling layer to bring them to a fixed size. After that, two \(1\times 1\) convolution layers, a classification layer that estimates the probability of an object's occurrence and a regression layer that predicts the coordinates of the bounding boxes, are used to detect the bounding boxes. The resulting boxes are passed through the Faster-RCNN predictors, which consist of two fully connected layers. G-CAME uses the feature maps at the end of the feature-extraction phase to form its explanation.
First, we calculate the partial derivative of the class score with respect to each feature map of the selected layers. Faster-RCNN has four detection branches, and we choose the last convolution layer of each branch for the derivative. When we take the derivative of the class score with respect to the target layer, the gradient map (\(G_{k}^{l(c)}\)) has more than one non-zero pixel, because the anchor boxes are created in the subsequent detection phase; we therefore cannot read the pixel representing the object's center directly from the gradient map. To address this, we set the pixel with the highest value in the gradient map as the center of the Gaussian mask, assuming that the area around this pixel is likely to contain the relevant features. The _Weighting feature map_ and _Masking region_ phases then proceed as in Fig. 2.
## 5 Conclusion
In this paper, we proposed G-CAME, a new method for explaining object detection models based on the CAM approach and a Gaussian kernel. We provide a simple guide for implementing our method in both one-stage and two-stage detectors. The experimental results show that our method can plausibly and faithfully explain the model's predictions. Moreover, our method runs in a reasonably short time, overcoming the time constraint of existing perturbation-based methods, and reduces the noise in the saliency map.
|
2305.05521 | Chromatin remodeling due to transient-link-and-pass activity enhances
subnuclear dynamics | Spatiotemporal coordination of chromatin and subnuclear compartments is
crucial for cells. Numerous enzymes act inside nucleus\textemdash some of those
transiently link and pass two chromatin segments. Here we study how such an
active perturbation affects fluctuating dynamics of an inclusion in the
chromatic medium. Using numerical simulations and a versatile effective model,
we categorize inclusion dynamics into three distinct modes. The
transient-link-and-pass activity speeds up inclusion dynamics by affecting a
slow mode related to chromatin remodeling, viz., size and shape of the
chromatin meshes. | Rakesh Das, Takahiro Sakaue, G. V. Shivashankar, Jacques Prost, Tetsuya Hiraiwa | 2023-05-09T15:20:00Z | http://arxiv.org/abs/2305.05521v4 | # Transient-linking activity enhances subnuclear dynamics by affecting chromatin remodeling
###### Abstract
Spatiotemporal coordination of chromatin and subnuclear compartments is crucial for cells. A plethora of enzymes act inside nucleus, and some of those transiently link two chromatin segments. Here, we theoretically study how such transient-linking activities affect fluctuating dynamics of an inclusion in the chromatic medium. Numerical simulations and a coarse-grained model analysis categorize inclusion dynamics into three distinct modes. The transient-linking activity speeds up the inclusion dynamics by affecting a slow mode associated with chromatin remodeling, viz., size and shape of the chromatin meshes.
The genetic information of a eukaryotic cell is stored in its chromatin, a \(\sim 2\) m long polymeric entity comprising DNA and histone proteins, which is packed inside the nucleus, typically 7-10 \(\mu\)m in size. In addition to the chromatin, the nucleus contains a diverse variety of subnuclear compartments (SNCs) such as nucleoli, speckles, Cajal bodies, promyelocytic leukemia bodies, and transcription factories, ranging from \(\sim 50\) nm to 1 \(\mu\)m in size, all dispersed in a viscous fluid medium called the nucleoplasm [1, 2, 3, 4, 5]. A spatiotemporal coordination among these SNCs and the chromatin is necessary for the healthy functionality of the cell [2, 3, 4, 5, 6, 7], the lack of which correlates with several diseases [8, 9].
Recent studies have attributed such spatiotemporal coordination of SNCs and chromatin to the mechanical state of chromatin. Ref. [10] shows that coalescence kinetics of inert liquid droplets dispersed inside a nucleus depends on their dynamics dictated by the mechanics of the chromatic environment, whereas Ref. [11] shows how the number, size, and localization of such subnuclear condensates are dictated by chromatin mechanics. We found in our earlier study [12] that, transient-linking activity (TLA) associated with ATP-dependent actions of some classes of enzymes, like Topoisomerase-II [13, 14, 15, 16], can affect the microphase-separation structure of hetero-/eu-chromatin--this enables us to speculate that, even in a homogeneous medium of chromatin, _e.g._, even when looking into only the euchromatin part, enzymatic activities could affect local mechanical states of chromatin. Such change in the local mechanical state of chromatin could eventually affect dynamics of finite-size inclusions such as SNCs. Indeed, it is known that the dynamics of the chromatin and the other SNCs are usually ATP-dependent [17, 18, 19, 20, 21, 22, 23, 24], and ATP-dependent remodeling of the chromatic environment is recognized as one of the mechanisms for this dependency [23, 24, 25, 26]. Despite its possible relevance for biological (transcription) functions, up to present, no studies have clarified if TLA actually plays a role in an inclusion dynamics in this way and what kind of chromatic remodeling can contribute to it.
In this Letter, we theoretically investigate whether and how TLA can affect the chromatic medium and fluctuating dynamics of an inclusion. For this purpose, we consider a single polymer chain associated with TLA as a chromatin model, and a finite-size bead disconnected from the polymer chain as an inclusion, inside a spherical cavity (FIG. 1a). To implement TLA, we follow our earlier work in which a model enzymatic activity was constructed imagining Topoisomerase-II [12]. Through numerical simulations of this situation, we first show that TLA can indeed affect the inclusion dynamics. After that, we investigate what kind of chromatin remodeling is associated with this effect. Finally, we construct an effective model which is a fluctuating free particle model but keeps the essence to reproduce the TLA-dependency of the inclusion dynamics observed in our simulation. Using the effective model, we identify the three major dynamical modes in the system and show how TLA affects the dynamics through a slow mode associated with the chromatin remodeling.
We develop a self-avoiding linear homopolymer model for the chromatin confined within a spherical cavity of diameter \(D\) (FIG. 1a). The homopolymer comprises \(N\) soft-core beads of diameter \(d_{B}\) consecutively connected by finitely-extensible-nonlinear-elastic springs. A hard-core spherical inclusion of diameter \(d_{I}\) is placed at the center of the cavity. This inclusion experiences steric forces due to the surrounding polymer.
The dynamics of the positions of the beads (\(\mathbf{x}_{B}\)) and the inclusion (\(\mathbf{x}_{I}\)) are approximated by Brownian dynamics;
see Supplemental Material [27] and Ref. [12] for details. Stokes' relation has been assumed to mimic the frictional drag due to implicitly considered nucleoplasm. Setting the thermal energy and the nucleoplasmic viscosity to unity, and \(D=12\,\ell\), we obtain the simulation units (s.u.) of length (\(\ell\)), time (\(\tau\)), and energy (\(e\)).
TLA is implemented in the following sequential manner, which we call catch-and-release mechanism. In the normal state, any pair of spatially proximal beads experiences steric repulsion due to each other (self-avoidance potential \(h_{vex}>0\), see Supplemental Material [27]). The enzyme can catch that pair of beads with a Poisson rate \(\lambda_{ra}\) (FIG. 1b), and upon that, the beads attract each other (\(h_{vex}<0\)). Next, the attraction between those two beads is turned off with another Poisson rate \(\lambda_{an}\), and the beads stay there for a while without any steric interaction among themselves (\(h_{vex}=0\)). Eventually, the enzyme unbinds from the beads with a Poisson rate \(\lambda_{nr}\), and the beads return to their normal state with steric repulsion between themselves. Therefore, in our model, enzymatic activity is realized by the following sequence of Poisson transitions of the steric interaction between a pair of proximal beads: \(\text{state}\left(h_{vex}>0\right)\xrightarrow{\lambda_{ra}}\text{state} \left(h_{vex}<0\right)\xrightarrow{\lambda_{an}}\text{state}\left(h_{vex}=0 \right)\xrightarrow{\lambda_{nr}}\text{state}\left(h_{vex}>0\right)\). We implemented this process originally in Ref. [12] to mimic the action of Topoisomerase-II; indeed these steps allow the beads to pass across each other stochastically. We showed there that specifically the transient linking ability in this process underlies the emerged characteristic configurations of the chromatin, and this is why we call this process TLA.
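As a toy illustration of this bookkeeping (not the authors' GPU implementation), the snippet below advances the interaction state of a single proximal bead pair using the three Poisson rates with first-order transition probabilities \(\lambda\,\mathrm{d}t\); the value of \(\lambda_{ra}\) and the time step are placeholders.

```python
import numpy as np

REPEL, ATTRACT, NEUTRAL = +1, -1, 0           # sign of h_vex for the pair
RATES = {"ra": 1.0, "an": 16.7, "nr": 500.0}  # lambda_ra is the tuned rate

def step_pair_state(state, dt, rng):
    """One Euler step of repel --ra--> attract --an--> neutral --nr--> repel.
    Valid when rate*dt << 1."""
    u = rng.random()
    if state == REPEL and u < RATES["ra"] * dt:
        return ATTRACT    # enzyme catches the pair: h_vex < 0
    if state == ATTRACT and u < RATES["an"] * dt:
        return NEUTRAL    # attraction switched off: h_vex = 0
    if state == NEUTRAL and u < RATES["nr"] * dt:
        return REPEL      # enzyme unbinds: steric repulsion restored
    return state

rng = np.random.default_rng(0)
state, dt, history = REPEL, 1e-4, []
for _ in range(100_000):
    state = step_pair_state(state, dt, rng)
    history.append(state)
```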
We parameterize TLA as \(\Lambda=\lambda_{ra}\left(1/\lambda_{an}+1/\lambda_{nr}\right)\), which can be tuned in experiments by controlling ATP concentration [30].
Figure 1: (a) Bead-and-spring homopolymer model of chromatin packed inside a spherical cavity together with an inclusion at the center; a schematic and a typical simulation snapshot are shown. (b) Catch-and-release mechanism of Topoisomerase-II’s enzymatic activity. There is no steric repulsion between the pair of the beads when bound to the enzyme. (c) Mean \(\pm\) s.e.m. of \(\Delta x_{I,\,0.1r}\)-distribution are shown. Only the positive half is shown for better representation. (d) MSDs of the inclusion are shown for several \(\Lambda\). Dashed lines are to guide the early-time and the late-time diffusion. (e) Main—FPT distribution of the inclusion to the radius \(R=5\,\ell\) of the cavity starting from \(R=0\) (\(n=71\)). Inset—\(\Lambda\)-dependency of the mean \(\pm\) s.e.m. of FPT.
We choose \(\lambda_{an}=16.7\,\tau^{-1}\) and \(\lambda_{nr}=500\,\tau^{-1}\) and tune \(\lambda_{ra}\) to control the activity. For a given \(\Lambda\), we integrate the equations of motion of \(\mathbf{x}_{B}\) and \(\mathbf{x}_{I}\) using the Euler discretization method, and thereby we simulate the dynamics of the polymer and the inclusion. This choice of rates sets a typical timescale \(t_{TLA}=(1/\lambda_{an}+1/\lambda_{nr})\simeq 0.062\,\tau\) over which an enzyme holds a pair of beads. We have checked that over this timescale, a bead typically moves only \(\mathcal{O}(d_{B})\).
Each realization of the simulation starts from a steady-state configuration of a self-avoiding polymer packed inside the cavity together with the inclusion at the center of the cavity (Supplemental Material [27]). The polymer and the inclusion follow the Brownian dynamics as described above. We note that the inclusion eventually touches the cavity boundary; so, we simulate our active polymer model (APM) until the inclusion touches the cavity boundary for the first time. The results reported below are obtained from \(n\) (values indicated in the corresponding figures) number of independent realizations, which are conducted using our lab-developed GPU-based CUDA code [12].
_Inclusion dynamics_--We first investigate how inclusion dynamics is affected by TLA by looking into its (i) one-dimensional displacement over a given time duration, (ii) mean-square-displacement (MSD), and (iii) first passage time (FPT) in the chromatic medium. Here, we consider an inclusion of size comparable to that of the bead size (\(d_{I}=0.40\,\ell\), \(d_{B}\simeq 0.43\,\ell\)).
First, we compare the distribution \(P\left(\Delta x_{I,\,0.1\tau}\right)\) of one-dimensional displacement of the inclusion over a time duration \(\Delta t=0.1\,\tau>t_{TLA}\) for several \(\Lambda\) (FIG. 1c). We note that \(P\left(\Delta x_{I,\,0.1\tau}\right)\) follows a Gaussian distribution and widens with \(\Lambda\). The distributions appeared to be symmetric around zero, although with some fluctuations. Hence, we carefully investigated the data for drift in the inclusion dynamics, and we found no drift (SFIG. 1).
Next, we calculate MSD of the inclusion as \(\langle\Delta r_{I}^{2}\left(\Delta t\right)\rangle=\langle|\mathbf{x}_{I}\left(t_ {0}+\Delta t\right)-\mathbf{x}_{I}\left(t_{0}\right)|^{2}\rangle\), where \(\langle\cdot\rangle\) represents average over several \(t_{0}\) and realizations. We note (A) a short early-time diffusive regime, (B) an intermediate subdiffusive regime, and (C) a late-time diffusive regime. Following this regime-(C), we note a significant slowing down of the inclusion dynamics (data not shown). We check that by this time, the inclusion reaches close to the cavity boundary, and although the inclusion does not interact with the boundary, its dynamics is indirectly affected by the polymeric density at that locality which differs from the bulk region. The effect of TLA at that locality has a different physics than that in the bulk region, and therefore, it will be discussed in the future. In this Letter, we focus on the bulk region (up to radius \(R=5\,\ell\) instead of the whole cavity of radius \(R=6\,\ell\)). We show the regimes (A)-(C) of the inclusion MSD in FIG. 1d. Note that the crossover time from the early-time regime to the intermediate regime increases with \(\Lambda\).
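The MSD with averaging over time origins, as used here, reduces to a short NumPy routine; `traj` is assumed to be the inclusion trajectory sampled at a uniform interval.

```python
import numpy as np

def msd(traj, max_lag):
    """<|x(t0+dt) - x(t0)|^2> averaged over all time origins t0.

    traj : array of shape (T, 3), positions at uniform sampling intervals.
    Returns lags (in sampling steps) and the corresponding MSD values.
    """
    lags = np.arange(1, max_lag + 1)
    out = np.empty(len(lags))
    for i, lag in enumerate(lags):
        disp = traj[lag:] - traj[:-lag]
        out[i] = np.mean(np.sum(disp ** 2, axis=1))
    return lags, out
```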
FPT represents the time that the inclusion takes to reach \(R=5\,\ell\) for the first time starting its dynamics from \(R=0\) at time \(t=0\). We prepare a histogram of the FPTs noted for several realizations of simulation and note that \(\Lambda\) affects it (FIG. 1e-main). Starting from a high value of the mean FPT (MFPT) for \(\Lambda=0\), it decreases monotonically up to a moderate \(\Lambda\) (FIG. 1e-inset). Taking the displacement distributions, MSDs, and the FPT-data together, we conclude that TLA enhances the inclusion dynamics.
_Chromatic environment_--The enhancement of the inclusion dynamics should be attributed to some change in the chromatic environment due to TLA. To investigate it in detail, we compare the polymeric configurations with and without TLA. We note that the polymeric environment becomes more heterogeneous with \(\Lambda\) (FIG. 2a, Supplemental Videos). The mean density of the beads, \(n_{B}\), increases with \(\Lambda\) (SFIG. 2). Along with that, the spatiotemporal
Figure 2: (a) Cropped slices (\(6\times 6\) in s.u.) of typical simulation snapshots are shown for \(\Lambda=0\) (left) and \(0.6188\) (right). (b) Fluctuation \(\sigma_{B}\) in local density \(n_{B}\) of beads increases with \(\Lambda\) (\(n=71\)). (c) Main—Distribution of the separation between an arbitrary point and its nearest polymeric-bead. \(\Lambda\)-dependent increase in the largest \(s_{min}\) with \(P(s_{min})>0\) indicates increase in chromatic meshsize. Data for \(s_{min}<0.2\) are not shown as the arbitrary points fall on the beads (\(d_{B}\simeq 0.43\) s.u.) in that range. Inset—Schematic of the inclusion inside a chromatic mesh.
fluctuation in \(n_{B}\), as manifested by its standard deviation \(\sigma_{B}\), also increases with \(\Lambda\) (FIG. 2b), suggesting an increase in heterogeneity of chromatin medium with TLA.
To further elaborate on this observed heterogeneity, we calculate the separation \(s_{min}\) of any arbitrary point in the bulk region of the cavity to its nearest bead and prepare its distribution \(P(s_{min})\). As a corollary to the increasing heterogeneity in chromatic environment, we note that the tail of \(P(s_{min})\) becomes heavier with \(\Lambda\) (FIG. 2c). The largest \(s_{min}(\Lambda)\) with non-zero \(P(s_{min})\) minus \(d_{B}/2\) could be interpreted as half of the meshsize in the corresponding chromatic environment. It is evident from FIG. 2c-main that the typical meshsize increases with \(\Lambda\). Considering the \(\langle\Delta r_{I}^{2}\rangle\)-data, it is straightforward to understand that the inclusion performs early-time diffusion until when it feels the mechanical hindrance due to its chromatic neighborhood beyond which it shows subdiffusion (FIG. 2c-schematic).
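One way to estimate the \(P(s_{min})\) statistic of Fig. 2c is a nearest-neighbour query against a k-d tree of bead positions, with test points drawn uniformly in the bulk region; the sampling radius and counts below are placeholders.

```python
import numpy as np
from scipy.spatial import cKDTree

def s_min_samples(bead_positions, r_bulk=5.0, n_points=100_000, seed=0):
    """Distances from random bulk points to the nearest polymer bead;
    a histogram of the returned values approximates P(s_min)."""
    rng = np.random.default_rng(seed)
    # Uniform points inside a sphere of radius r_bulk (bulk of the cavity).
    pts = rng.normal(size=(n_points, 3))
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)
    pts *= r_bulk * rng.random((n_points, 1)) ** (1.0 / 3.0)
    dists, _ = cKDTree(bead_positions).query(pts)
    return dists
```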
_Effective model_--Next, we develop an integrative understanding of chromatic environment-mediated effect of TLA on inclusion dynamics. We hypothesize that inclusion dynamics in our APM could be mimicked by its Brownian dynamics with colored noise defined over a coarse-grained timescale \(\delta t\). Therefore, we conceive a coarse-grained effective model--the inclusion is considered alone, and its dynamics is given by
\[\partial_{t}\mathbf{x}_{I}=v_{EM}\,\mathbf{\zeta}_{EM}, \tag{1}\]
where the right-hand-side is an effective noise whose characteristics can be defined from the \(\Lambda\)-dependent behavior of the displacement vector \(\Delta\mathbf{x}_{I,\,\delta t}\) defined over \(\delta t\). More specifically, \(v_{EM}^{2}=\langle|\Delta\mathbf{x}_{I,\,\delta t}|^{2}\rangle/(\delta t)^{2}\), and \(\mathbf{\zeta}_{EM}\) is a Gaussian noise with the mean \(\langle\mathbf{\zeta}_{EM}\rangle=0\) and the autocorrelation \(\langle\mathbf{\zeta}_{EM}(0)\cdot\mathbf{\zeta}_{EM}(\Delta t)\rangle=C_{\Delta\mathbf{x} _{I,\,\delta t}}\left(\Delta t\right)=\langle\sum_{t_{0}}\left[\Delta\mathbf{x}_{I,\,\delta t}\left(t_{0}\right)\cdot\Delta\mathbf{x}_{I,\,\delta t}\left(t_{0}+ \Delta t\right)\right]/\,\sum_{t_{0}}\lvert\Delta\mathbf{x}_{I,\,\delta t}\left(t_{ 0}\right)\rvert^{2}\rangle\) (\(\langle\cdot\rangle\) indicating average over several realizations). Hereafter, we consider \(\delta t=0.003\,\tau\) over which the inclusion dynamics should be affected by the dynamics of its chromatic neighborhood (FIG. 1d; also see SFIG. 3).
We obtain \(C_{\Delta\mathbf{x}_{I,\,\delta t}}\left(\Delta t\right)\) from our simulation data for several \(\Lambda\). A negative autocorrelation is noted for \(\Delta t\geq\delta t\) (FIG. 3a-inset) as expected for our current model system--(visco-)elasticity of polymeric medium surrounding the inclusion may tend to reverse the direction of inclusion motion [31]. The negative part of the autocorrelation shows good fit to double exponential function (FIG. 3a-main). Therefore, we write \(C_{\Delta\mathbf{x}_{I,\,\delta t}}(\Delta t)=A\delta(\Delta t)-g(\Delta t)\) with \(g(\Delta t)=a_{f}e^{-\Delta t/t_{f}}+a_{s}e^{-\Delta t/t_{s}}\), where the fitting parameters \(a_{f,\,s}\) and \(t_{f,\,s}\) depend on \(\Lambda\). The parameter \(A\) takes care of the autocorrelation for \(\Delta t<\delta t\).
Using the effective model, we analytically obtain MSD of the inclusion as
\[\langle\Delta r_{I,\,EM}^{2}\left(\Delta t\right)\rangle=\sum_{m\equiv f,s}2D _{m}t_{m}\left(1-e^{-\frac{\Delta t}{t_{m}}}\right)+2D_{n}\Delta t, \tag{2}\]
where \(D_{f,s}=v_{EM}^{2}a_{f,s}t_{f,s}\) and \(D_{n}=v_{EM}^{2}\left(A/2-a_{f}t_{f}-a_{s}t_{s}\right)\) (Supplementary Material [27]). Treating \(A\) as a \(\Lambda\)-dependent fitting parameter, we find good agreement between \(\langle\Delta r_{I}^{2}\rangle\) and \(\langle\Delta r_{I,\,EM}^{2}\rangle\) (SFIG. 4 and SFIG. 5). Thus,
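For reference, Eq. 2 can be evaluated directly from the fitted noise parameters; the numerical values in the example call are placeholders rather than the fitted values reported here.

```python
import numpy as np

def effective_msd(dt, v_em, A, a_f, t_f, a_s, t_s):
    """MSD of the effective model (Eq. 2) from the coarse-grained noise fit."""
    D_f = v_em ** 2 * a_f * t_f
    D_s = v_em ** 2 * a_s * t_s
    D_n = v_em ** 2 * (A / 2.0 - a_f * t_f - a_s * t_s)
    fast = 2.0 * D_f * t_f * (1.0 - np.exp(-dt / t_f))
    slow = 2.0 * D_s * t_s * (1.0 - np.exp(-dt / t_s))
    return fast + slow + 2.0 * D_n * dt

# Example with placeholder parameters:
# dt = np.logspace(-4, 0, 200)
# curve = effective_msd(dt, v_em=1.0, A=0.01, a_f=0.3, t_f=0.002, a_s=0.1, t_s=0.02)
```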
Figure 3: (a) Autocorrelation \(C_{\Delta\mathbf{x}_{I,\,\delta t}}(\Delta t)\) of the inclusion’s displacement vector \(\Delta\mathbf{x}_{I,\,\delta t}\) over \(\delta t=0.003\,\tau\) for several \(\Lambda\) (symbols). Main, semi-log plot to focus on the negative part of \(C_{\Delta\mathbf{x}_{I,\,\delta t}}\); inset, linear plot. Dashed-lines are fits to \(-C_{\Delta\mathbf{x}_{I,\delta t}}(\Delta t>0)\) with the double exponential function, \(g(\Delta t)\). (b) Main—Parts of \(\langle\Delta r_{I,\,EM}^{2}\rangle\) corresponding to the three dynamical modes (n, f, and s) for several \(\Lambda\) (see FIG. 3a for legends). Top inset—\(\Lambda\)-dependency of the diffusivity corresponding to each mode. Bottom inset—Schematic of each mode. Golden: chromatin media. Blue: inclusion.
our simulation results are successfully reproduced by the effective model, where the chromatic environment-mediated effect of TLA on the inclusion dynamics is captured by the \(\Lambda\)-dependency in the features of the coarse-grained noise. Note that, since we have not assumed any radial-position dependency in the coarse-grained noise, this agreement may guarantee that the TLA-dependencies observed in the simulations come from the bulk effect emerged due to TLA.
We identify the MSD-terms corresponding to \(m\equiv f,s\) with that obtained for a particle subjected to a harmonic potential following an overdamped Langevin equation (Supplemental Material [27]). This gives us a diffusivity \(D_{m}\) and a characteristic time \(t_{m}\) for the mode-\(m\). Thus, the inclusion dynamics can be understood as that dictated by a fast (f) and a slow (s) mode (with \(t_{f}\sim 0.002\;\tau\) and \(t_{s}\sim 0.02\;\tau\)), plus a normal (n) diffusive mode. The n-mode originates from the delta-correlated forces that the inclusion feels due to the thermal noise, plus the polymeric neighborhood coarse-grained over \(\delta t\). The origin of the fast and slow modes must underlie in the fluctuation in the potential that the inclusion feels due to its polymeric neighborhood; we indeed found that the fluctuation in the number of the beads interacting with the inclusion has the characteristic timescale comparable to that of the fast mode, whereas remodeling of the polymeric mesh around the inclusion has the timescale comparable with the slow mode (SFIG. 6, Supplemental Material [27]). One can visualize the corresponding scenario as following--let us consider a polymeric mesh comprising some number of beads around the inclusion. At the scale of \(\sim 0.002\;\tau\), that number fluctuates because of the individual bead's thermal fluctuation (see SFIG. 7 for bead's MSD), and therefore the potential fluctuates. This leads to the fast mode-contribution to inclusion dynamics. However, at a longer timescale \(\sim 0.02\;\tau\), the size and shape of the mesh are reconfigured leading to another source of the potential-fluctuation. This dictates the slow mode-contribution to the inclusion dynamics.
Lastly, we investigate which mode mainly contributes to the speeding of the inclusion dynamics with TLA. FIG. 3b-main plots the parts of MSD, \(\langle\Delta r^{2}_{I,\,EM}\rangle\), separately derived from all the three modes. We find that the \(\Lambda\)-dependency in MSD at the intermediate time scale (\(\sim 0.001<\Delta t<\sim 0.02\)) is dominated by that of the s-mode. Furthermore, FIG. 3b-top inset shows the changes in the diffusivity in each mode with \(\Lambda\), where we note a significant increase in \(D_{s}\) (red circles) with \(\Lambda\). These results suggest that significant initial speeding up of the inclusion dynamics (around \(\Delta t=0.001\) in FIG. 1d) is induced through s-mode, which is associated with the chromatin remodeling. In conclusion, TLA-assisted remodeling of the chromatic neighborhood plays a major role in enhancing the inclusion dynamics. (See Supplemental Material [27] for interpretations of FIG. 3b for all three modes.)
_Discussion_--In summary, through numerical simulations of APM and construction and analysis of the effective model, we investigated the effect of enzymatic action-induced TLA on the inclusion dynamics in the chromatin environment. We showed that the inclusion dynamics in this complex medium comprises three modes--a fast mode dynamics within the local polymeric mesh, a slow mode dynamics associated with the polymeric reconfiguration, and a normal diffusive mode. TLA speeds up the inclusion dynamics by significantly facilitating chromatin remodeling. This finding is in line with Ref. [23] where the authors emphasized on the importance of ATP-dependent chromatin remodeling to explain their experimental observation. In Ref. [24], an effective model combining the fast and the normal diffusive modes (described as inclusion dynamics within a corral and the translocation of the corral due to chromatin diffusion, respectively) has been proposed to explain the inclusion dynamics. Our results further insist upon the importance of the slow mode associated with chromatin remodeling to fully understand the TLA-dependency of the inclusion dynamics (schematics in FIG. 3b-bottom inset). In Ref. [25], significant slowing down of transcription compartments' dynamics upon depression of temperature from \(37^{\circ}\) C to \(25^{\circ}\) C has been attributed to temperature-dependent active processes. The chromatin remodeling-associated dynamical mode reported here can be responsible for that experimental observation.
Our study may further predict that the effect of TLA can be significantly influenced by the inclusion size; for example, small inclusions can go through the polymeric mesh in chromatin, so their dynamics may not be affected by the activity; on the other hand, larger inclusions have to push away the surrounding polymers to travel in chromatin environment. So their dynamics may be highly influenced by the activity [32]. The subnuclear space is indeed full of SNCs of different sizes. Hence, before closing this letter, we briefly check this idea by investigating the inclusion-size dependency of the presented TLA effect.
We simulate the cases for \(d_{I}\in[0.2\,\ell,1.2\,\ell]\) and analyze the inclusion dynamics. The MSD profiles for all the \(d_{I}\) investigated are qualitatively similar to what we reported above for \(d_{I}=0.4\,\ell\) (FIG. 4a). Also, the crossover between the early-time diffusion and the intermediate subdiffusion is delayed by \(\Lambda\) for all the \(d_{I}\gtrsim 0.4\,\ell\).
Next, we investigate the FPT statistics of the inclusion. We note that the effect of TLA is insignificant for the smallest \(d_{I}\); however, it becomes significant for larger inclusions (FIG. 4b). For the largest \(d_{I}\) investigated, for some \(\Lambda>0\), the MFPT is reduced to almost half its value without TLA. Thus, the quantitative influence of TLA depends on the inclusion size.
_Acknowledgement_--R.D. acknowledges useful discussions he had with Raphael Voituriez and David Weitz. R.D. and T.H. appreciate Yuting Lou and Akinori Miyamoto for valuable discussions. This research was supported by Seed fund of Mechanobiology Institute (to J.P., T.H.) and Singapore Ministry of Education Tier 3 grant, MOET32020-0001
(to G.V.S., J.P., T.H.) and JSPS KAKENHI No. JP18H05529 and JP21H05759 from MEXT, Japan (to T.S.).
|
2304.05608 | Quantum de Sitter geometry | Quantum de Sitter geometry is discussed using elementary field operator
algebras in Krein space quantization from an observer-independent point of
view, {\it i.e.} ambient space formalism. In quantum geometry, the conformal
sector of the metric becomes a dynamical degree of freedom, which can be
written in terms of a massless minimally coupled scalar field. The elementary
fields necessary for the construction of quantum geometry are introduced and
classified. A complete Krein-Fock space structure for elementary fields is
presented using field operator algebras. We conclude that since quantum de
Sitter geometry can be constructed by elementary field operators, the geometry
quantum state is immersed in the Krein-Fock space and evolves in it. The total
number of accessible quantum states in the universe is chosen as a parameter of
quantum state evolution, which has a relationship with the universe's entropy.
Inspired by the Wheeler-DeWitt constraint equation in cosmology, the evolution
equation of the geometry quantum state is formulated in terms of the Lagrangian
density of interaction fields in ambient space formalism. | Mohammad Vahid Takook | 2023-04-12T04:52:29Z | http://arxiv.org/abs/2304.05608v2 | # Quantum de Sitter geometry
###### Abstract.
Quantum de Sitter geometry is discussed using elementary field operator algebras and Krein space quantization from an observer-independent point of view. In the conformal sector of metric, the massless minimally coupled scalar field appears as part of the geometrical fields. The elementary fields necessary for quantum geometry are introduced and classified. Krein-Fock space structure of elementary fields is presented using field operator algebras. The geometric fields can be constructed by elementary fields, and this leads us to conclude that the quantum state of de Sitter geometry is embedded and evolves in Krein-Fock space. The total number of accessible quantum states in the universe is chosen as a parameter of quantum state evolution, which has a relationship with the universe's entropy. Inspired by the **W**heeler-DeWitt constraint equation in cosmology, the evolution equation of the geometry quantum state is formulated in terms of the Lagrangian density of interaction fields in ambient space formalism.
## 1. Introduction
In quantum gravity, renormalizability and the construction of a complete space of quantum states are two of the most challenging problems. These problems will be addressed in this paper. Recently, Morris discussed that full quantum gravity may be perturbatively renormalizable in terms of Newton's constant, but non-perturbative in \(\hbar\)[1]. Morris's interesting idea is to use the renormalization group properties of the conformal sector of gravity. It is well known that in quantum theory, the conformal sector of the spacetime metric becomes a dynamical degree of freedom as a result of the trace anomaly [2, 3]. The metric-compatible condition is no longer valid and the simplest chosen geometry in this situation is Weyl or conformal geometry. Weyl geometry can be described with the tensor metric \(g_{\mu\nu}\) and its conformal sector, which can be expressed as a scalar field [4].
In the Landau gauge of the gravitational field in de Sitter (dS) space, the conformal sector is described by a massless minimally coupled (mmc) scalar field \(\phi_{m}\)[5]. Its quantization with positive norm states breaks the dS invariant [6]. For its covariant quantization, Krein space quantization is needed [7]. Using the interaction between the gluon field and the conformal sector of the metric in Krein space quantization, the axiomatic dS quantum Yang-Mills theory with color confinement and the mass gap was recently constructed [8, 9]. We showed that the mmc scalar field can be considered as a gauge potential and the dS metric field and its conformal sector are not elementary fields a la Wigner sense [10]. However, they can be written in terms of elementary fields, in which the mmc scalar field plays a central role. We presented two different perspectives on quantum geometry, namely the classical and quantum state perspectives. The first is observer-dependent and the second is observer-independent. We discussed that it is essential to use an observer-independent formalism when considering quantum geometry. Then, we must use the algebraic method in the ambient space formalism for studying quantum geometry and define the quantum state of geometry [9].
In recent years, some authors have also discussed the algebraic approach to quantum gravity. This approach takes into account an algebra of observables, Hilbert space structure, and geometry quantum state [11, 12]. By using the algebraic method, in the previous paper the complete Hilbert-Fock space was constructed for the massive elementary scalar field in dS ambient space formalism [13]. Here we generalize it to construct a Hilbert-Fock space structure for any spin fields in subsection 4.1. This space is a complete space under the action of all elementary field operators in dS space except linear gravity and the mmc scalar field. To obtain a complete space for these two fields, we need Krein space quantization, which is discussed in subsection 4.2. We know that the QFT in Krein space quantization combined with lightcone fluctuation is renormalizable [14]. The two problems of renormalizability and constructing the complete space of states for quantum dS geometry can be solved using Krein space quantization and ambient space formalism.
In the next section, we briefly review the necessary notions of general relativity and QFT for our discussion. All possible fundamental fields necessary for quantum geometry are introduced and classified in Section 3. In section 4.3, Krein-Fock space as a complete space for quantum geometry is presented. The quantum state of geometry \(|\mathfrak{F}\rangle\) is considered in section 5, which can be formally written in terms of orthonormal bases of Krein-Fock space. It is immersed and evolves in the Krein space \(\mathfrak{K}\) instead of the Hilbert space \(\mathcal{H}\). Quantum state evolution is characterized by the total number of accessible quantum states in the universe, which has a relationship with the total entropy of the universe. In Section 5, using the Wheeler-DeWitt equation, the constraint equation for the quantum state of geometry is formulated in terms of the Lagrangian density of interaction fields.
## 2. Basic notions
Spacetime structure and observation are challenging concepts in quantum theory. Riemannian geometry is usually discussed in general relativity. In Riemannian geometry, spacetime can be described by the metric \(g_{\mu\nu}(\vec{x},t)\) and curved spacetime can be visualized as a 4-dimensional hypersurface immersed in a flat spacetime of dimensions greater than 4. Although the 4-dimensional classical spacetime hypersurface is unique and observer-independent, the choice of metric \(g_{\mu\nu}\) is completely observer-dependent, which is a manifestation of the general relativity principle, _all observers are equivalent_. Spacetime hypersurfaces are no longer unique in quantum geometry. In the classical perspective, quantum spacetime is described by a sum of different spacetime hypersurfaces [10, 15]. However, in the quantum perspective, quantum spacetime is modeled by a quantum state \(|\mathfrak{F}\rangle\), which is presented in Section 4.3.
In QFT, the quantum field can be described by a quantum state vector \(|\Psi(\nu,n)\rangle\), where \(\nu\) and \(n\) are the sets of continuous and discrete quantum numbers, respectively. They label the eigenvectors of the set of commuting operators of the physical system and determine the Hilbert space; for more details, see [13]. Although the particle and spinor-tensor fields, \(\Phi(\vec{x},t)\), are immersed in a spacetime manifold \(M\), the quantum state vector is immersed in a Hilbert space \(\mathcal{H}\). The field operator \(\Phi(\vec{x},t)\) plays a significant role in the connection between these two different spaces, the spacetime manifold \(M\) and the Hilbert space \(\mathcal{H}\): on the one hand, it is immersed in spacetime, and on the other hand, it acts on the Hilbert space, which is defined at any point of a fixed classical spacetime background \(M\) (of course, in the distribution sense). The Hilbert space can be thought of as the "fiber" of a bundle over the spacetime manifold, where each point of the manifold corresponds to a different fiber, \(\mathcal{H}\times M\). This bundle is typically referred to as a "Fock space bundle". For a better understanding of this idea, see [10] and noncommutative geometry [17]. The Wightman two-point function, \(\mathcal{W}(x,x^{\prime})=\langle\Omega|\Phi(x)\Phi(x^{\prime})|\Omega\rangle\), where \(|\Omega\rangle\) is the vacuum state, provides a correlation function between two different points in spacetime and their corresponding Hilbert spaces. Historically, time played a central role in quantum theory, since the time parameter describes the evolution of the quantum state. Time, however, is an observer-dependent quantity in special and general relativity, and for quantum geometry to be observer-independent, the time evolution of quantum states must be replaced by another concept, which is discussed in Section 5.
It is useful to recall that, in contrast to all massive and massless elementary fields, the mmc scalar field disappears at the null curvature limit. Its quantization with positive norm states also breaks the dS invariant [6], and its behavior is very similar to that of the gauge fields [10]. As all of these properties are shared by the curved-spacetime geometrical fields, the mmc scalar field may be considered as a part of the spacetime geometry. This idea was previously applied to explain the confinement and mass gap problems in dS quantum Yang-Mills theory, by using the interaction between the vector field and the scalar gauge field as a part of the spacetime gauge potential [0].
## 3. Elementary fields
In the background field method, \(g_{\mu\nu}=g_{\mu\nu}^{bg}+h_{\mu\nu}\), the linear gravity \(h_{\mu\nu}\) propagates on the fixed background \(g_{\mu\nu}^{bg}\). The tensor field \(h_{\mu\nu}\) can be divided into two parts: the traceless-divergenceless part \(h_{\mu\nu}^{T}\), which can be associated with an elementary massless spin-2 field, and the pure trace part, \(h_{\mu\nu}^{P}=\frac{1}{4}g_{\mu\nu}^{bg}\phi_{m}\):
\[g_{\mu\nu}=g_{\mu\nu}^{bg}+h_{\mu\nu}^{T}+h_{\mu\nu}^{P}=\left(1+\frac{1}{4} \phi_{m}\right)g_{\mu\nu}^{bg}+h_{\mu\nu}^{T}\equiv e^{\sigma(x)}g_{\mu\nu}^{bg }+h_{\mu\nu}^{T}\,. \tag{3.1}\]
The pure trace part is also called the conformal sector of the metric, which becomes a dynamical variable in quantum theory [2, 3]. Quantum geometry amounts to the quantization of the tensor field \(g_{\mu\nu}\), or equivalently of \(g_{\mu\nu}^{bg}\), \(h_{\mu\nu}^{T}\), \(h_{\mu\nu}^{P}\), and \(\phi_{m}\). In quantum geometry, the choice of the curved metric background \(g_{\mu\nu}^{bg}\) is not critical, since we have simultaneous fluctuations in \(g_{\mu\nu}^{bg}\) and \(h_{\mu\nu}\), and it can also be considered as an integral over all possible spacetime hypersurfaces [10, 15]. For a covariant quantization of \(h_{\mu\nu}\), the background must be curved [18], and from the cosmological experimental data, the dS metric \(g_{\mu\nu}^{dS}\) is selected as the curved spacetime background.
For an observer-independent point of view, we use the dS ambient space formalism [8, 10]. In this formalism, the dS spacetime can be identified with the 4-dimensional hyperboloid embedded in the 5-dimensional Minkowski spacetime as:
\[M_{H}=\left\{x^{\alpha}\equiv x\in\mathrm{I\!R}^{5}|\ \ x\cdot x=\eta_{\alpha \beta}x^{\alpha}x^{\beta}=-H^{-2}\right\}\,\ \ \ \alpha,\beta=0,1,2,3,4\,, \tag{3.2}\]
with \(\eta_{\alpha\beta}=\mathrm{diag}(1,-1,-1,-1,-1)\), and \(H\) is a constant parameter analogous to the Hubble constant. The metric is:
\[\mathrm{d}s^{2}=g_{\mu\nu}^{dS}\mathrm{d}X^{\mu}\mathrm{d}X^{\nu}=\left.\theta_{\alpha\beta}\mathrm{d}x^{\alpha}\mathrm{d}x^{\beta}\right|_{x\cdot x=-H^{-2}}\,, \tag{3.3}\]
where the \(X^{\mu}\)'s (\(\mu=0,1,2,3\)) form a set of four intrinsic spacetime coordinates on the dS hyperboloid, and the \(x^{\alpha}\)'s are the ambient space coordinates. In these coordinates, the transverse projector on the dS hyperboloid, \(\theta_{\alpha\beta}=\eta_{\alpha\beta}+H^{2}x_{\alpha}x_{\beta}\), plays the same role as the dS metric
\(g^{dS}_{\mu\nu}\). In this formalism, quantum geometry is described by the quantization of the tensor fields \(\theta_{\alpha\beta},\mathcal{R}^{T}_{\alpha\beta}\) and \(\mathcal{R}^{P}_{\alpha\beta}=\frac{1}{4}\theta_{\alpha\beta}\phi_{m}\left(x \cdot\mathcal{R}^{T}=0=x\cdot\mathcal{R}^{P}\right)\).
Although the tensor field \(\mathcal{R}^{T}_{\alpha\beta}\) and the scalar field \(\phi_{m}\) are elementary fields, the background metric \(\theta_{\alpha\beta}\) and the conformal sector of the metric, \(\mathcal{R}^{P}_{\alpha\beta}=\frac{1}{4}\phi_{m}\theta_{\alpha\beta}\), are not elementary fields in the Wigner sense, since [10]:
\[\theta_{\alpha}^{\alpha}=4,\ \ \mathcal{R}^{P\alpha}_{\alpha}=\phi_{m}\,,\ \ \nabla^{\top}\cdot\mathcal{R}^{P}_{\beta}=\frac{1}{4}\partial^{\top}_{\beta}\phi_{m}\,. \tag{3.4}\]
The transverse-covariant derivative acting on a tensor field of rank-2 is defined by:
\[\nabla^{\top}_{\alpha}K_{\beta\gamma}\equiv\partial^{\top}_{\alpha}K_{\beta \gamma}-H^{2}\left(x_{\beta}K_{\alpha\gamma}+x_{\gamma}K_{\beta\alpha}\right)\, \tag{3.5}\]
where \(\partial^{\top}_{\beta}=\theta_{\alpha\beta}\partial^{\alpha}=\partial_{\beta}+H^{2}x_{\beta}x\cdot\partial\) is the tangential derivative. The tensor fields \(\mathcal{R}^{P}_{\alpha\beta}\) and \(\theta_{\alpha\beta}\) can be written in terms of elementary fields: the massive rank-2 symmetric tensor field \(K^{v}_{\alpha\beta}\) (\(v^{2}=\frac{15}{4}\)), the mmc scalar gauge field \(\phi_{m}\), and the massless vector field \(A_{\alpha}=\partial^{\top}_{\alpha}\phi_{m}\) [10].
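As a quick numerical sanity check (a minimal sketch, with \(H=1\) and the hyperboloid point chosen arbitrarily), the algebraic properties of the transverse projector (transversality, idempotency, and the trace identity \(\theta_{\alpha}^{\alpha}=4\) of (3.4)) can be verified directly:

```python
import numpy as np

H = 1.0
eta = np.diag([1.0, -1.0, -1.0, -1.0, -1.0])                 # 5D ambient metric

# an arbitrary point on the dS hyperboloid x.x = -1/H^2
n = np.array([0.3, 0.5, 0.4, np.sqrt(1 - 0.3**2 - 0.5**2 - 0.4**2)])  # unit spatial direction
t = 0.7
x_up = np.concatenate(([np.sinh(t)], np.cosh(t) * n)) / H    # x^alpha
x_dn = eta @ x_up                                            # x_alpha
assert np.isclose(x_up @ x_dn, -1.0 / H**2)                  # hyperboloid constraint (3.2)

theta = eta + H**2 * np.outer(x_dn, x_dn)                    # theta_{alpha beta}
theta_mixed = np.linalg.inv(eta) @ theta                     # theta^alpha_beta

print(np.allclose(theta @ x_up, 0.0))                        # transversality: theta . x = 0
print(np.allclose(theta_mixed @ theta_mixed, theta_mixed))   # projector property
print(np.isclose(np.trace(theta_mixed), 4.0))                # trace identity of (3.4)
```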
The tensor field \(K^{v}_{\alpha\beta}\) is discussed as massive gravity in the literature and was studied in a previous paper [20]. The massless vector field quantization was presented in [21]. The constant pure trace part evokes the famous zero-mode problem in linear quantum gravity and in the quantization of the mmc scalar field. The classical structure of our universe may be constructed from the following fundamental fields, which can be divided into three categories:
* A: Massive elementary fields with different spins, which transform by the unitary irreducible representation (UIR) of the principal series of the dS group.
* B: Massless elementary fields with the spin \(s\leq 2\), which includes the gravitational waves \(\mathcal{R}^{T}_{\alpha\beta}\), and mmc scalar fields \(\phi_{m}\). They transform by the indecomposable representation of the dS group where the central part is the discrete series representation of the dS group. They play an important role in defining the interaction between different fields as the gauge potential [10].
* C: The spacetime geometrical fields \(\theta_{\alpha\beta}\) and conformal sector of the metric \(\mathcal{R}^{P}_{\alpha\beta}\), which are not elementary fields, but they can be written in terms of the elementary fields of categories A and B. Although in classical field theory, they preserve the dS invariant, their quantization breaks the dS spacetime symmetry [10].
The quantization of the elementary massive and massless fields with spin \(s\leq 2\) in the dS ambient space formalism has previously been constructed for the principal, complementary, and discrete series representations of the dS group; for a review, see [19]. The mmc scalar field and the linear gravity \(\mathcal{R}^{T}_{\alpha\beta}\) can be quantized in a covariant way in Krein space quantization [5, 7]. We know that QFT in curved spacetime suffers from renormalizability problems, and to solve this problem Krein space quantization must be used; see [14] and the references therein. Due to the quantum fluctuation of the tensor field \(\mathcal{R}^{P}_{\alpha\beta}\), the dS invariant is broken [10], which is reminiscent of the quantum instability of dS spacetime [22].
## 4. Quantum space of states
Before discussing the quantum geometry space of states, the Hilbert-Fock and Krein-Fock space constructions are briefly introduced using the dS group algebra and the field operator algebra. We argue that the Krein-Fock space is a complete space for all elementary fields and for quantum geometry.
### Hilbert-Fock space
From the dS group algebra, one can construct a one-particle Hilbert space \(\mathcal{H}^{(1)}\) for the principal, complementary, and discrete series UIRs of the dS group [23, 24, 25]:
\[[J_{a},J_{b}]=f^{c}_{ab}J_{c} \implies |v;j_{1},j_{2};m_{j_{1}},m_{j_{2}}\rangle\in\mathcal{H}^{(1)} \equiv\bigoplus_{v;j_{1},j_{2}}\mathcal{H}^{v;j_{1},j_{2}}\,, \tag{4.1}\]
where the \(J_{a}\) are the generators of the de Sitter group \(\big{(}a,b=1,2,\cdots,10\big{)}\), \(f^{c}_{ab}\) the structure constants, \(j_{1}\) and \(j_{2}\) are two numbers, labeling the UIR's of the maximal compact subgroup \(\mathrm{SO}(4)\), picked in the sequence \(0,\frac{1}{2},1,\cdots\), such that \(-j_{1}\leq m_{j_{1}}\leq j_{1}\). The \(v\)'s are sets of parameters numbering the columns and rows of the (generalized) matrices, assuming continuous or discrete values [23].
The UIRs of the principal and complementary series are classified by the two parameters \(j\) and \(v\), where \(v\) is continuous and the sum is replaced with an integral [13, 25, 20]:
\[\mathcal{H}^{(1)}\equiv\bigoplus_{j}\int_{0}^{\infty}\mathrm{d}v\ \rho(v)\ \mathcal{H}^{v;j}\equiv\bigoplus_{j}\mathcal{H}^{j}\,, \tag{4.2}\]
where \(\rho(v)\) is a positive weight in the dS background [27]. \(v\) refers to the mass parameter and \(j\) is equivalent to the spin; they determine the eigenvalues of the Casimir operators of the dS group. \(\mathcal{H}^{j}\equiv\int_{0}^{\infty}\mathrm{d}v\ \rho(v)\ \mathcal{H}^{v;j}\) is the one-particle Hilbert space for a specific spin \(j\). A quantum state in this Hilbert space may be represented by \(|v,j;L\rangle\in\mathcal{H}^{(1)}\), where \(L\) is a set of quantum numbers associated with the maximal set of operators commuting with the Casimir operators, which represent the dS enveloping algebra [13, 26]. It is critical to note that the Hilbert space \(\mathcal{H}^{v;j}\) is not a complete space under the action of the dS group generators \(J_{a}\), but the Hilbert space \(\mathcal{H}^{(1)}\) is a complete space [13]. The notation \(\mathcal{H}^{(1)}\) indicates the space of "one-particle states" (first quantization). Since the dS group generators and field operators do not modify the spin of a state, for a fixed spin \(j\) the Hilbert space \(\mathcal{H}^{j}\) is also a complete space. In this study, we do not consider supersymmetry and supergravity; otherwise, the sum over the index \(j\) would have to be taken into account to obtain a complete space.
There are different realizations for the bases of the one-particle Hilbert space \(\mathcal{H}^{(1)}\) where some of them are presented for the scalar field (\(j=0\)) in [13]. Formally, we define a UIR of the de Sitter group by \(U^{(v;j)}(g(\alpha_{a}))\equiv U(J_{b}\,,\alpha_{a},v,j)\), which is a regular function of dS group generators and acts on the Hilbert space as:
\[|\psi\rangle\in\mathcal{H}^{(1)}\Longrightarrow U(J_{b}\,,\alpha_{a},v,j)| \psi\rangle=|\psi^{\prime};\alpha_{a},v,j\rangle\in\mathcal{H}^{(1)}\,, \tag{4.3}\]
where the \(10\)\(\alpha_{a}\)'s are the group parameters. These parameters make up a \(10\)-dimensional topological space \(\mathcal{T}(\alpha_{a})\). By using the expressions of the matrix elements (\(\sim\) coefficients) of this representation,
\[|\psi\rangle\,,|\psi^{\prime}\rangle\in\mathcal{H}^{(1)}\,,\quad\langle\psi|U^ {(v;j)}(g(\alpha_{a}))|\psi^{\prime}\rangle\,, \tag{4.4}\]
one can construct a space of square-integrable functions over some subspaces \(\mathcal{S}\) of the topological space \(\mathcal{T}\), _i.e._\(L^{2}(\mathcal{S})\) where \(\mathcal{S}\subset\mathcal{T}(\alpha_{a})\). Takahashi discusses different subspaces and defines the relations between some of them by the Plancherel formula [25]. The classical spinor-tensor field can be identified with some coefficients of the UIR of the dS group in dS ambient space coordinates under certain conditions: \(\Phi(x)\approx\langle\psi|U^{(v;j)}(g(\alpha_{a}))|\psi^{\prime}\rangle\), where \(x\in M_{H}\).
In QFT, these classical fields are promoted to operators, which act on a space with a Fock structure, _i.e._ like harmonic oscillators. Well-defined operators are defined in a tempered-distribution sense on an open subset \(\mathbb{G}\) of spacetime [27]:
\[\Phi(f)=\int\mathrm{d}\mu(x)\,f(x)\Phi(x)\,, \tag{4.5}\]
where \(f\) is a test function and \(\mathrm{d}\mu(x)\) is the dS-invariant measure element. As usual in a Fock structure, the field operator can be written in terms of its creation part, \(\Phi^{+}\), and its annihilation part, \(\Phi^{-}\): \(\Phi(f)=\Phi^{-}(f)+\Phi^{+}(f)\), where \(\Phi^{+}(f)\) creates a state and \(\Phi^{-}(f)\) annihilates a state in the Fock space. By defining a "number" operator \(N(f,g)\equiv\Phi^{+}(f)\Phi^{-}(g)\), one can prove the following algebra, which results in the construction of the Hilbert-Fock space [13]:
\[\left\{\begin{array}{ll}[\Phi^{-}(f),\Phi^{+}(g)]&=\mathcal{W}(f,g)\mathbb{1 }\,,\\ [N(f,g),\Phi^{+}(k)]&=\mathcal{W}(g,k)\Phi^{+}(f)\,,\\ [N(f,g),\Phi^{-}(k)]&=-\mathcal{W}(k,f)\Phi^{-}(g)\,,\end{array}\right. \tag{4.6}\]
where \(\mathcal{W}(f,g)=\int\mathrm{d}\mu(x)\mathrm{d}\mu(x^{\prime})f(x)g(x^{\prime })\mathcal{W}(x,x^{\prime})\), and here \(\Phi\) is the tensor field. For the fermion field, the anti-commutation relation must be used. \(\mathcal{W}(x,x^{\prime})=\langle\Omega|\Phi(x)\Phi(x^{\prime})|\Omega\rangle\) is the Wightman two-point function and \(|\Omega\rangle\) is the vacuum state, which can be fixed in the null curvature limit [27].
Now using the infinite-dimensional closed local algebra (4.6), one can construct the Hilbert-Fock space in a distributional sense on an open subset \(\mathbb{G}\) of the \(\mathrm{dS}\) spacetime [27, 16]:
\[\mathcal{H}\equiv\mathcal{F}(\mathcal{H}^{(1)})=\left\{\mathbb{C},\mathcal{H}^{(1)},\mathcal{H}^{(2)},\cdots,\mathcal{H}^{(n)},\cdots\right\}\equiv\bigoplus_{n=0}^{\infty}\mathcal{H}^{(n)}\equiv e^{\mathcal{H}^{(1)}}\,, \tag{4.7}\]
where \(\mathbb{C}\) is the vacuum sector, \(\mathcal{H}^{(1)}\) the one-particle states, and \(\mathcal{H}^{(n)}\) the \(n\)-particle states. The \(n\)-particle states are constructed by tensor products of one-particle states (for bosons, a symmetric product, \(\mathcal{H}^{(2)}=\mathcal{S}\,\mathcal{H}^{(1)}\otimes\mathcal{H}^{(1)}\), and for fermions an anti-symmetric product, \(\mathcal{H}^{(2)}=\mathcal{A}\,\mathcal{H}^{(1)}\otimes\mathcal{H}^{(1)}\)). We use the phrase Hilbert-Fock space to emphasize that the structure of our QFT Hilbert space has the form of equation (4.7). An overview of axiomatic quantum fields, observable algebraic nets, and the algebraic setting of second quantization can be found in [10]. Including the interaction fields does not add any supplementary operators but reduces the number of commuting operators. We then have a new algebra, resulting in a new Hilbert space \(\mathcal{H}_{int}\). This space can be immersed in the original space, which means \(\mathcal{H}_{int}\subset\mathcal{H}\). Therefore one can use the Hilbert space (4.7) for the interaction fields; for the scalar field, see [13].
### Krein-Fock space
The above Hilbert-Fock space structure cannot be used for the mmc scalar field operator, and consequently not for \(\mathcal{R}_{\alpha\beta}^{T}\), \(\theta_{\alpha\beta}\), and \(\mathcal{R}_{\alpha\beta}^{P}\). The one-particle Hilbert space of the mmc scalar field is not a complete space under the action of the dS group generators \(J_{a}\): their action results in negative norm states [7]. This problem appears as a breaking of the dS invariant and as an infrared divergence in the two-point function \(\mathcal{W}(x,x^{\prime})\) [6]. The field operator algebra (4.6) then breaks the dS invariant, and a dS-invariant Hilbert-Fock space structure cannot be constructed. That means the action of the field operator on some states results in states outside the Hilbert space, _i.e._ states with negative norm. These states are necessary to obtain a complete space.
This problem was solved in Krein space quantization, where the space of states is a direct sum of a Hilbert space and its anti-Hilbert space [7]:
\[\mathcal{K}^{(1)}\equiv\mathcal{H}^{(1)}\oplus[\mathcal{H}^{(1)}]^{*}\,. \tag{4.8}\]
In this case, the two-point function is the imaginary part of the two-point function of the positive mode solutions [28, 29]:
\[\mathcal{W}_{k}(x,x^{\prime})=\mathcal{W}(x,x^{\prime})+\mathcal{W}_{n}(x,x^{\prime})=\mathrm{i}\,\mathrm{Im}\,\mathcal{W}(x,x^{\prime}), \tag{4.9}\]
which is dS invariant. \(\mathcal{W}_{n}(x,x^{\prime})=-\mathcal{W}^{*}(x,x^{\prime})\) is the two-point function of the negative norm states. If we replace the two-point function in the field operator algebra (4.6) with the Krein two-point function \(\mathcal{W}_{k}(x,x^{\prime})\), we can construct the following dS-invariant Krein-Fock space structure:
\[\mathcal{F}(\mathcal{K}^{(1)})=\left\{\mathbb{C},\mathcal{K}^{(1)},\mathcal{K}^{(2)},\cdots,\mathcal{K}^{(n)},\cdots\right\}\equiv\bigoplus_{n=0}^{\infty}\mathcal{K}^{(n)}\equiv e^{\mathcal{K}^{(1)}}\,. \tag{4.10}\]
It is pertinent to note that the Krein-Fock space is a complete space for all massive and massless elementary field operators in dS spacetime. The Krein space can be considered the "fiber" of a bundle over the dS base manifold, \(\mathcal{K}\times X_{H}\). In this complete space, we can define (in the distribution sense) the identity operator formally as \(\mathbb{1}\equiv\sum_{\mathcal{M}}|\mathcal{M}\rangle\langle\mathcal{M}|\).
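The structure of the Krein two-point function in (4.9) can be illustrated with a toy numerical example (the values below are arbitrary stand-ins, not the actual dS two-point function): for any complex value of \(\mathcal{W}\) and the corresponding negative-norm contribution \(\mathcal{W}_{n}=-\mathcal{W}^{*}\), their sum is purely imaginary, so only the imaginary part of \(\mathcal{W}\), the piece that remains dS invariant, survives (up to the overall normalization convention of (4.9)).

```python
import numpy as np

# arbitrary stand-in values for W(x, x'); not the actual dS two-point function
W = np.array([1.5 - 0.7j, -0.2 + 2.1j, 3.0 + 0.4j])
W_n = -np.conj(W)                    # negative-norm contribution, W_n = -W*

W_k = W + W_n                        # Krein combination: real parts cancel pairwise
print(np.allclose(W_k.real, 0.0))    # True: the sum is purely imaginary
print(np.allclose(W_k, 2j * W.imag)) # proportional to i Im W
```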
### Quantum geometry space of states
In quantum geometry, the biggest challenge appears in the quantization of \(\theta_{\alpha\beta}\). Its quantum fluctuation breaks the dS invariant, and the concept of spacelike separated points cannot be defined. Therefore one cannot define the field operator algebra needed for the construction of the Krein-Fock space structure. This problem has a long history, and we do not discuss it here; see [15]. We ignore this problem for now, since the Krein-Fock space (4.10) is a complete space for all elementary fields in dS space, and the geometrical fields \(\theta_{\alpha\beta}\) and \(\mathcal{R}^{P}_{\alpha\beta}\) can be written in terms of elementary fields. Therefore we can use the Krein-Fock space (4.10) for the quantum field operators \(\theta_{\alpha\beta}\) and \(\mathcal{R}^{P}_{\alpha\beta}\). We can assume that quantum geometry is described by a quantum state \(|\mathfrak{G}\rangle\), which is immersed in the Krein-Fock space (4.10), \(|\mathfrak{G}\rangle\in\mathcal{F}(\mathcal{K}^{(1)})\). It can be formally written as a superposition over the Krein-Fock space bases in the following form:
\[|\mathfrak{G}\rangle=\sum_{\mathcal{M}}c_{\mathcal{M}}(\mathfrak{G})\,|\mathcal{M}\rangle\,, \tag{4.11}\]
where the action of the field operators \(\theta_{\alpha\beta}\) on it results in \(|\mathfrak{G}^{\prime}\rangle=\theta_{\alpha\beta}|\mathfrak{G}\rangle\in\mathcal{F}(\mathcal{K}^{(1)})\).
The Krein space in quantum geometry plays the same role as the whole dS spacetime hyperboloid in the classical theory. The Hilbert space \(\mathcal{H}\) may be considered the observable part of this space for an observer; when we use the Hilbert space, we have positive norms. To start, let us review a few facts about dS spacetime, in which particles and fields are immersed and evolve. The basis vectors of spacetime have negative and positive norms, the spacetime interval being given by the metric signature \((1,-1,-1,-1)\). When we move from Euclidean geometry to Minkowskian geometry, negative norm vectors appear; however, the meaning of this norm is completely different from that of the Euclidean norm. There are three types of vectors in spacetime based on their norms: lightlike, spacelike, and timelike vectors. Some regions of the dS hyperboloid are not observable to an observer due to spacetime curvature and the event horizon. However, these regions are necessary to construct a covariant formalism of the physical system.
Similarly, when discussing quantum geometry, we need to use quantum states with negative norm for a covariant quantization. As a result, the Krein-Fock space is a complete space under the action of the geometrical field operators. But what is the physical meaning of these negative norm states in quantum geometry? In classical dS geometry, there are spacetime regions that cannot be observed by a given observer. Similarly, in dS quantum geometry, negative norm states are needed to obtain a complete space, but they cannot be observed by a local observer. By imposing the observer reality principle [14], they can be eliminated from the physical space of the observer. It may be argued that the absence of interaction beyond the event horizon is analogous to the absence of interaction between negative and positive norm states in Krein space for a local observer.
At the null curvature limit, negative norm states have negative energy [20]. For a free particle state in flat spacetime, they have no physical interpretation and can be considered auxiliary states for the local observer. If we assume that the gravity state carries negative energy and the matter-radiation state carries positive energy, with their sum being zero, this hypothesis is compatible with the idea of the creation of everything from nothing in cosmology.
Different quantum gravity models are constructed in Hilbert space rather than Krein space. One of them, which is very close to our model, is noncommutative geometry [17]; some similarities and differences were discussed in a previous paper [10]. Another is higher-dimensional spacetime \(M_{d}\) with \(d>4\). In this case, the field operator algebra (4.6) can be defined with respect to spacelike separated points in \(M_{d}\), which can be imagined as a fixed background space. The quantum fluctuation of \(X_{4}\) may then be considered as a sum over different 4-dimensional manifolds in \(M_{d}\). However, higher-dimensional spacetimes are used only in some theoretical models.
## 5. Quantum state evolution
As time is an observer-dependent quantity, time evolution does not make sense in quantum geometry from an observer-independent point of view. We have seen that the Krein-Fock space is constructed from the free field operator algebra, which describes the kinematics of the physical system. Since all matter-radiation fields and geometrical fields are entangled, and a change in one has consequences for the other, the dynamics of a physical system may be extracted from the algebra of the interaction fields. Here, for simplicity, we use the Lagrangian density of the interaction fields to define the evolution equation of the geometry quantum state.
We assume that the universe's evolution begins from the vacuum state, _i.e._ a quantum state without any quanta of the elementary and geometrical fields, \(|\mathcal{F}_{i}\rangle\equiv|\Omega\rangle\). Our universe is also assumed to be an isolated system. Under these assumptions, the universe began with zero entropy. Due to quantum vacuum fluctuations of all elementary fields, and to the interaction between some of them in the creation process, the universe leaves the vacuum state and enters the inflationary phase. This means its entropy increases, because isolated systems spontaneously evolve toward thermodynamic equilibrium, which is a state of maximum entropy. In the inflationary phase, which is described by dS spacetime, we have an infinite-dimensional Hilbert space. However, due to the compact subgroup SO(4) of the dS group and the uncertainty principle, the total number of one-particle quantum states becomes finite [30]. The hypothesis of finite energy results in the finiteness of the total number of quantum states \(\mathcal{N}\) in Fock space, which results in a finite entropy for the universe [30].
Since the universe is an isolated system and its entropy is increasing, \(\mathcal{N}\) increases with the evolution of the universe. Therefore the total number of accessible quantum states in the universe, \(\mathcal{N}\), may play the role of the time parameter and is used as the parameter of quantum state evolution. We assume that the evolution of the quantum state can be written
by an operator \(U\) as follows:
\[|\mathcal{F}\,;\mathcal{N}\rangle\in\mathcal{F}(\mathcal{K}^{(1)})\Longrightarrow U( \mathcal{N}^{\prime},\mathcal{N})|\mathcal{F}\,;\mathcal{N}\rangle\equiv| \mathcal{F}^{\prime};\mathcal{N}^{\prime}\rangle\in\mathcal{F}(\mathcal{K}^{(1) })\,, \tag{5.1}\]
which satisfies the following conditions:
\[U(\mathcal{N}_{3},\mathcal{N}_{2})U(\mathcal{N}_{2},\mathcal{N}_{1})=U( \mathcal{N}_{3},\mathcal{N}_{1})\,,\quad U(\mathcal{N},\mathcal{N})=1\,. \tag{5.2}\]
Due to the principle of increasing entropy, we always have \(\mathcal{N}_{3}\geq\mathcal{N}_{2}\geq\mathcal{N}_{1}\). For obtaining the evolution operator \(U(\mathcal{N}^{\prime},\mathcal{N})\), we need a constraint equation for the quantum state.
The quantum state of the universe is a function of the configuration of all the fundamental fields in the universe (Section 3). Previously, we obtained the classical action, or Lagrangian density, of these fields in the ambient space formalism. It can be formally written in the following form:
\[S[\Phi]=\int\mathrm{d}\mu(x)\mathcal{L}(\Phi,\nabla_{\alpha}^{\top}\Phi)=\int \mathrm{d}\mu(x)\,\left[\mathcal{L}_{f}(\Phi,\nabla_{\alpha}^{\top}\Phi)+ \mathcal{L}_{int}(\Phi,\nabla_{\alpha}^{\top}\Phi)\right]\,. \tag{5.3}\]
For the free-field Lagrangian density \(\mathcal{L}_{f}\) see [10], and for the interaction case \(\mathcal{L}_{int}\) see [8, 10]. Since in dS spacetime \(x^{0}\) plays the same role as the time variable in Minkowski space (see section 4 in [13]), we define the conjugate field variable by \(\Pi\equiv\nabla_{0}^{\top}\Phi\). The Legendre transformation of the Lagrangian density \(\mathcal{L}(\Phi,\nabla_{\alpha}^{\top}\Phi)\) with respect to the variable \(\nabla_{0}^{\top}\Phi\) can be written in the following form:
\[\mathsf{h}(\Phi,\Pi,\nabla_{i}^{\top}\Phi)=\left[\Pi\nabla_{0}^{\top}\Phi- \mathcal{L}(\Phi,\nabla_{\alpha}^{\top}\Phi)\right]\,, \tag{5.4}\]
where \(i=1,\cdots,4\). This function can be calculated explicitly in the dS ambient space formalism for the elementary fields. Its physical meaning is unclear, but at the null curvature limit it can be identified with the Hamiltonian density in Minkowski spacetime.
From this fact and inspired by the Wheeler-DeWitt equation, we define the constraint equation of geometry quantum state as follows:
\[|\mathcal{F}\,;\mathcal{N}\rangle\in\mathcal{F}(\mathcal{K}^{(1)})\Longrightarrow\mathsf{H}|\mathcal{F}\,;\mathcal{N}\rangle\equiv\left(\mathsf{H}_{f}+\mathsf{H}_{int}\right)|\mathcal{F}\,;\mathcal{N}\rangle=0\,, \tag{5.5}\]
where \(\mathsf{H}(\Phi,\Pi)\equiv\int\mathrm{d}\mu(x)\,\mathsf{h}(\Phi,\Pi,\nabla_{i}^{\top}\Phi)\). The first part is the free-field theory, which includes the dS massive gravity, the linear gravitational waves, and the mmc scalar field. The second part concerns the interaction of the various fields. Using equations (5.1) and (5.5), we obtain \(\mathsf{H}|\mathcal{F}\,;\mathcal{N}\rangle=0=U\mathsf{H}|\mathcal{F}\,;\mathcal{N}\rangle\). Therefore a simple form of \(U\) that satisfies the conditions (5.2) is:
\[U(\mathcal{N},\mathcal{N}^{\prime})\equiv e^{-\mathrm{i}\int\mathrm{d}\mu(x) \mathsf{h}(\mathcal{N}-\mathcal{N}^{\prime})}\,. \tag{5.6}\]
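As a minimal numerical check (with the integral \(\int\mathrm{d}\mu(x)\,\mathsf{h}\) replaced by a single placeholder number, purely for illustration), an operator of the form (5.6) satisfies the composition and identity conditions (5.2):

```python
import numpy as np

h_eff = 0.37   # placeholder scalar standing in for the integral of h over dS space

def U(N_to, N_from):
    """Evolution factor of the form (5.6) between state-counting parameters."""
    return np.exp(-1j * h_eff * (N_to - N_from))

N1, N2, N3 = 10, 25, 60   # N3 >= N2 >= N1, as required by increasing entropy
print(np.isclose(U(N3, N2) * U(N2, N1), U(N3, N1)))   # composition law of (5.2)
print(np.isclose(U(N1, N1), 1.0))                     # U(N, N) = 1
```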
Although the physical meaning of \(\mathsf{H}\) is unclear, it remains constant throughout the universe's evolution. Dividing it into geometrical and non-geometrical parts, \(\mathsf{H}=\mathsf{H}_{g}+\mathsf{H}_{ng}\), we have a fluctuation between these two parts during the evolution of the universe, and neither part is constant individually. It may be interpreted as an "energy" exchange between the geometrical and non-geometrical parts of our universe. While the geometry quantum state evolves in the Krein-Fock space, the fluctuation of \(\theta_{\alpha\beta}\) breaks the dS invariant. The explicit calculation of equation (5.6) is outside the scope of this paper and will be discussed elsewhere.
## 6. Conclusion
In quantum dS geometry, the Hilbert space \(\mathcal{H}\) is no longer a complete space. Instead, it is a subspace of a complete Krein space, \(\mathcal{H}\subset\mathcal{K}\), in which the requirement for positive definiteness is abandoned. Replacing Hilbert space with Krein space is essential in our quantum
geometry model. Krein space quantization permits us to construct a renormalizable QFT in curved spacetime and in quantum geometry. The ambient space formalism allows us to formulate quantum geometry from an observer-independent point of view and to visualize the many-worlds interpretation. It should be noted that although the metric quantization breaks the dS invariant, the Krein-Fock space is a complete space for quantum geometry. The dS geometry quantum state is introduced as a superposition of the Krein-Fock space bases, and its evolution is parametrized in terms of the total number of quantum states. Using the idea of the Wheeler-DeWitt constraint equation in cosmology, the evolution equation of the geometry quantum state can be written in terms of the Lagrangian density of the interaction fields.
**Acknowledgments:** The author wishes to express particular thanks to Jean Pierre Gazeau and Eric Huguet for their discussions. The author would like to thank the Collège de France, Université Paris Cité, and Laboratoire APC for their hospitality and financial support.
|
2307.04046 | Social Media Analytics in Disaster Response: A Comprehensive Review | Social media has emerged as a valuable resource for disaster management,
revolutionizing the way emergency response and recovery efforts are conducted
during natural disasters. This review paper aims to provide a comprehensive
analysis of social media analytics for disaster management. The abstract begins
by highlighting the increasing prevalence of natural disasters and the need for
effective strategies to mitigate their impact. It then emphasizes the growing
influence of social media in disaster situations, discussing its role in
disaster detection, situational awareness, and emergency communication. The
abstract explores the challenges and opportunities associated with leveraging
social media data for disaster management purposes. It examines methodologies
and techniques used in social media analytics, including data collection,
preprocessing, and analysis, with a focus on data mining and machine learning
approaches. The abstract also presents a thorough examination of case studies
and best practices that demonstrate the successful application of social media
analytics in disaster response and recovery. Ethical considerations and privacy
concerns related to the use of social media data in disaster scenarios are
addressed. The abstract concludes by identifying future research directions and
potential advancements in social media analytics for disaster management. The
review paper aims to provide practitioners and researchers with a comprehensive
understanding of the current state of social media analytics in disaster
management, while highlighting the need for continued research and innovation
in this field. | Mohammadsepehr Karimiziarani | 2023-07-08T20:49:18Z | http://arxiv.org/abs/2307.04046v1 | # Social Media Analytics in Disaster Response:
###### Abstract
Social media has emerged as a valuable resource for disaster management, revolutionizing the way emergency response and recovery efforts are conducted during natural disasters. This review paper aims to provide a comprehensive analysis of social media analytics for disaster management. The abstract begins by highlighting the increasing prevalence of natural disasters and the need for effective strategies to mitigate their impact. It then emphasizes the growing influence of social media in disaster situations, discussing its role in disaster detection, situational awareness, and emergency communication. The abstract explores the challenges and opportunities associated with leveraging social media data for disaster management purposes. It examines methodologies and techniques used in social media analytics, including data collection, preprocessing, and analysis, with a focus on data mining and machine learning approaches. The abstract also presents a thorough examination of case studies and best practices that demonstrate the successful application of social media analytics in disaster response and recovery. Ethical considerations and privacy concerns related to the use of social media data in disaster scenarios are addressed. The abstract concludes by identifying future research directions and potential advancements in social media analytics for disaster management. The review paper aims to provide practitioners and researchers with a comprehensive understanding of the current state of social media analytics in disaster management, while highlighting the need for continued research and innovation in this field.
## I Introduction
Natural disasters pose significant challenges to societies worldwide, requiring prompt and effective disaster management strategies to mitigate their impact. In recent years, the emergence of social media platforms has transformed the landscape of disaster management, offering new avenues for information sharing, communication, and situational awareness. This section provides an introduction to the topic, emphasizing the increasing prevalence of natural disasters and the pivotal role of social media in enhancing disaster response and recovery efforts.
### _Natural Disasters: Trends and Impacts_
Natural disasters, including hurricanes, earthquakes, floods, wildfires, and tsunamis, have devastating consequences on human lives, infrastructure, and the environment[1, 2, 3]. The frequency and intensity of such events have witnessed a notable rise in recent years, attributed to factors like climate change and urbanization [4, 5]. Understanding the gravity of these challenges necessitates effective disaster management strategies that leverage emerging technologies.
### _The Role of Social Media in Disaster Management_
Social media platforms, such as Twitter, Facebook, Instagram, and YouTube, have become pervasive tools for communication and information dissemination during disasters [6, 7]. These platforms facilitate real-time sharing of situational updates, emergency alerts, and resource coordination, enabling affected communities, response organizations, and individuals to collaborate and make informed decisions [8, 9].
### _Purpose of the Review Paper_
This review paper aims to provide a comprehensive analysis of social media analytics for disaster management. It seeks to examine the evolving landscape of social media usage in disaster scenarios, exploring the methodologies, techniques, and best practices employed in social media analytics. Additionally, the paper addresses ethical considerations and privacy concerns related to the utilization of social media data for disaster management purposes. The review aims to contribute to the existing body of knowledge in the field and offer insights for practitioners and researchers to enhance their understanding and implementation of social media analytics in disaster management.
## II Significance of Social Media in Natural Disaster Response
Social media platforms have become integral tools in natural disaster response, offering unique advantages over traditional communication channels. This section explores the role of social media in disaster detection, situational awareness, and emergency communication, highlighting its significance in enhancing the effectiveness of response efforts.
### _Social Media for Disaster Detection_
Academic research has shown the potential of social media platforms in detecting and monitoring natural disasters. Real-time user-generated content on platforms like Twitter and Instagram can serve as early indicators of disaster events, enabling rapid response and resource allocation [10, 11, 12]. The analysis of geolocated tweets and keywords related to specific hazards can aid in identifying disaster-prone areas and informing early warning systems [13, 14, 15].
### _Social Media for Situational Awareness_
During a disaster, situational awareness is crucial for understanding the evolving circumstances and making informed decisions. Social media analytics provide valuable insights into the real-time conditions on the ground. By analyzing social media data, including posts, images, and videos, researchers and emergency responders can gain a comprehensive understanding of the impacted areas, resource needs, and emerging risks [16, 17, 18, 19]. Such situational awareness enables the effective coordination of response efforts and the allocation of resources to the most critical areas.
### _Social Media for Emergency Communication_
Traditional communication channels often face challenges during disasters due to infrastructure damage or congestion. Social media platforms offer alternative channels for emergency communication, allowing affected individuals to seek help, share their status, and receive updates from response organizations [20, 21, 22]. The use of hashtags, geolocation features, and official accounts facilitates the dissemination of accurate and timely information to a wide audience [23, 24, 25]. Moreover, social media enables the formation of online communities that provide emotional support, share recovery resources, and facilitate post-disaster resilience [26, 27, 28].
## III Challenges and Opportunities in Social Media Analytics for Disaster Management
The utilization of social media analytics for disaster management presents both challenges and opportunities. This section examines the key challenges associated with collecting, processing, and analyzing social media data in the context of disaster management. It also explores the potential opportunities and benefits that social media analytics offer for improving disaster response and recovery efforts.
### _Challenges in Social Media Data Collection_
Collecting social media data for disaster management purposes presents several challenges. The vast volume of data generated during disasters requires efficient and scalable data collection methods [8]. The identification of relevant and reliable sources amidst the abundance of user-generated content poses another challenge [29, 30, 31]. Additionally, ensuring data integrity, dealing with fake or misleading information, and addressing issues of data ownership and access rights require careful consideration [32, 33].
### _Challenges in Social Media Data Processing and Analysis_
Processing and analyzing social media data for disaster management purposes present their own set of challenges. The noisy and unstructured nature of social media content necessitates advanced natural language processing and machine learning techniques to extract meaningful information [34, 35]. Handling multilingual content, sarcasm, and context-specific nuances further complicates the data processing and analysis process (Sakaki et al., 2018; Yin et al., 2019). Moreover, ensuring the timeliness and accuracy of data analysis in fast-paced disaster scenarios poses a challenge [36, 37].
### _Opportunities and Benefits of Social Media Analytics_
Despite the challenges, social media analytics provides significant opportunities and benefits for disaster management. The real-time nature of social media data enables timely situational awareness, facilitating rapid decision-making and resource allocation [38, 39, 40]. Social media analytics can contribute to the identification of emerging patterns, trends, and user behaviors during disasters, enabling the prediction of future risks and the development of proactive response strategies [41, 42]. Furthermore, the integration of social media data with other data sources, such as remote sensing or sensor networks, can enhance the overall effectiveness of disaster management efforts [43, 44].
## IV Methodologies and Techniques in Social Media Analytics for Disaster Response
Effective social media analytics for disaster response require robust methodologies and techniques to collect, preprocess, and analyze the vast amount of social media data. This section examines the various methodologies and techniques employed in social media analytics, focusing on academic papers published after 2018.
### _Data Collection and Preprocessing_
Data collection and preprocessing are crucial stages in social media analytics for disaster response. Academic research has proposed several approaches to collect and filter relevant social media data. These include keyword-based queries, location-based filtering, and user-specific data retrieval [2, 45, 46, 47]. Furthermore, researchers have developed methods to preprocess social media data by removing noise, handling missing values, and addressing language-specific challenges [48, 49].
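As an illustrative sketch only (the posts, keywords, and bounding box below are hypothetical and are not drawn from the studies cited above), keyword- and location-based filtering together with light text cleanup might look as follows:

```python
import re

# hypothetical raw posts; in practice these would come from a platform API or archive
posts = [
    {"text": "Flooding on Main St! Need rescue #HoustonFlood", "lat": 29.76, "lon": -95.36},
    {"text": "Check out my new song http://example.com", "lat": 40.71, "lon": -74.00},
    {"text": "Shelter open at Elm High School #HarveyRelief", "lat": 29.74, "lon": -95.40},
]

DISASTER_KEYWORDS = {"flood", "flooding", "rescue", "shelter", "evacuation"}
BBOX = {"lat_min": 28.5, "lat_max": 31.0, "lon_min": -97.0, "lon_max": -93.5}  # rough area of interest

def clean(text: str) -> str:
    """Lowercase, strip URLs and non-alphanumeric noise (keeps hashtags)."""
    text = re.sub(r"http\S+", " ", text.lower())
    return re.sub(r"[^a-z0-9#\s]", " ", text)

def is_relevant(post: dict) -> bool:
    in_bbox = (BBOX["lat_min"] <= post["lat"] <= BBOX["lat_max"]
               and BBOX["lon_min"] <= post["lon"] <= BBOX["lon_max"])
    has_keyword = any(word in clean(post["text"]).split() for word in DISASTER_KEYWORDS)
    return in_bbox and has_keyword

filtered = [p for p in posts if is_relevant(p)]
print(len(filtered), "relevant posts kept")
```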
### _Text Mining and Natural Language Processing (NLP)_
Text mining and natural language processing techniques play a vital role in extracting meaningful information from social media data. Sentiment analysis, topic modeling, and named entity recognition are commonly used techniques to analyze the textual content of social media posts [50, 51]. Researchers have also explored the application of advanced NLP methods, such as deep learning models, to capture semantic relationships and context in social media data [52, 53].
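A minimal sketch of one such technique, topic modeling with latent Dirichlet allocation in scikit-learn (the toy posts and the number of topics are assumptions made for illustration, not settings from the cited studies):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "flood water rising downtown evacuation ordered",
    "shelter open volunteers needed donations welcome",
    "river flood warning evacuation route blocked",
    "donations supplies volunteers shelter capacity full",
]

# bag-of-words representation of the (toy) posts
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

# fit a small LDA model; n_components is a modeling choice, not a recommendation
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```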
### _Geospatial Analysis_
Geospatial analysis is a fundamental aspect of social media analytics for disaster response. By leveraging location information in social media posts, researchers can map the spatial distribution of disaster-related events, identify affected areas, and analyze patterns of user activity [54, 55]. Geographic information system (GIS) techniques and geospatial visualization tools are commonly employed to analyze and visualize geospatial data from social media [56, 57].
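A minimal geospatial sketch (hypothetical geotagged posts and an arbitrary grid size; a simple latitude/longitude grid count rather than a full GIS workflow):

```python
import numpy as np

# hypothetical (lat, lon) coordinates of disaster-related posts
coords = np.array([
    [29.76, -95.36], [29.75, -95.37], [29.74, -95.40],
    [30.27, -97.74], [29.76, -95.35],
])

cell = 0.1  # grid cell size in degrees; a coarse choice for illustration
keys = np.floor(coords / cell).astype(int)

# count posts per grid cell to flag candidate hotspots
cells, counts = np.unique(keys, axis=0, return_counts=True)
for (i, j), c in sorted(zip(map(tuple, cells), counts), key=lambda kv: -kv[1]):
    print(f"cell lat~{i * cell:.1f}, lon~{j * cell:.1f}: {c} posts")
```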
### _Machine Learning and Predictive Analytics_
Machine learning algorithms and predictive analytics techniques enable the prediction of disaster-related events, user behaviors, and resource needs. Researchers have utilized supervised learning methods, such as support vector machines and random forests, to classify social media posts based on their relevance to disasters [58, 59, 60]. Unsupervised learning approaches, including clustering and anomaly detection, have also been employed to identify patterns and outliers in social media data [61, 62, 63].
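A minimal supervised-classification sketch in the spirit described above (the labeled posts and hyperparameters are placeholders, not those used in the cited works):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

# toy training data: 1 = disaster-relevant, 0 = not relevant
texts = [
    "major flooding reported near the river evacuate now",
    "earthquake felt downtown buildings shaking",
    "great pizza place just opened downtown",
    "my cat is sleeping all day again",
    "wildfire smoke visible from the highway stay indoors",
    "new phone released today with better camera",
]
labels = [1, 1, 0, 0, 1, 0]

# TF-IDF features feeding a random forest classifier
clf = make_pipeline(TfidfVectorizer(), RandomForestClassifier(n_estimators=100, random_state=0))
clf.fit(texts, labels)

print(clf.predict(["road closed due to flood water", "selling concert tickets tonight"]))
```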
### _Network Analysis and Social Graphs_
Social media platforms provide rich social network data, which can be leveraged for analyzing information diffusion, community detection, and influence dynamics during disasters. Network analysis techniques, such as centrality measures, community detection algorithms, and sentiment propagation models, enable researchers to uncover the structure and dynamics of social networks in disaster contexts [64, 65, 66].
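A minimal sketch with networkx (the retweet edge list is invented for illustration):

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# hypothetical retweet edges: (retweeter, original poster)
edges = [
    ("user_a", "agency"), ("user_b", "agency"), ("user_c", "agency"),
    ("user_d", "reporter"), ("user_e", "reporter"), ("user_b", "reporter"),
    ("user_f", "user_a"),
]
G = nx.DiGraph(edges)

# in-degree centrality highlights likely information hubs
centrality = nx.in_degree_centrality(G)
print(sorted(centrality.items(), key=lambda kv: -kv[1])[:3])

# community detection on the undirected projection of the retweet graph
communities = greedy_modularity_communities(G.to_undirected())
print([sorted(c) for c in communities])
```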
## V Case Studies and Best Practices in Social Media Analytics for Disaster Management
Social media analytics has been employed in various natural disaster contexts, including hurricanes, wildfires, earthquakes, landslides, and floods. This section presents case studies highlighting the application of social media analytics in each of these disaster scenarios. Additionally, it explores best practices and lessons learned from these case studies, showcasing successful strategies and techniques.
### _Hurricanes_
Case studies have demonstrated the effectiveness of social media analytics in hurricane response and recovery. For instance, research has showcased the use of social media data to track hurricane movements, assess damage, and coordinate relief efforts during events like Hurricane Harvey [1, 67, 68, 69, 70, 71, 72]. Best practices in this context include real-time monitoring of social media platforms, leveraging hashtags for information retrieval, and integrating social media data into decision support systems for resource allocation [73, 74, 75, 76, 77, 78].
### _Wildfires_
Social media analytics have proven valuable in managing wildfires and their aftermath. Studies have highlighted the role of social media in disseminating evacuation notices, providing real-time updates on fire locations, and facilitating community support during wildfire events [1, 54, 79, 80, 81, 82, 83, 84]. Best practices involve the use of geospatial analysis to monitor fire spread, sentiment analysis to gauge public perception, and network analysis to identify key influencers for information dissemination [85, 86, 87, 88, 89, 90, 91, 92].
### _Earthquakes_
Social media analytics has demonstrated its utility in earthquake response and recovery efforts. Case studies have shown how social media data can assist in rapid damage assessment, identify critical infrastructure disruptions, and support emergency response coordination [93, 94, 95, 96]. Best practices in this context include the integration of social media data with seismic monitoring systems, sentiment analysis to gauge public anxiety levels, and the use of machine learning algorithms for real-time event detection [97, 98].
### _Landslides_
Social media analytics has shown promise in landslide monitoring and response. Research has demonstrated the use of social media data for early detection of landslide events, crowd-sourced hazard mapping, and real-time communication with affected communities [99]. Best practices involve the combination of geospatial analysis with social media data to identify landslide-prone areas, sentiment analysis to assess public awareness, and predictive modeling for landslide susceptibility mapping [99, 100].
### _Floods_
Flood management efforts have also benefited from social media analytics. Case studies have highlighted the use of social media data to track flood levels, disseminate evacuation notices, and identify areas in need of immediate assistance [101, 102, 103, 104, 105, 106, 107]. Best practices in this context include the integration of social media data with hydrological models, sentiment analysis to gauge public sentiment and identify rumors, and the use of geospatial analysis for flood mapping and resource allocation.
## VI Ethical Considerations and Privacy Concerns in Social Media Analytics for Disaster Management
The utilization of social media data for disaster management raises important ethical considerations and privacy concerns. This section explores the ethical challenges associated with the use of social media data in disaster scenarios and highlights the need for responsible data handling practices.
### _Ethical Challenges in Social Media Data Usage_
The collection and analysis of social media data for disaster management purposes present ethical challenges. These include concerns related to informed consent, privacy, and data ownership. Researchers must ensure that data subjects are aware of the potential use of their data and have given their informed consent for its collection and analysis [108, 109]. Moreover, protecting the privacy and anonymity of social media users is crucial, as the information shared during disasters can be sensitive and personal. Respecting the rights and preferences of individuals regarding the use of their data is essential.
### _Responsible Data Handling Practices_
Responsible data handling practices are essential in social media analytics for disaster management. Researchers should adopt transparency and accountability in their data collection, processing, and analysis procedures [110, 111, 112, 41, 113]. Anonymization techniques and data de-identification methods should be employed to protect the privacy of social media users [114, 115, 116, 117, 118, 119, 87, 120, 88, 121, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107]. Additionally, researchers should consider the potential biases and limitations of social media data and communicate these limitations appropriately in their analysis and reporting. It is crucial to adhere to relevant legal and ethical frameworks, such as data protection regulations, when working with social media data.
### _Public Perception and Trust_
The public perception and trust in the use of social media data for disaster management play a significant role. Researchers and practitioners should engage in transparent communication about their data collection and analysis practices to build public trust [119, 120, 121]. Open dialogue with the affected communities, response organizations, and other stakeholders can help address concerns and ensure that the benefits of social media analytics are effectively communicated [122, 123, 124]. Establishing mechanisms for public feedback and incorporating public input in decision-making processes can contribute to the responsible and ethical use of social media data.
## VII Future Research Directions and Advancements in Social Media Analytics for Disaster Management
Social media analytics for disaster management is a rapidly evolving field with several avenues for future research and advancements. This section highlights potential research directions and technological advancements that can further enhance the application of social media analytics in disaster response and recovery efforts.
### _Integration of Multimodal Data Sources_
One promising area for future research is the integration of multimodal data sources with social media analytics. Combining social media data with other data sources, such as satellite imagery, sensor networks, and official reports, can provide a more comprehensive understanding of the disaster landscape [125, 126, 21]. By fusing information from multiple sources, researchers can improve accuracy in event detection, situational awareness, and resource allocation.
### _Real-time and Automated Decision Support Systems_
Developing real-time and automated decision support systems is another promising research direction. By leveraging machine learning, natural language processing, and geospatial analysis techniques, researchers can design intelligent systems that can automatically detect critical events, extract actionable
insights, and provide decision-makers with timely recommendations [127, 128]. These systems can enhance the speed and efficiency of decision-making during disasters.
### _Ethical and Privacy-aware Social Media Analytics_
Continued research into ethical and privacy-aware social media analytics is crucial. Researchers should explore methods to strike a balance between the potential benefits of social media data analysis and the protection of individual privacy rights [8]. This includes the development of privacy-preserving algorithms, techniques for informed consent and data anonymization, and the establishment of ethical guidelines for responsible data usage [6, 19].
### _Humanitarian Data Exchange Platforms_
The development of humanitarian data exchange platforms can facilitate data sharing and collaboration among researchers, response organizations, and affected communities [3, 129, 130]. These platforms can enable the sharing of social media datasets, analysis tools, and best practices, fostering interdisciplinary collaboration and knowledge sharing. Future research should focus on the design and implementation of such platforms to enhance data accessibility and encourage cooperation in the field of social media analytics for disaster management.
### _Resilience and Long-term Recovery_
Exploring the role of social media analytics in long-term recovery and community resilience is an important avenue for future research. Understanding how social media data can contribute to post-disaster recovery planning, resource allocation, and community engagement can enhance long-term resilience-building efforts [68, 131]. Additionally, investigating the psychological and social impacts of social media usage during and after disasters can provide valuable insights for supporting affected communities.
## VIII Conclusion and Future Outlook
Social media analytics has emerged as a powerful tool for disaster management, offering real-time insights, situational awareness, and enhanced decision-making capabilities. This paper has reviewed the application of social media analytics in the context of natural disasters, highlighting its benefits, challenges, and best practices. Looking ahead, there are several areas that warrant further research and development to advance the field of social media analytics for disaster management.
The integration of multimodal data sources, including social media data, satellite imagery, and sensor data, holds great potential for improving the accuracy and comprehensiveness of disaster response efforts [39, 70]. By combining information from various sources, researchers can gain a more holistic understanding of the disaster landscape and make more informed decisions.
The development of real-time and automated decision support systems can significantly enhance the speed and efficiency of disaster response. Leveraging machine learning, natural language processing, and geospatial analysis techniques, researchers can design intelligent systems that automatically detect critical events, extract actionable insights, and provide timely recommendations to decision-makers [132, 9, 133].
Ethical considerations and privacy concerns should remain at the forefront of social media analytics research. Further exploration of privacy-preserving algorithms, methods for informed consent and data anonymization, and the establishment of ethical guidelines can help ensure the responsible and ethical use of social media data.
The development of humanitarian data exchange platforms can facilitate collaboration, data sharing, and knowledge dissemination among researchers, response organizations, and affected communities [3, 55, 130]. Such platforms can promote interdisciplinary cooperation and accelerate advancements in social media analytics for disaster management.
Exploring the role of social media analytics in long-term recovery and community resilience is an important avenue for future research. Understanding how social media data can support post-disaster recovery planning, resource allocation, and community engagement can contribute to more effective and sustainable recovery efforts [134, 135].
In conclusion, social media analytics has the potential to revolutionize disaster management by harnessing the power of user-generated data. By addressing the challenges and adopting best practices, researchers and practitioners can leverage social media analytics to improve preparedness, response, and recovery efforts in the face of natural disasters.
|
2303.11802 | Could planet/sun conjunctions be used to predict large (>=Mw7)
earthquake? | No. | Pierre Romanet | 2023-03-21T12:36:15Z | http://arxiv.org/abs/2303.11802v2 | # Could planet/sun conjunctions be used to predict large (>=Mw7) earthquake?
###### Abstract
No.
Following the recent Mw 7.8 Kahramanmaras, Turkiye earthquake sequence on 6 February 2023, the assertion that planet/sun alignments and lunar phases may help to predict earthquakes became widespread in some low-quality news outlets and on social media. In the following, we will call this alignment of three celestial bodies a conjunction, although the correct term would be a syzygy.
Usually, this assertion is promoted by carefully choosing the period of time over which it occurs and by showing specific earthquakes for which it occurs. Moreover, its proponents usually do not mention that these events happen extremely frequently, and that most of the time these alignments are not followed by significant earthquakes.
The only available literature on the subject either calls fundamental physics into question without any proof (Omerbashich, 2011; Safronov, 2022) or does not show the background rate of conjunctions (Awadh, 2021).
The major logical flaw in these analyses is showing only the events that fit while paying no attention to the total number of conjunctions (see Khalisi, 2021; Zanette, 2011). Indeed, if conjunctions are very common, it is easy to associate them with earthquakes.
This assertion can be seen as a more evolved version of the claim that the moon phase influences earthquakes. The moon-phase hypothesis has been debated by seismologists for a long time (Schuster, 1897), and the question is still not completely settled (Ide et al., 2016; Hough, 2018; Kossobokov and Panza, 2020; Zaccagnino et al., 2022). In some regions, slow earthquakes such as tremors (Nakata et al., 2008; Rubinstein et al., 2008) or low-frequency earthquakes (Thomas et al., 2012) are influenced by tides. Depending on the area, the time in the seismic cycle (Tanaka, 2010, 2012; Peng et al., 2021), and the focal mechanism of the earthquakes (Tsuruoka et al., 1995), tides may or may not have some influence. Overall, they seem to have an influence (Yan et al., 2023), at least for some regions and periods of time, which may be incorporated into long-term probabilistic earthquake forecasting (Ide et al., 2016). A rigorous attempt to perform short-term prediction, based on the idea that before a large earthquake smaller earthquakes would become more tide-sensitive as the crust approaches critical strength, was proven to be ineffective (Hirose et al., 2022).
While for moon/earth/sun alignments there exists a physical mechanism through which the stresses in the crust are changed (gravity), which may therefore weakly influence earthquake occurrence (Ide et al., 2016), there is no such mechanism for planet/sun alignments, because the electromagnetic and gravitational fields produced at the Earth by celestial bodies other than the Sun and the Moon are extremely small. Therefore, invoking "electrodynamics", "resonance", and "molecules" as if they were keywords explaining the phenomenon behind this assertion only reflects the lack of scientific knowledge of the persons promoting this theory.
In this paper, we test the planet/sun alignments, together with the moon phase, systematically over a 69-year period using a global catalog of earthquakes. We systematically compare the percentage of earthquakes associated with conjunction(s) to the percentage of time during which conjunction(s) are happening. The assertion that planet/sun alignments promote earthquakes would be supported only if earthquakes were associated with conjunctions more frequently than the conjunctions themselves occur. We also assess the significance of our results by calculating a p-value, under the null hypothesis that the number of earthquakes associated with conjunctions follows a binomial distribution with the probability given by the probability of conjunctions.
## Method
We first chose the ISC-GEM catalog (Storchak et al., 2013, 2015; Di Giacomo et al., 2018) and selected earthquakes of Mw\(>\)7 over the period 1950/01/01-2018/12/31. The reason for selecting the year 1950 is that the catalog becomes complete for shallow (depth \(<\)60 km), Mw\(>\)7 events during the years 1918-1939 (Michael, 2014). We chose the 10-year delay as a margin to be sure not to miss Mw 7 earthquakes, which could otherwise bias the analysis.
To calculate each planet/sun alignment, we took advantage of the Astropy package in Python (The Astropy Collaboration et al., 2018, 2022), which allows one to calculate the position of any planet in the solar system, as well as the Sun and the Moon, at any time. For each day covering the period of the earthquake catalog, we determined whether there was a conjunction or not. We used the NASA JPL ephemeris model "DE430". We did not take leap seconds into account in the definition of the day, because the offset is less than a minute over the considered period.
The celestial bodies included are: the Sun, Mercury, Venus, the Earth, Mars, Jupiter, Saturn, Uranus, and Neptune.
For each triplet of celestial bodies A, B and C in the solar system, we calculated their positions in the International Celestial Reference System (ICRS).
We then calculated the vectors \(\overrightarrow{AB}\), \(\overrightarrow{BC}\) and \(\overrightarrow{AC}\) and the associated norms \(\mid\mid\overrightarrow{AB}\mid\mid\), \(\mid\mid\overrightarrow{BC}\mid\mid\) and \(\mid\mid\overrightarrow{AC}\mid\mid\). The vector with the longest norm identifies the two bodies whose mutual distance is the greatest, hence we can find the body that is in the middle. For example, if \(\mid\mid\overrightarrow{AC}\mid\mid\) is the greatest distance, then the celestial body B is in the middle. Finally, we can calculate the angle between \(\overrightarrow{AB}\) and \(\overrightarrow{BC}\) as:
\[\theta=\frac{180}{\pi}\arccos\left(\frac{\overrightarrow{AB}\cdot\overrightarrow{BC}}{\mid\mid\overrightarrow{AB}\mid\mid\,\mid\mid\overrightarrow{BC}\mid\mid}\right)\text{ in degrees.}\]
When the angle \(\theta\) was smaller than a threshold \(\theta_{\text{thr}}\), we considered that there was an alignment of the celestial bodies on that day.
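To make the above procedure concrete, here is a minimal sketch (not the script actually used in this study) of the conjunction test; it assumes Astropy together with the jplephem package so that the "DE430" ephemeris can be loaded, and all function and variable names are ours.

```python
# Minimal sketch of the conjunction test described above (not the original
# script): barycentric ICRS positions from Astropy, middle body identified by
# the largest pairwise separation, alignment declared below a threshold angle.
from itertools import combinations
import numpy as np
from astropy.time import Time
from astropy.coordinates import solar_system_ephemeris, get_body_barycentric

BODIES = ["sun", "mercury", "venus", "earth", "mars",
          "jupiter", "saturn", "uranus", "neptune"]

def conjunction_angle(ra, rb, rc):
    """Angle (degrees) between the two segments meeting at the middle body."""
    ab, bc, ac = rb - ra, rc - rb, rc - ra
    norms = {"ab": np.linalg.norm(ab), "bc": np.linalg.norm(bc),
             "ac": np.linalg.norm(ac)}
    longest = max(norms, key=norms.get)
    if longest == "ac":          # B is in the middle: angle between AB and BC
        u, v = ab, bc
    elif longest == "ab":        # C is in the middle: angle between AC and CB
        u, v = ac, -bc
    else:                        # A is in the middle: angle between BA and AC
        u, v = -ab, ac
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def has_conjunction(day, theta_thr=3.0):
    """True if any triplet of bodies is aligned within theta_thr degrees."""
    t = Time(day)
    with solar_system_ephemeris.set("de430"):
        pos = {b: get_body_barycentric(b, t).get_xyz().to_value("km")
               for b in BODIES}
    return any(conjunction_angle(pos[a], pos[b], pos[c]) < theta_thr
               for a, b, c in combinations(BODIES, 3))
```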
For the moon phase, we calculated the projection of the Moon onto the ecliptic plane (the plane that contains the orbit of the Earth). We then checked whether this projection was in opposition (full Moon) or in conjunction (new Moon) with the Sun as seen from the Earth. A threshold of \(6.5^{\circ}\) was used; this value was chosen because the average motion of the Moon around the Earth during one day is around \(12^{\circ}\).
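A similarly hedged sketch of the moon-phase test, again only illustrative and ours rather than the original code, compares the geocentric ecliptic longitudes of the Moon and the Sun:

```python
# Illustrative moon-phase check (ours): compare the ecliptic longitudes of the
# Moon and the Sun as seen from the Earth; an elongation near 0 deg means new
# Moon and near 180 deg means full Moon, with the 6.5 deg tolerance of the text.
from astropy.time import Time
from astropy.coordinates import get_body, get_sun, GeocentricTrueEcliptic

def near_full_or_new_moon(day, tol_deg=6.5):
    t = Time(day)
    frame = GeocentricTrueEcliptic(obstime=t)
    lon_moon = get_body("moon", t).transform_to(frame).lon.deg
    lon_sun = get_sun(t).transform_to(frame).lon.deg
    elong = (lon_moon - lon_sun) % 360.0
    return min(elong, 360.0 - elong) < tol_deg or abs(elong - 180.0) < tol_deg
```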
The results are presented in Table 1. The total period consists of 25202 days, among which 19565 days are associated with conjunctions, so that 78% of the time there is at least one conjunction on a given day. For the same period, there are 813 earthquakes, among which 640 are associated with conjunctions, so that 79% of earthquakes are associated with conjunctions.
We did the same analysis for earthquakes associated with a full or new moon, as well as for earthquakes associated with both a full or new moon and at least one conjunction. The percentage of days associated with either a full or new moon is 7% (1743/25202), very close to the percentage of earthquakes that happened during a full or new moon, 7% (58/813). Finally, 5% (1349/25202) of days and 6% (52/813) of earthquakes are associated with both a full or new moon and at least one conjunction.
We can formulate the null hypothesis that earthquakes follow a binomial law, with the probability \(p\) given by the fraction of days that are associated with conjunctions:
\[P[k\mid n,p]=\binom{n}{k}\,p^{k}(1-p)^{n-k},\]
where \(P\) is the probability to observe \(k\) earthquakes that are associated with at least one conjunction in the total number of earthquakes \(n\). Because \(n\) is large in our sample, we can approximate the binomial distribution by a normal law:
\[P[k\mid n,p]\simeq\frac{e^{-\frac{1}{2}\left(\frac{k-np}{\sqrt{np(1-p)}}\right)^{2}}}{\sqrt{2\pi np(1-p)}},\]
finally, the one-sided p-value is:
\begin{table}
\begin{tabular}{l|c|c|c} \hline & **Days** & **Earthquakes (Mw\(>\)7)** & **p-value** \\ \hline At least one conjunction & 19565/25202 (77.63\%) & 640/813 (78.72\%) & 0.23 \\ \hline Full or new moon & 1743/25202 (6.92\%) & 58/813 (7.13\%) & \(>\)0.05 \\ \hline Full/new moon and conjunction(s) & 1349/25202 (5.35\%) & 52/813 (6.40\%) & \(>\)0.05 \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of the frequency of a particular event (for example a conjunction) and the frequency of earthquakes that can be associated with that event during the period 1950/01/01-2018/12/31. The threshold used here to define a conjunction is \(\theta_{thr}=3^{\circ}\). The p-values are one-sided, for the null hypothesis that the earthquakes follow a binomial law with the probability given by the frequency calculated from the number of days.
\[P_{value}=\frac{1}{2}-\frac{1}{2}\operatorname{erf}\left(\frac{k-np}{\sqrt{2np(1-p)} }\right)\text{, if }k>np\]
The p-value represents the probability of obtaining a value at least as extreme as the calculated one. Usually, a value \(p_{value}<0.05\) would mean that we can reject the null hypothesis, i.e., there would be only a 5% chance of obtaining such an extreme result under the null.
We chose a one-sided p-value because it favors the rejection of the null hypothesis (the one-sided p-value is lower than the two-sided one), and hence it sides with the hypothesis that conjunctions are linked to earthquakes. Given that the p-value for earthquakes associated with conjunction(s) is 0.23, we cannot reject the null hypothesis; the difference between the observed fraction of earthquakes linked with conjunctions and the fraction of days with conjunctions is therefore not significant. The same analysis can be done for earthquakes associated with a full/new moon, or for earthquakes associated with both a full/new moon and at least one conjunction. In these two cases, the p-value is also high enough (\(p_{value}>0.05\)) that we cannot reject the null hypothesis.
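For illustration, the one-sided p-value quoted above for the conjunction case can be reproduced in a few lines (our own check, using only the counts reported in Table 1):

```python
# Reproducing the one-sided p-value quoted above from the normal approximation.
from math import erf, sqrt

def one_sided_p(k, n, p):
    """P(K >= k) for K ~ Binomial(n, p) under the normal approximation, k > np."""
    return 0.5 - 0.5 * erf((k - n * p) / sqrt(2 * n * p * (1 - p)))

p_days = 19565 / 25202                  # fraction of days with >= 1 conjunction
print(one_sided_p(640, 813, p_days))    # approximately 0.23
```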
## Discussion and Conclusion
The frequency of earthquakes associated with conjunction(s) and the frequency of conjunctions are very similar, and the difference is statistically non-significant (all the p-values are larger than 5%). This means that we cannot reject the hypothesis that earthquakes occur following a binomial law over the considered time period.
The fact that the null hypothesis cannot be rejected does not mean that it is the true hypothesis either. It is known that earthquakes are not completely random, especially because of aftershocks, which have not been removed here. It only means that, given this earthquake catalog, we cannot find evidence to reject it.

Figure 1: Comparison of the percentage of days involving at least one conjunction associated with a given planet, and the percentage of earthquakes linked with at least one conjunction associated with a given planet. The threshold angle to define a conjunction is \(\theta_{thr}=3^{\circ}\).
Nevertheless, the assertion that earthquakes are linked with conjunctions is unlikely based on our results. Such a strong claim, namely that earthquakes can be predicted using conjunctions and the moon phase, would have extremely important societal consequences and would therefore require very significant results, hence a very low p-value. This is far from being the case here.
We also checked whether one planet was more often than the others associated with conjunctions (figure 1). This does not seem to be the case: the percentage of days on which a given planet (or the Sun) is involved in at least one conjunction agrees within 3% with the percentage of earthquakes that can be associated with at least one conjunction involving that planet (or the Sun).
Finally, we checked whether one particular conjunction was more often than the others associated with earthquake occurrence (figure 2). The results are less clear, because for a given conjunction, the percentage of days on which that particular conjunction occurs over the whole period is small (\(<\)2% for the most frequent conjunction), so that the number of earthquakes sampling this conjunction is also very small. This leads to a large variability. However, the overall trend is respected: the most frequent conjunctions are the ones most often associated with earthquakes.
Changing the threshold for conjunctions does not change the results, and the same conclusion can be drawn. If the threshold angle is too small, we may miss some conjunctions because the orbital planes of the planets are not exactly the same. As an example, the results with a threshold of \(2^{\circ}\) are given in the appendix (table 2). Reducing the threshold angle mainly reduces the percentage of time during which conjunctions are happening, and reduces in the same way the percentage of earthquakes that are associated with conjunctions.
People defending the assertion about planetary/sun conjunctions may continue arguing that I still did not look at one particular association of conjunctions, or at an association with only the full moon. This is true. But given the number of possible associations, it is impossible to test them all. If so, they are very welcome to indicate these specific associations, so that they can be tested rigorously and scientifically, keeping in mind that normally the person making an assertion should be the one proving it.
The alignment of three planets/sun is actually something extremely ordinary in the solar system, happening almost every day (for the \(3^{\circ}\) threshold, it happens 78% of the time). Finding a syzygy on the day of an earthquake is therefore expected, all the more so if we start looking at some days before and after an earthquake. We showed that the percentage of earthquakes associated with at least one conjunction is actually very similar to the percentage of time during which there is at least one conjunction, and that the difference between the two is not statistically significant. Hence, there is no significant effect of planet/sun alignments or of the moon on the occurrence of large earthquakes, and they can certainly not be used to provide short-term prediction of earthquakes. Finally, to plagiarize Khalisi, 2021: "Sooner or later there will be another earthquake close to a _conjunction_, and the self-proclaimed prophets will have their joy."
## Acknowledgement
I would like to thank all the seismologists/geologists/scientists who supported me in writing this article. Special thanks to Martijn van den Ende and Sylvain Barbot, without whom I would never have thought about publishing a paper on this topic. I would also like to thank Susan Hough, who allowed me to plagiarize her abstract.
## Data
International Seismological Centre (2018), ISC-GEM Earthquake Catalogue, [https://doi.org/10.31905/d808b825](https://doi.org/10.31905/d808b825)
|
2306.05071 | A Causal Framework for Decomposing Spurious Variations | One of the fundamental challenges found throughout the data sciences is to
explain why things happen in specific ways, or through which mechanisms a
certain variable $X$ exerts influences over another variable $Y$. In statistics
and machine learning, significant efforts have been put into developing
machinery to estimate correlations across variables efficiently. In causal
inference, a large body of literature is concerned with the decomposition of
causal effects under the rubric of mediation analysis. However, many variations
are spurious in nature, including different phenomena throughout the applied
sciences. Despite the statistical power to estimate correlations and the
identification power to decompose causal effects, there is still little
understanding of the properties of spurious associations and how they can be
decomposed in terms of the underlying causal mechanisms. In this manuscript, we
develop formal tools for decomposing spurious variations in both Markovian and
Semi-Markovian models. We prove the first results that allow a non-parametric
decomposition of spurious effects and provide sufficient conditions for the
identification of such decompositions. The described approach has several
applications, ranging from explainable and fair AI to questions in epidemiology
and medicine, and we empirically demonstrate its use on a real-world dataset. | Drago Plecko, Elias Bareinboim | 2023-06-08T09:40:28Z | http://arxiv.org/abs/2306.05071v1 | # A Causal Framework for Decomposing Spurious Variations
###### Abstract
One of the fundamental challenges found throughout the data sciences is to explain why things happen in specific ways, or through which mechanisms a certain variable \(X\) exerts influences over another variable \(Y\). In statistics and machine learning, significant efforts have been put into developing machinery to estimate correlations across variables efficiently. In causal inference, a large body of literature is concerned with the decomposition of causal effects under the rubric of mediation analysis. However, many variations are spurious in nature, including different phenomena throughout the applied sciences. Despite the statistical power to estimate correlations and the identification power to decompose causal effects, there is still little understanding of the properties of spurious associations and how they can be decomposed in terms of the underlying causal mechanisms. In this manuscript, we develop formal tools for decomposing spurious variations in both Markovian and Semi-Markovian models. We prove the first results that allow a non-parametric decomposition of spurious effects and provide sufficient conditions for the identification of such decompositions. The described approach has several applications, ranging from explainable and fair AI to questions in epidemiology and medicine, and we empirically demonstrate its use on a real-world dataset.
## 1 Introduction
Understanding the relationships of cause and effect is one of the core tenets of scientific inquiry and the human ability to explain why events occurred in the way they did. Hypotheses on possible causal relations in the sciences are often generated based on observing correlations in the world, after which a rigorous process using either observational or experimental data is employed to ascertain whether the observed relationships are indeed causal. One common way of articulating questions of causation is through the average treatment effect (ATE), also known as the total effect (TE), given by
\[\mathbb{E}[y\mid do(x_{1})]-\mathbb{E}[y\mid do(x_{0})], \tag{1}\]
where \(do(\cdot)\) symbolizes the do-operator [9], and \(x_{0},x_{1}\) are two distinct values attained by the variable \(X\). Instead of just quantifying the causal effect, researchers are more broadly interested in determining which causal mechanisms transmit the change from \(X\) to \(Y\). Such questions have received much attention and have been investigated under the rubric of causal mediation analysis [3; 12; 10; 14].
Often, however, the causal relationship may be entirely absent or account only for a part of the initially observed correlation. In these cases, the spurious (or confounded) variations between \(X\) and \(Y\) play a central role in explaining the phenomenon at hand. Interestingly, though, tools for decomposing spurious variations are almost entirely missing from the literature in causal inference 1.
Phenomena in which spurious variations are of central importance are abundant throughout the sciences. For instance, in medicine, the phenomenon called the _obesity paradox_ signifies the counter-intuitive association of increased body fat with better survival chances in the intensive care unit (ICU) [6]. While the full explanation is still unclear, evidence in the literature suggests that the relationship is not causal [5], i.e., it is explained by spurious variations. Spurious variations also play a central role in many epidemiological investigations [13]. In occupational epidemiology, for example, the relationship of exposure to hazardous materials with cancer is confounded by other hazardous working conditions and lifestyle characteristics [4], and such spurious variations themselves may be the target of scientific inquiry.
Spurious variations are key in applications of fair and explainable AI as well. For instance, consider the widely recognized phenomenon in the literature known as _redlining_[15; 7], in which the location where loan applicants live may correlate with their race. Applications might be rejected based on the zip code, disproportionately affecting certain minority groups. Furthermore, in the context of criminal justice [8], the association of race with increased probability of being classified as high-risk for recidivism may in part be explained by the spurious association of race with other demographic characteristics (we take a closer look at this issue in Sec. 5). Understanding which confounders affect the relationship, and how strongly, is an important step of explaining the phenomenon, and also determining whether the underlying classifier is deemed as unfair and discriminatory.
These examples suggest that a principled approach for decomposing spurious variations may be a useful addition to the general toolkit of causal inference, and may find its applications in a wide range of settings from medicine and public health all the way to fair and explainable AI. For concreteness, in this paper we will consider the quantity
\[P(y\mid x)-P(y\mid do(x)),\]
which we will call the _experimental spurious effect_ (Exp-SE, for short). This quantity, shown graphically in Fig. 1, captures the difference in variations when observing \(X=x\) vs. intervening that \(X=x\), which can be seen as the spurious counterpart of the total effect. Interestingly, the Exp-SE quantity is sometimes evoked in the causal inference literature, i.e.,
\[P(y\mid x)-P(y\mid do(x))=0 \tag{2}\]
is known as the _zero-bias_ condition [2; 9, Ch. 6]. This condition allows one to test for the existence of confounding between the variables \(X\) and \(Y\). A crucial observation is that, in many cases, the quantity itself may be of interest (instead of only its _null_), as it underpins the spurious variations.
Against this background, we note that tools that allow for decomposing the Exp-SE quantity currently do not exist in the literature. Our goal in this manuscript is to fill in this gap, and provide a formalism that allows for non-parametric decompositions of spurious variations. Specifically, our contributions are the following:
1. We introduce the notion of a partially abducted submodel (Def. 1), which underpins the inference procedure called Partial Abduction and Prediction (Alg. 2) (akin to Balke & Pearl 3-step procedure [9; Ch. 7]). Building on this new primitive, we prove the first non-parametric decomposition result for spurious effects in Markovian models (Thm. 1),
2. Building on the insights coming from the new procedure, we prove the decomposition result for settings when unobserved confounding is present (Semi-Markovian models) (Thm. 3).
3. We develop sufficient conditions for identification of spurious decompositions (Thm 2, 4).
## 2 Preliminaries
We use the language of structural causal models (SCMs) as our basic semantical framework [9]. A structural causal model (SCM) is a tuple \(\mathcal{M}:=\langle V,U,\mathcal{F},P(u)\rangle\), where \(V\), \(U\) are sets of endogenous (observables) and exogenous (latent) variables respectively, \(\mathcal{F}\) is a set of functions \(f_{V_{i}}\), one for each \(V_{i}\in V\), where \(V_{i}\gets f_{V_{i}}(\mathrm{pa}(V_{i}),U_{V_{i}})\) for some \(\mathrm{pa}(V_{i})\subseteq V\) and \(U_{V_{i}}\subseteq U\). \(P(u)\) is a strictly positive probability measure over \(U\). Each SCM \(\mathcal{M}\) is associated to a causal diagram \(\mathcal{G}\)[9] over the node set \(V\) where \(V_{i}\to V_{j}\) if \(V_{i}\) is an argument of \(f_{V_{j}}\), and \(V_{i}\)\(\longleftrightarrow\)\(V_{j}\) if the corresponding
\(U_{V_{i}},U_{V_{j}}\) are not independent [2]. A model with no bidirected edges is called _Markovian_, while a model with bidirected edges is called Semi-Markovian. An instantiation of the exogenous variables \(U=u\) is called a _unit_. By \(Y_{x}(u)\) we denote the potential response of \(Y\) when setting \(X=x\) for the unit \(u\), which is the solution for \(Y(u)\) to the set of equations obtained by evaluating the unit \(u\) in the submodel \(\mathcal{M}_{x}\), in which all equations in \(\mathcal{F}\) associated with \(X\) are replaced by \(X=x\). We next introduce an important inferential procedure for solving different tasks in causal inference.

Figure 1: Exp-SE representation.
### Abduction, Action and Prediction
The steps of the _abduction-action-prediction_ method can be summarized as follows:
**Algorithm 1** (Abduction, Action and Prediction [9]).: _Given an SCM \(\langle\mathcal{F},P(u)\rangle\), the conditional probability \(P(Y_{C}\mid E=e)\) of a counterfactual sentence "if it were \(C\) then \(Y\)", upon observing the evidence \(E=e\), can be evaluated using the following three steps:_
1. **Abduction** _- update_ \(P(u)\) _by the evidence_ \(e\) _to obtain_ \(P(u\mid e)\)_,_
2. _Action_ _- modify_ \(\mathcal{F}\) _by the action_ \(do(C)\)_, where_ \(C\) _is an antecedent of_ \(Y\)_, to obtain_ \(\mathcal{F}_{C}\)_,_
3. _Prediction_ _- use the model_ \(\langle\mathcal{F}_{C},P(u\mid e)\rangle\) _to compute the probability of_ \(Y_{C}\)_._
In the first step, the probabilities of the exogenous variables \(U\) are updated according to the observed evidence \(E=e\). Next, the model \(\mathcal{M}\) is modified to a submodel \(\mathcal{M}_{C}\). The action step allows one to consider queries related to interventions or imaginative, counterfactual operations. In the final step, the updated model \(\langle\mathcal{F}_{C},P(u\mid e)\rangle\) is used to compute the conditional probability \(P(y_{C}\mid e)\). There are two important special cases of the procedure. Whenever the action step is empty, the procedure handles queries in the first, associational layer of the Pearl's Causal Hierarchy (PCH, [2]). Whenever the abduction step is empty, but the action step is not, the procedure handles _interventional_ queries in the second layer of the PCH. The combination of the two steps, more generally, allows one to consider queries in all layers of the PCH, including the third, _counterfactual_ layer. In the following example, we look at the usage of the procedure on some queries.
**Example 1** (Abduction, Action, Prediction).: _Consider the following SCM:_
\[\mathcal{F}:\begin{cases}X\leftarrow&f_{X}(U_{X},U_{XZ})\\ Z\leftarrow&f_{Z}(U_{Z},U_{XZ})\\ Y\leftarrow&f_{Y}(X,Z,U_{Y}),\end{cases} \tag{3}\]
_with \(P(U_{X},U_{XZ},U_{Z},U_{Y})\) the distribution over the exogenous variables. The causal diagram of the model is shown in Fig. 1(a), with an explicit representation of the exogenous variables in Fig. 1(b)._
_We are first interested in the query \(P(y\mid x)\) in the given model. Based on the abduction-prediction procedure, we can simply compute that:_
\[P(y\mid x)=\sum_{u}\mathbb{1}(Y(u)=y)P(u\mid x)=\sum_{u}\mathbb{1}(Y(u)=y)P(u _{z},u_{y})P(u_{x},u_{xz}\mid x). \tag{6}\]
_where the first step follows from the definition of the observational distribution, and the second step follows from noting the independence \(U_{Z},U_{Y}\bot U_{X},U_{XZ},X\). In the abduction step, we can compute the probabilities \(P(u_{x},u_{xz}\mid x)\). In the prediction step, query \(P(y\mid x)\) is computed based on Eq. 6._
Figure 2: Graphical representations of the SCM in Ex. 1.
_Based on the procedure, we can also compute the query \(P(y_{x})\) (see Fig. 2c):_
\[P(y_{x})=\sum_{u}\mathbb{1}(Y_{x}(u)=y)P(u)=\sum_{u}\mathbb{1}(Y(x,u_{xz},u_{z},u_ {y})=y)P(u). \tag{7}\]
_where the first step follows from the definition of an interventional distribution, and the second step follows from noting that \(Y_{x}\) does not depend on \(u_{x}\). In this case, the abduction step is void, since we are not considering any specific evidence \(E=e\). The value of \(Y(x,u_{xz},u_{z},u_{y})\) can be computed from the submodel \(\mathcal{M}_{x}\). Finally, using Eq. 7 we can perform the prediction step. We remark that_
\[\mathbb{1}(Y(x,u_{xz},u_{z},u_{y})=y)=\sum_{u_{x}}\mathbb{1}(Y(u_{x},u_{xz},u_ {z},u_{y})=y)P(u_{x}\mid x,u_{xz},u_{z},u_{y}), \tag{8}\]
_by the law of total probability and noting that \(X\) is a deterministic function of \(u_{x},u_{xz}\). Thus, \(P(y_{x})\) also admits an alternative representation_
\[P(y_{x}) =\sum_{u}\mathbb{1}(Y(u_{x},u_{xz},u_{z},u_{y})=y)P(u_{x}\mid x,u _{xz},u_{z},u_{y})P(u_{xz},u_{z},u_{y}) \tag{9}\] \[=\sum_{u}\mathbb{1}(Y(u)=y)P(u_{x}\mid x,u_{xz})P(u_{xz},u_{z},u_ {y}), \tag{10}\]
_where Eq. 10 follows from using the independencies among \(U\) and \(X\) in the graph in Fig. 2b. We revisit the representation in Eq. 10 in Ex. 2._
## 3 Foundations of Decomposing Spurious Variations
After getting familiar with the abduction-action-prediction procedure, our next task is to introduce a new procedure that allows us to decompose spurious effects. First, we define the concept of a _partially abducted submodel_:
**Definition 1** (Partially Abducted Submodel).: _Let \(U_{1},U_{2}\subseteq U\) be a partition of the exogenous variables. Let the partially abducted (PA, for short) submodel with respect to the exogenous variables \(U_{1}\) and evidence \(E=e\) be defined as:_
\[\mathcal{M}^{U_{1},E=e}:=\langle\mathcal{F},P(u_{1})P(u_{2}\mid u_{1},E)\rangle. \tag{11}\]
In words, in the PA submodel, the typically obtained posterior distribution \(P(u\mid e)\) is replaced by the distribution \(P(u_{2}\mid u_{1},e)\). Effectively, the exogenous variables \(U_{1}\) are _not updated according to evidence_. The main motivation for introducing the PA model is that spurious variations arise whenever we are comparing units of the population that are different, a realization dating back to Pearson in the 19th century [11]. To give a formal discussion on what became known as _Pearson's shock_, consider two sets of differing evidence \(E=e\) and \(E=e^{\prime}\). After performing the abduction step, the variations between posterior distributions \(P(u\mid e)\) and \(P(u\mid e^{\prime})\) will be explained by _all the exogenous variables that precede the evidence \(E\)_. In a PA submodel, however, the posterior distribution \(P(u_{1})P(u_{2}\mid u_{1},e)\) will differ from \(P(u_{1})P(u_{2}\mid u_{1},e^{\prime})\) only in variables that are in \(U_{2}\), while the variables in \(U_{1}\) will induce no spurious variations. Note that if \(U_{1}=U\), then the PA submodel will introduce no spurious variations, a point to which we return in the sequel.
We now demonstrate how the definition of a PA submodel can be used to obtain partially abducted conditional probabilities:
**Proposition 1** (PA Conditional Probabilities).: _Let \(P(Y=y\mid E=e^{U_{1}})\) denote the conditional probability of the event \(Y=y\) conditional on evidence \(E=e\), while the exogenous variables \(U_{1}\) are not updated according to the evidence. Then, we have that:_
\[P(Y=y\mid E=e^{U_{1}})=\sum_{u_{1}}P(U_{1}=u_{1})P(Y=y\mid E=e,U_{1}=u_{1}). \tag{12}\]
### Partial Abduction and Prediction
Based on the notion of a PA submodel, we can introduce the partial-abduction and prediction procedure:
**Algorithm 2** (Partial Abduction and Prediction).: _Given an SCM \(\langle\mathcal{F},P(u)\rangle\), the conditional probability \(P(Y=y\mid E=e^{U_{1}})\) of an event \(Y=y\) upon observing the evidence \(e\), in a world where variables \(U_{1}\) are unresponsive to evidence, can be evaluated using the following two steps:_
1. _Partial Abduction_ _- update_ \(P(u)\) _by the evidence_ \(e\) _to obtain_ \(P(u_{1})P(u_{2}\mid u_{1},e)\)_, where_ \((u_{1},u_{2})\) _is a partition of the exogenous variables_ \(u\)_,_
2. _Prediction_ _- use the model_ \(\langle\mathcal{F},P(u_{1})P(u_{2}\mid u_{1},e)\rangle\) _to compute the probability of_ \(Y=y\)_._
In the first step of the algorithm, we only perform _partial abduction_. The exogenous variables \(U_{2}\) are updated according to the available evidence \(E=e\), while the variables \(U_{1}\) retain their original distribution \(P(u_{1})\) and remain unresponsive to evidence. This procedure allows us to consider queries in which only a subset of the exogenous variables respond to the available evidence. We next explain what kind of queries fall within this scope, beginning with an example:
**Example 2** (Partial Abduction and Prediction).: _Consider the model in Eq. 3-5. We are interested in computing the query:_
\[P(y\mid x^{U_{xz},U_{z}}) =\sum_{u}\mathbb{1}(Y(u)=y)P(u_{xz},u_{z})P(u_{x},u_{y}\mid u_{xz},u_{z},x) \tag{13}\] \[=\sum_{u}\mathbb{1}(Y(u)=y)P(u_{xz},u_{z})P(u_{y})P(u_{x}\mid x,u_{xz})\] (14) \[=\sum_{u}\mathbb{1}(Y(u)=y)P(u_{xz},u_{z},u_{y})P(u_{x}\mid x,u_{xz}), \tag{15}\]
_where the first step follows from Prop. 1, and the remaining steps from conditional independencies between the \(U\) variables and \(X\). Crucially, the query yields the same expression as in Eq. 10 that we obtained for \(P(y_{x})\) in Ex. 1. Therefore, the conditional probability \(P(y\mid x^{U_{xz},U_{z}})\) in a world where \(U_{XZ},U_{Z}\) are unresponsive to evidence is equal to the interventional probability \(P(y_{x})\)._
As the example illustrates, we have managed to find another procedure that mimics the behavior of the interventional (\(do(X=x)\)) operator in the given example. Interestingly, however, in this procedure, we have not made use of the submodel \(\mathcal{M}_{x}\) that was used in the abduction-action-prediction procedure. We next introduce an additional example that shows how the new procedure allows one to decompose spurious variations in causal models:
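To ground the equivalence observed in Ex. 2, the following toy computation (ours; the binary structural functions and exogenous probabilities are arbitrary choices, not taken from the paper) enumerates a small instance of the SCM in Eq. 3-5 and confirms numerically that the partially abducted query coincides with the interventional one:

```python
# Toy check (ours) of Ex. 2: with arbitrary binary structural functions and
# exogenous Bernoulli priors, the partial-abduction query P(y | x^{U_XZ, U_Z})
# equals the interventional P(y_x), computed by exact enumeration.
import itertools
import numpy as np

p_u = {"uxz": 0.3, "ux": 0.6, "uz": 0.5, "uy": 0.7}     # assumed priors
f_X = lambda ux, uxz: ux ^ uxz                          # assumed mechanisms
f_Z = lambda uz, uxz: uz | uxz
f_Y = lambda x, z, uy: (x & uy) ^ z

def pr(assignment):
    """Prior probability of a partial assignment of exogenous variables."""
    return float(np.prod([p_u[n] if v else 1 - p_u[n]
                          for n, v in assignment.items()]))

def units():
    for bits in itertools.product([0, 1], repeat=4):
        yield dict(zip(["uxz", "ux", "uz", "uy"], bits))

x, y = 1, 1
yval = lambda u: f_Y(x, f_Z(u["uz"], u["uxz"]), u["uy"])

# P(y | x): full abduction of U given the evidence X = x.
num = sum(pr(u) for u in units() if f_X(u["ux"], u["uxz"]) == x and yval(u) == y)
den = sum(pr(u) for u in units() if f_X(u["ux"], u["uxz"]) == x)
p_y_given_x = num / den

# P(y_x): intervention do(X = x), no abduction.
p_y_do_x = sum(pr(u) for u in units() if yval(u) == y)

# P(y | x^{U_XZ, U_Z}): U_XZ, U_Z keep their priors; only U_X, U_Y are updated.
p_partial = 0.0
for uxz, uz in itertools.product([0, 1], repeat=2):
    num = sum(pr({"ux": ux, "uy": uy})
              for ux, uy in itertools.product([0, 1], repeat=2)
              if f_X(ux, uxz) == x and f_Y(x, f_Z(uz, uxz), uy) == y)
    den = sum(pr({"ux": ux}) for ux in (0, 1) if f_X(ux, uxz) == x)
    p_partial += pr({"uxz": uxz, "uz": uz}) * (num / den if den else 0.0)

print(p_y_given_x, p_y_do_x, p_partial)   # the last two agree
```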
**Example 3** (Spurious Decomposition).: _Consider an SCM compatible with the graphical representation in Fig. 2(b) (with exogenous variables \(U\) shown explicitly in red), and the corresponding Semi-Markovian causal diagram in Fig. 2(a). We note that, based on the partial abduction-prediction procedure, the following two equalities hold:_
\[P(y\mid x) =P(y\mid x^{\emptyset}) \tag{16}\] \[P(y_{x}) =P(y\mid x^{U_{xz_{1}},U_{xz_{2}}}), \tag{17}\]
_which shows that_
\[\text{Exp-SE}_{x}(y)=P(y\mid x^{\emptyset})-P(y\mid x^{U_{xz_{1}},U_{xz_{2}}}). \tag{18}\]
_The experimental spurious effect can be written as a difference of conditional probabilities \(y\mid x\) in a world where all variables \(U\) are responsive to evidence vs. a world in which \(U_{XZ_{1}},U_{XZ_{2}}\) are
Figure 3: Graphical representations of the SCM in Ex. 1.
unresponsive to evidence. Furthermore, we can also consider a refinement that decomposes the effect_
\[\text{Exp-SE}_{x}(y)=\underbrace{P(y\mid x^{\emptyset})-P(y\mid x^{U_{xz_{1}}})} _{\text{variations of }U_{xz_{1}}}+\underbrace{P(y\mid x^{U_{xz_{1}}})-P(y\mid x^{U_{xz_{1}}},U_{ xz_{2}})}_{\text{variations of }U_{xz_{2}}}, \tag{19}\]
_allowing for an additive, non-parametric decomposition of the experimental spurious effect._
The first term in Eq. 19, shown in Fig. 8(a), encompasses spurious variations explained by the variable \(U_{XZ_{1}}\). The second term, in Fig. 3(b), encompasses spurious variations explained by \(U_{XZ_{2}}\).
For an overview, in Tab. 1 we summarize the different inferential procedures discussed so far, indicating the structural causal models associated with them.
## 4 Non-parametric Spurious Decompositions
We now move on to deriving general decomposition results for the spurious effects. Before doing so, we first derive a new decomposition result for the TV measure, not yet appearing in the literature (due to space constraints, all proofs are given in Appendix A):
**Proposition 2**.: _The total variation measure can be decomposed as:_
\[\text{TV}_{x_{0},x_{1}}(y)=\text{TE}_{x_{0},x_{1}}(y)+(\text{Exp-SE}_{x_{1}}( y)-\text{Exp-SE}_{x_{0}}(y)). \tag{20}\]
The above result clearly separates out the causal variations (measured by the TE) and the spurious variations (measured by Exp-SE terms) within the TV measure. The seminal result from [10] can be used to further decompose the TE measure. In the sequel, we show how the Exp-SE terms can be further decomposed, thereby reaching a full non-parametric decomposition of the TV measure.
### Spurious Decompositions for the Markovian case
When using the definition of a PA submodel, the common variations between \(X,Y\) can be attributed to (or explained by) the unobserved confounders \(U_{1},\dots,U_{k}\). In order to do so, we first define the notion of an experimental spurious effect for a set of latent variables:
**Definition 2** (Spurious effects for Markovian models).: _Let \(\mathcal{M}\) be a Markovian model. Let \(Z_{1},\dots,Z_{k}\) be the confounders between variables \(X\) and \(Y\) sorted in any valid topological order, and denote the corresponding exogenous variables as \(U_{1},\dots,U_{k}\), respectively. Let \(Z_{[i]}=\{Z_{1},\dots,Z_{i}\}\) and \(U_{[i]}=\{U_{1},\dots,U_{i}\}\). Define the experimental spurious effect associated with variable \(U_{i+1}\) as_
\[\text{Exp-SE}_{x}^{U_{[i]},U_{[i+1]}}(y)=P(y\mid x^{U_{[i]}})-P(y\mid x^{U_{[ i+1]}}). \tag{21}\]
\begin{table}
\begin{tabular}{|c|c|c|} \hline Procedure & SCM & Queries \\ \hline Abduction-Prediction & \(\langle\mathcal{F},P(u\mid E)\rangle\) & Layer 1 \\ \hline Action-Prediction & \(\langle\mathcal{F}_{x},P(u)\rangle\) & Layer 2 \\ \hline Abduction-Action-Prediction & \(\langle\mathcal{F}_{x},P(u\mid E)\rangle\) & Layers 1, 2, 3 \\ \hline Partial Abduction-Prediction & \(\langle\mathcal{F},P(u_{1})P(u_{2}\mid E)\rangle\) & Layers 1, 2, 3 \\ \hline \end{tabular}
\end{table}
Table 1: Summary of the different procedures and the corresponding probabilistic causal models.
Figure 4: Graphical representation of how the Exp-SE effect is decomposed in Ex. 3.
The intuition behind the quantity \(\text{Exp-SE}_{x}^{U_{[i]},U_{[i+1]}}(y)\) can be explained as follows. The quantity \(P(y\mid x^{U_{[i]}})\) captures all the variations in \(Y\) induced by observing that \(X=x\) apart from those explained by the latent variables \(U_{1},\ldots,U_{i}\), which are fixed a priori and not updated. Similarly, the quantity \(P(y\mid x^{U_{[i+1]}})\) captures the variations in \(Y\) induced by observing that \(X=x\), apart from those explained by \(U_{1},\ldots,U_{i},U_{i+1}\). Therefore, taking the difference of the two quantities measures the variation in \(Y\) induced by observing that \(X=x\) that is explained by the latent variable \(U_{i+1}\).
Based on this definition, we can derive the first key non-parametric decomposition of the experimental spurious effect that allows the attribution of the spurious variations to the latent variables \(U_{i}\):
**Theorem 1** (Latent spurious decomposition for Markovian models).: _The experimental spurious effect \(\text{Exp-SE}_{x}(y)\) can be decomposed into latent variable-specific contributions as follows:_
\[\text{Exp-SE}_{x}(y)=\sum_{i=0}^{k-1}\text{Exp-SE}_{x}^{U_{[i]},U_{[i+1]}}(y) =\sum_{i=0}^{k-1}P(y\mid x^{U_{[i]}})-P(y\mid x^{U_{[i+1]}}). \tag{22}\]
An illustrative example of applying the theorem is shown in Appendix B.1. Thm. 1 allows one to attribute spurious variations to latent variables influencing both \(X\) and \(Y\). The key question is when such an attribution, as shown in Eq. 22, can be computed from observational data in practice (known as an _identifiability_ problem [9]). In fact, when variables are added to the PA submodel in topological order, the attribution of variations to the latents \(U_{i}\) is identifiable, as we prove next:
**Theorem 2** (Spurious decomposition identification in topological ordering).: _The quantity \(P(y\mid x^{U_{[i]}})\) can be computed from observational data using the expression_
\[P(y\mid x^{U_{[i]}})=\sum_{z}\!P(y\mid z,x)P(z_{-[i]}\mid z_{[i]},x)P(z_{[i]}), \tag{23}\]
_rendering each term of decomposition in Eq. 22 identifiable from the observational distribution \(P(v)\)._
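For intuition on how the identification expression in Eq. 23 could be used in practice, here is a plug-in estimator sketch for discrete observational data (ours, not from the paper); the DataFrame layout and the column names "X" and "Y" are assumptions:

```python
# Plug-in estimate of Eq. 23 from a discrete dataset: the confounders in
# z_fixed play the role of Z_[i] (priors kept), those in z_rest the role of
# Z_-[i] (updated given X = x).
import pandas as pd

def p_y_partial(df, x, y, z_fixed, z_rest):
    """Estimate P(y | x^{U_[i]}) via Eq. 23 on discrete data."""
    fixed_groups = [df] if not z_fixed else [g for _, g in df.groupby(list(z_fixed))]
    total = 0.0
    for g_fixed in fixed_groups:
        p_zi = len(g_fixed) / len(df)                  # P(z_[i])
        g_x = g_fixed[g_fixed["X"] == x]
        if len(g_x) == 0:
            continue
        rest_groups = [g_x] if not z_rest else [g for _, g in g_x.groupby(list(z_rest))]
        for g_full in rest_groups:
            p_rest = len(g_full) / len(g_x)            # P(z_-[i] | z_[i], x)
            p_y = (g_full["Y"] == y).mean()            # P(y | z, x)
            total += p_y * p_rest * p_zi
    return total

# Each term of Eq. 22 is then a difference of two such quantities, e.g.
# p_y_partial(df, x, y, [], ["Z1", "Z2"]) - p_y_partial(df, x, y, ["Z1"], ["Z2"]).
```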
We discuss in Appendix B.2 why a decomposition that does not follow a topological order of the variables \(U_{i}\) is not identifiable.
### Spurious Decompositions in Semi-Markovian Models
In the Markovian case, considered until now, there was a one-to-one correspondence between the observed confounders \(Z_{i}\) and their latent variables \(U_{i}\). This, however, is no longer the case in Semi-Markovian models. In particular, it can happen that there exist exogenous variables \(U_{j}\) that induce common variations between \(X,Y\), but affect more than one confounder \(Z_{i}\). We are interested in \(U_{j}\subseteq U\) that have causal (directed) paths to both \(X,Y\), described by the following definition:
**Definition 3** (Trek).: _Let \(\mathcal{M}\) be an SCM corresponding to a Semi-Markovian model. Let \(\mathcal{G}\) be the causal diagram of \(\mathcal{M}\). A trek \(\tau\) in \(\mathcal{G}\) (from \(X\) to \(Y\)) is an ordered pair of causal paths (\(g_{l}\), \(g_{r}\)) with a common exogenous source \(U_{i}\in U\). That is, \(g_{l}\) is a causal path \(U_{i}\rightarrow\cdots\to X\) and \(g_{r}\) is a causal path \(U_{i}\rightarrow\cdots\to Y\). The common source \(U_{i}\) is called the top of the trek (ToT for short), denoted \(top(g_{l},g_{r})\). A trek is called spurious if \(g_{r}\) is a causal path from \(U_{i}\) to \(Y\) that is not intercepted by \(X\)._
When decomposing spurious effects, we are in fact interested in all the exogenous variables \(U_{i}\) that lie on top of a spurious trek between \(X\) and \(Y\). It is precisely these exogenous variables that induce common variations between \(X\) and \(Y\). Using any subset of the variables that are top of spurious treks, we define a set-specific notion of a spurious effect:
**Definition 4** (Exogenous set-specific spurious effect).: _Let \(U_{sToT}\subseteq U\) be the subset of exogenous variables that lie on top of a spurious trek between \(X\) and \(Y\). Suppose \(A,B\subseteq U_{sToT}\) are two nested subsets of \(U_{sToT}\), that is \(A\subseteq B\). We then define the exogenous experimental spurious effect with respect to sets \(A,B\) as_
\[\text{Exp-SE}_{x}^{A,B}(y)=P(y\mid x^{A})-P(y\mid x^{B}). \tag{24}\]
The above definition is analogous to Def. 2, but we are now fixing different subsets of the tops of spurious treks. We present the quantity \(\text{Exp-SE}_{x}^{A,B}(y)\) as a graphical contrast in Fig. 5. In particular, the set of tops of spurious treks \(U_{sToT}\) is partitioned into three parts \((U_{A},U_{B\setminus A},U_{B^{C}})\). The causal diagram in the figure is informal, and the dots \((\cdots)\) represent arbitrary possible observed confounders
that lie along indicated pathways. On the l.h.s. of the figure, the set \(U_{A}\) does not respond to the conditioning \(X=x\), whereas \(U_{B\setminus A},U_{B^{C}}\) do. This is contrasted with the r.h.s., in which neither \(U_{A}\) nor \(U_{B\setminus A}\) respond to \(X=x\), whereas \(U_{B^{C}}\) still does respond to the \(X=x\) conditioning. The described contrast thus captures the spurious effect explained by the tops of spurious treks in \(U_{B\setminus A}\).
Analogous to Thm. 1, we next state a variable-specific decomposition of the spurious effect, which is now with respect to exogenous variables that are top of spurious treks:
**Theorem 3** (Semi-Markovian spurious decomposition).: _Let \(U_{sToT}=\{U_{1},\ldots,U_{m}\}\subseteq U\) be the subset of exogenous variables that lie on top of a spurious trek between \(X\) and \(Y\). Let \(U_{[i]}\) denote the variables \(U_{1},\ldots,U_{i}\) (\(U_{[0]}\) denotes the empty set \(\emptyset\)). The experimental spurious effect Exp-SE\({}_{x}(y)\) can be decomposed into variable-specific contributions as follows:_
\[\text{Exp-SE}_{x}(y)=\sum_{i=0}^{m-1}\text{Exp-SE}_{x}^{U_{[i]},U_{[i+1]}}(y)=\sum_{i=0}^{m-1}P(y\mid x^{U_{[i]}})-P(y\mid x^{U_{[i+1]}}). \tag{25}\]
An example demonstrating the Semi-Markovian decomposition is given in Appendix B.3. We next discuss the question of identification.
**Definition 5** (Top of trek from the causal diagram).: _Let \(\mathcal{M}\) be a Semi-Markovian model and let \(\mathcal{G}\) be the associated causal diagram. The set of variables \(U_{sToT}\) can be constructed from the causal diagram in the following way:_
1. _initialize_ \(U_{sToT}=\emptyset\)_,_
2. _for each bidirected edge_ \(V_{i}\longleftrightarrow V_{j}\) _in_ \(\mathcal{G}\)_, ..._
Based on the above, we provide a sufficient condition for identification in the Semi-Markovian case:
**Theorem 4** (ID of variable spurious effects in Semi-Markovian models).: _Let \(U_{s}\subseteq U_{sToT}\). The quantity \(P(y\mid x^{U_{s}})\) is identifiable from observational data \(P(V)\) if the following hold:_
1. \(Y\notin\text{AS}(U_{s})\)_,_
2. \(U_{s}\) _satisfies anchor set exogenous ancestral closure,_ \(U_{s}=\operatorname{an}_{sToT}^{\text{ex}}(AS(U_{s}))\)_._
Some instructive examples grounding the above-introduced definitions and results can be found in Appendix B.4. In words, the conditional expectation of \(Y\) given \(X\) in the exogenous integrated submodel w.r.t. a set \(U_{s}\) is identifiable whenever (i) \(Y\) is not an element of the anchor set of \(U_{s}\) and (ii) the set \(U_{s}\) satisfies the anchor set exogenous ancestral closure. The reader may have noticed that Thm. 4 does not give an explicit identification expression for the spurious effects. The reason for this is the possible complexity of the causal diagram, with an arbitrary constellation of bidirected edges. The identification expressions need to be derived on a case-to-case basis, whereas we hope to address in future work an algorithmic way for identifying such spurious effects.
## 5 Experiment
We now apply Thm. 3 to the COMPAS dataset [1], as described in the following example. Courts in Broward County, Florida use machine learning algorithms, developed by Northpointe, to predict whether individuals released on parole are at high risk of re-offending within 2 years (\(Y\)). The algorithm is based on the demographic information \(Z\) (\(Z_{1}\) for gender, \(Z_{2}\) for age), race \(X\) (\(x_{0}\) denoting White, \(x_{1}\) Non-White), juvenile offense counts \(J\), prior offense count \(P\), and degree of charge \(D\). The causal diagram is shown in Fig. 6. We first estimate the \(\text{Exp-SE}_{x_{0}}(y)\) and obtain:
\[\text{Exp-SE}_{x_{0}}(y)=P(y\mid x_{0})-P(y_{x_{0}})=-0.026\pm 0.004. \tag{28}\]
Further, following Thm. 1, we decompose the \(\text{Exp-SE}_{x_{0}}(y)\) into contributions from sex and age:
\[\text{Exp-SE}_{x_{0}}(y) =\text{Exp-SE}_{x_{0}}^{\emptyset,\{U_{Z_{1}}\}}(y)+\text{Exp-SE}_{x_{0}}^{\{U_{Z_{1}}\},\{U_{Z_{1}},U_{Z_{2}}\}}(y) \tag{29}\] \[=\underbrace{-0.004\pm 0.002}_{Z_{1}\text{ sex}}+\underbrace{-0.022\pm 0.004}_{Z_{2}\text{ age}}, \tag{30}\]
showing that most of the spurious effect (about 85%) is explained by the confounder age (\(Z_{2}\)), as visualized in Fig. 7. The indicated 95% confidence intervals of the estimates were obtained by taking repeated bootstrap samples of the dataset. The source code of the experiment can be found here.
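A rough sketch of how such estimates and bootstrap intervals could be produced is given below (ours, not the authors' released code; the COMPAS-like column names used here are assumptions). The decomposition into the sex and age terms of Eqs. 29-30 can be obtained analogously via the plug-in expression of Thm. 2 with \(Z_{[1]}=\{Z_{1}\}\).

```python
# Bootstrap sketch for Exp-SE_{x0}(y) = P(y | x0) - sum_z P(y | x0, z) P(z),
# with Z = (sex, age) as in the experiment; column names are assumed.
import numpy as np

def exp_se(df, x0, y=1, xcol="race", ycol="two_year_recid", zcols=("sex", "age")):
    p_obs = (df.loc[df[xcol] == x0, ycol] == y).mean()           # P(y | x0)
    p_do = 0.0                                                   # P(y_{x0})
    for _, g in df.groupby(list(zcols)):
        sub = g[g[xcol] == x0]
        if len(sub):
            p_do += (len(g) / len(df)) * (sub[ycol] == y).mean()
    return p_obs - p_do

def bootstrap_ci(df, x0, n_boot=200, seed=0):
    rng = np.random.default_rng(seed)
    stats = [exp_se(df.iloc[rng.integers(0, len(df), len(df))], x0)
             for _ in range(n_boot)]
    return np.percentile(stats, [2.5, 97.5])
```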
## 6 Conclusions
In this paper, we introduced a general toolkit for decomposing spurious variations in causal models. In particular, we introduced a new primitive called _partially abducted submodel_ (Def. 1), together with the procedure of partial abduction and prediction (Alg. 2). This procedure allows for new machinery for decomposing spurious variations in Markovian (Thm. 1) and Semi-Markovian (Thm. 3) models. Finally, we also developed sufficient conditions for identification of such spurious decompositions (Thms. 2, 4), and demonstrated the approach on a real-world dataset (Sec. 5). |
2307.02437 | Graphical CSS Code Transformation Using ZX Calculus | In this work, we present a generic approach to transform CSS codes by
building upon their equivalence to phase-free ZX diagrams. Using the ZX
calculus, we demonstrate diagrammatic transformations between encoding maps
associated with different codes. As a motivating example, we give explicit
transformations between the Steane code and the quantum Reed-Muller code, since
by switching between these two codes, one can obtain a fault-tolerant universal
gate set. To this end, we propose a bidirectional rewrite rule to find a (not
necessarily transversal) physical implementation for any logical ZX diagram in
any CSS code.
Then we focus on two code transformation techniques: code morphing, a
procedure that transforms a code while retaining its fault-tolerant gates, and
gauge fixing, where complimentary codes can be obtained from a common subsystem
code (e.g., the Steane and the quantum Reed-Muller codes from the [[15,1,3,3]]
code). We provide explicit graphical derivations for these techniques and show
how ZX and graphical encoder maps relate several equivalent perspectives on
these code-transforming operations. | Jiaxin Huang, Sarah Meng Li, Lia Yeh, Aleks Kissinger, Michele Mosca, Michael Vasmer | 2023-07-05T17:04:49Z | http://arxiv.org/abs/2307.02437v2 | # Graphical CSS Code Transformation Using ZX Calculus
###### Abstract
In this work, we present a generic approach to transform CSS codes by building upon their equivalence to phase-free ZX diagrams. Using the ZX calculus, we demonstrate diagrammatic transformations between encoding maps associated with different codes. As a motivating example, we give explicit transformations between the Steane code and the quantum Reed-Muller code, since by switching between these two codes, one can obtain a fault-tolerant universal gate set. To this end, we propose a bidirectional rewrite rule to find a (not necessarily transversal) physical implementation for any logical ZX diagram in any CSS code.
We then focus on two code transformation techniques: _code morphing_, a procedure that transforms a code while retaining its fault-tolerant gates, and _gauge fixing_, where complimentary codes can be obtained from a common subsystem code (e.g., the Steane and the quantum Reed-Muller codes from the \(\llbracket 15,1,3,3\rrbracket\) code). We provide explicit graphical derivations for these techniques and show how ZX and graphical encoder maps relate several equivalent perspectives on these code transforming operations.
## 1 Introduction
Quantum computation has demonstrated its potential in speeding up large-scale computational tasks [3, 68] and revolutionizing multidisciplinary fields such as drug discovery [11], climate prediction [60], chemistry simulation [47], and the quantum internet [25]. However, in a quantum system, qubits are sensitive to interference and information becomes degraded [50]. To this end, quantum error correction [55, 57] and fault tolerance [33, 41] have been developed to achieve large-scale universal quantum computation [34].
Stabilizer theory [32] is a mathematical framework to describe and analyze properties of quantum error-correcting codes (QECC). It is based on the concept of stabilizer groups, which are groups of Pauli operators whose joint \(+1\) eigenspace corresponds to the code space. Stabilizer codes are a specific type of QECC whose encoder can be efficiently simulated [1, 31]. As a family of stabilizer codes, Calderbank-Shor-Steane (CSS) codes permit simple code constructions from classical codes [9, 10, 57, 58].
As a language for rigorous diagrammatic reasoning of quantum computation, the ZX calculus consists of ZX diagrams and a set of rewrite rules [16, 66]. It has been used to relate stabilizer theory to graphical normal forms: notably, efficient axiomatization of the stabilizer fragments for qubits [4, 36, 49],
qutrits [61, 65], and prime-dimensional qudits [8]. This has enabled various applications, such as measurement-based quantum computation [49, 56], quantum circuit optimization [19, 30] and verification [46], as well as classical simulation [14, 40]. Beyond these, ZX-calculus has been applied to verify QECC [23, 26], represent Clifford encoders [38], as well as study various QECC such as tripartite coherent parity check codes [12, 13] and surface codes [27, 28, 29, 54]. Specific to CSS codes, ZX-calculus has been used to visualize their encoders [39], code maps and code surgeries [22], their correspondence to affine Lagrangian relations [20], and their constructions in high-dimensional quantum systems [21].
In this paper, we seek to answer some overarching questions about QECC constructions and fault-tolerant implementations. We focus on CSS codes and leverage the direct correspondence between phase-free ZX diagrams and CSS code encoders [39]. Given an arbitrary CSS code, based on its normal form, we propose a bidirectional rewrite rule to find a (not necessarily transversal) physical implementation for any logical ZX diagram. Furthermore, we demonstrate diagrammatic transformations between encoding maps associated with different codes. Here, we focus on two code transformation techniques: _code morphing_, a procedure that transforms a code while retaining its fault-tolerant gates [62], and _gauge fixing_, where complimentary codes (such as the Steane and the quantum Reed-Muller codes) can be obtained from a common subsystem code [2, 51, 53, 64]. We provide explicit graphical derivations for these techniques and show how ZX and graphical encoder maps relate several equivalent perspectives on these code transforming operations.
The rest of this paper is organized as follows. In Sec. 2, we introduce notions and techniques used to graphically transform different CSS codes using the ZX calculus. In Sec. 3, we generalize the ZX normal form for CSS stabilizer codes to CSS subsystem codes, and provide generic bidirectional rewrite rules for any CSS encoder. In Sec. 4, we provide explicit graphical derivations for morphing the Steane and the quantum Reed-Muller codes. In Sec. 5, we focus on the switching protocol between these two codes. Through ZX calculus, we provide a graphical interpretation of this protocol as gauge-fixing the \(\llbracket 15,1,3,3\rrbracket\) subsystem code, followed by syndrome-determined recovery operations. We conclude with Sec. 6.
## 2 Preliminaries
We start with some definitions. The Pauli matrices are \(2\times 2\) unitary operators acting on a single qubit. Let \(i\) be the imaginary unit.
\[I=\begin{bmatrix}1&0\\ 0&1\end{bmatrix},\quad X=\begin{bmatrix}0&1\\ 1&0\end{bmatrix},\quad Z=\begin{bmatrix}1&0\\ 0&-1\end{bmatrix},\quad Y=iXZ=\begin{bmatrix}0&-i\\ i&0\end{bmatrix}.\]
Let \(\mathcal{P}_{1}\) be the single-qubit Pauli group, \(\mathcal{P}_{1}=\left\langle i,X,Z\right\rangle\), \(I,Y\in\mathcal{P}_{1}\).
**Definition 2.1**.: _Let \(U\in\mathcal{U}(2)\). In a system over \(n\) qubits, \(1\leq i\leq n\),_
\[U_{i}=I\otimes\ldots\otimes I\otimes U\otimes I\otimes\ldots\otimes I\]
_denotes \(U\) acting on the \(i\)-th qubit, and identity on all other qubits._
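As a small illustration (ours, not from the paper), the embedding of Definition 2.1 can be written with Kronecker products:

```python
# Embed a single-qubit operator U at position i (1-indexed) in an n-qubit
# system, as in Definition 2.1, using Kronecker products.
import numpy as np

I = np.eye(2)

def embed(U, i, n):
    out = np.array([[1.0]])
    for j in range(1, n + 1):
        out = np.kron(out, U if j == i else I)
    return out
```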
Let \(\mathcal{P}_{n}\) be the \(n\)-qubit Pauli group. It consists of all tensor products of single-qubit Pauli operators.
\[\mathcal{P}_{n}=\big{\langle}i,X_{1},Z_{1},\ldots,X_{n},Z_{n}\big{\rangle}.\]
The stabilizer formalism is a mathematical framework to describe and analyze the properties of certain QECC, called stabilizer codes [32, 33]. Consider \(n\) qubits and let \(m\leq n\). A stabilizer group \(\mathcal{S}=\left\langle S_{1},\ldots,S_{m}\right\rangle\) is an Abelian subgroup of \(\mathcal{P}_{n}\) that does not contain \(-I\). The codespace of the corresponding stabilizer code, \(\mathcal{C}\), is the joint \(+1\) eigenspace of \(\mathcal{S}\), i.e.,
\[\mathcal{C}=\{|\psi\rangle\in\mathbb{C}^{2^{n}};\,S|\psi\rangle=|\psi\rangle, \forall S\in\mathcal{S}\}.\]
The number of encoded qubits in a stabilizer code is \(k=n-m\), where \(m\) is the number of independent stabilizer generators [32]. Moreover, we can define the _centralizer_ of \(\mathcal{S}\) as
\[\mathcal{N}(\mathcal{S})=\{U\in\mathcal{P}_{n};\,[U,S]=0,\forall S\in\ \mathcal{ S}\}.\]
One can check that \(\mathcal{N}(\mathcal{S})\) is a subgroup of \(\mathcal{P}_{n}\) and \(\mathcal{S}\subset\mathcal{N}(\mathcal{S})\). We remark that the notions of normalizer and centralizer coincide for any stabilizer group. In what follows, we will use them interchangeably. As we will see later, \(\mathcal{N}(\mathcal{S})\) provides an algebraic structure for the subsystem codes. The code distance, \(d\), of a stabilizer code is the minimal weight of the operators in \(\mathcal{N}(\mathcal{S})/\langle iI\rangle\) that are not in \(\mathcal{S}\). We summarize the properties of a stabilizer code with the shorthand \([\![n,k,d]\!]\).
Finally, we introduce some notation for subsets of \(n\)-qubit Pauli operators, which will prove useful for defining CSS codes.
**Definition 2.2**.: _Let \(M\) be an \(m\times n\) binary matrix and \(P\in\mathcal{P}_{1}/\langle iI\rangle\). In the stabilizer formalism, \(M\) is called the stabilizer matrix, and \(M^{P}\) defines \(m\) P-type stabilizer generators._
\[M^{P}\coloneqq\left\{\bigotimes_{j=1}^{n}P^{[M]_{ij}};\ 1\leq i\leq m\right\}.\]
CSS codes are QECC whose stabilizers are defined by two orthogonal binary matrices \(G\) and \(H\)[9, 57]:
\[\mathcal{S}=\langle G^{X},H^{Z}\rangle,\quad GH^{\intercal}=\mathbf{0},\]
where \(H^{\intercal}\) is the transpose of \(H\). This means that the stabilizer generators of a CSS code can be divided into two types: X-type and Z-type. For example, the \([\![7,1,3]\!]\) Steane code [57] in Fig. 1(a) is specified by
\[G=H=\begin{bmatrix}1&0&1&0&1&0&1\\ 0&1&1&0&0&1&1\\ 0&0&0&1&1&1&1\end{bmatrix}_{3\times 7}. \tag{1}\]
Accordingly, the X-type and Z-type stabilizers are defined as
\[S_{1}^{X}=X_{1}X_{3}X_{5}X_{7},\ S_{2}^{X}=X_{2}X_{3}X_{6}X_{7},\ S_{3}^{X}=X_ {4}X_{5}X_{6}X_{7},\ S_{1}^{Z}=Z_{1}Z_{3}Z_{5}Z_{7},\ S_{2}^{Z}=Z_{2}Z_{3}Z_{6 }Z_{7},\ S_{3}^{Z}=Z_{4}Z_{5}Z_{6}Z_{7}.\]
The logical operators \(\overline{X}\) and \(\overline{Z}\) are defined as
\[\overline{X}=X_{1}X_{4}X_{5}\qquad\text{and}\qquad\overline{Z}=Z_{1}Z_{4}Z_{5}. \tag{2}\]
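As an illustration (not part of the original development), the defining properties of the Steane code stated above can be checked numerically by representing X- and Z-type Pauli operators through their binary support vectors. The following Python sketch is ours; variable names and the brute-force distance search are illustrative choices, not an algorithm taken from the paper.

```python
import numpy as np
from itertools import product

# Stabilizer matrix of Eq. (1); the Steane code is self-dual, so H = G.
G = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = G.copy()
Xbar = np.array([1, 0, 0, 1, 1, 0, 0])   # support of Xbar = X1 X4 X5, Eq. (2)
Zbar = Xbar.copy()                       # support of Zbar = Z1 Z4 Z5

# CSS condition G H^T = 0 (mod 2): every X-type generator overlaps every
# Z-type generator on an even number of qubits, so all generators commute.
assert not np.any(G @ H.T % 2)

# The logical operators commute with all stabilizer generators of the
# opposite type, but anticommute with each other (odd mutual overlap).
assert not np.any(H @ Xbar % 2) and not np.any(G @ Zbar % 2)
assert (Xbar @ Zbar) % 2 == 1

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    M = np.array(M) % 2
    rank = 0
    for col in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]
        rank += 1
    return rank

# Distance: minimal weight of an X-type operator that commutes with every
# Z-type generator but is not itself a product of X-type generators.
r0 = gf2_rank(G)
d = min(sum(v) for v in product((0, 1), repeat=7)
        if not np.any(H @ np.array(v) % 2) and gf2_rank(np.vstack([G, v])) > r0)
print(f"[[7,1,3]] checks passed; brute-force distance = {d}")   # prints 3
```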
In Sec. 2.1, we define CSS subsystem codes. In Sec. 2.2, we define several CSS codes that will be used in subsequent sections. In Sec. 2.3, we introduce the basics of the ZX calculus and the phase-free ZX normal forms.
### CSS Subsystem Codes
Subsystem codes [43, 52] are QECC where some of the logical qubits are not used for information storage and processing. These logical qubits are called gauge qubits. By fixing gauge qubits to some specific states, the same subsystem code may exhibit different properties, for instance, having different sets of transversal gates [7, 44, 45, 51, 67]. This provides a tool to circumvent restrictions on transversal gates such as the Eastin-Knill theorem [24].
Based on the construction proposed in [52], we describe a subsystem code using the stabilizer formalism.
**Definition 2.3**.: _Given a stabilizer group \(\mathcal{S}\), a gauge group \(\mathcal{G}\) is a normal subgroup of \(\mathcal{N}(\mathcal{S})\), such that \(\mathcal{S}\subset\mathcal{G}\) and that \(\mathcal{G}/\mathcal{S}\) contains anticommuting Pauli pairs. In other words, one can write_
\[\mathcal{S}=\big{\langle}S_{1},\ldots,S_{m}\big{\rangle},\quad\mathcal{G}= \big{\langle}S_{1},\ldots,S_{m},g_{1}^{X},g_{1}^{Z},\ldots,g_{r}^{X},g_{r}^{Z} \big{\rangle},\quad 1\leq m+r\leq n.\]
\((\mathcal{S},\mathcal{G})\) _defines an \(\llbracket n,k,r,d\rrbracket\) subsystem code where \(n=m+k+r\). The logical operators are elements of the quotient group \(\mathcal{L}=\mathcal{N}(\mathcal{S})/\mathcal{G}\)._
Under this construction, \(n\) physical qubits are used to encode \(k\) logical qubits with \(r\) gauge qubits. Alternatively, we can think of the gauge group \(\mathcal{G}\) as partitioning the code space \(\mathcal{C}\) into two subsystems: \(\mathcal{C}=\mathcal{A}\otimes\mathcal{B}\). Logical information is encoded in \(\mathcal{A}\) and \(\mathcal{L}\) serves as the group of logical operations. Gauge operators from \(\mathcal{G}\) act trivially on subsystem \(\mathcal{A}\), while operators from \(\mathcal{L}\) act trivially on subsystem \(\mathcal{B}\). Therefore, two states \(\rho^{\mathcal{A}}\otimes\rho^{\mathcal{B}}\) and \(\rho^{\prime\mathcal{A}}\otimes\rho^{\prime\mathcal{B}}\) are considered equivalent if \(\rho^{\mathcal{A}}=\rho^{\prime\mathcal{A}}\), regardless of the states \(\rho^{\mathcal{B}}\) and \(\rho^{\prime\mathcal{B}}\). When \(r=0\), \(\mathcal{G}=\mathcal{S}\). In that case, an \(\llbracket n,k,0,d\rrbracket\) subsystem code is essentially an \(\llbracket n,k,d\rrbracket\) stabilizer code.
CSS subsystem codes are subsystem codes whose stabilizer generators can be divided into X-type and Z-type operators. In what follows, we provide an example to illustrate their construction.
### Some Interesting CSS Codes
We start by defining the stabilizer groups for the \(\llbracket 7,1,3\rrbracket\) Steane code, the \(\llbracket 15,1,3\rrbracket\) extended Steane code [2], and the \(\llbracket 15,1,3\rrbracket\) quantum Reed-Muller code [42]. They are derived from the family of \(\llbracket 2^{m}-1,1,3\rrbracket\) quantum Reed-Muller codes, with a recursive construction of stabilizer matrices [59]. The Steane code has transversal logical Clifford operators, and the quantum Reed-Muller code has a transversal logical T gate. Together these operators form a universal set of fault-tolerant gates. In Sec. 5, the relations between these codes are studied from a diagrammatic perspective.
For brevity, their corresponding stabilizer groups are denoted as \(\mathcal{S}_{steane}\), \(\mathcal{S}_{ex}\), and \(\mathcal{S}_{qrm}\). As per Def. 2.2, consider three stabilizer matrices \(F\), \(H\), and \(J\). Note that \(G\) is defined in Eq. (1). \(\mathbf{0}\) and \(\mathbf{1}\) denote blocks of 0s and 1s, respectively. Their dimensions can be inferred from the context.
\[F=\left[\begin{array}{cccccccc}G&0&G\\ \mathbf{0}&1&\mathbf{1}\end{array}\right]_{4\times 15},\quad H=\left[ \begin{array}{cccccccc}G&\mathbf{0}\end{array}\right]_{3\times 15},\] \[J=\left[\begin{array}{cccccccccccc}1&0&1&0&0&0&0&0&1&0&1&0&0&0&0 \\ 0&1&1&0&0&0&0&0&0&1&1&0&0&0&0\\ 0&0&1&0&0&0&1&0&0&0&1&0&0&0&1\end{array}\right]_{3\times 15}.\]
Then, the stabilizer groups are defined as
\[\mathcal{S}_{steane}=\big{\langle}G^{X},G^{Z}\big{\rangle},\quad\mathcal{S}_{ex}=\big{\langle}F^{X},F^{Z},H^{X},H^{Z}\big{\rangle},\quad\mathcal{S}_{qrm}=\big{\langle}F^{X},F^{Z},H^{Z},J^{Z}\big{\rangle}. \tag{3}\]
Geometrically, one can define \(\mathcal{S}_{steane}\) and \(\mathcal{S}_{qrm}\) with the aid of Fig. 1. In Fig. 1(a), the Steane code is visualized on a 2D lattice. Since the Steane code is self-dual, every coloured face corresponds to an X-type and Z-type stabilizer. In Fig. 1(b), the quantum Reed-Muller code is visualized on a 3D lattice. Every coloured face corresponds to a weight-4 Z-type stabilizer. Every coloured cell corresponds to a weight-8 X-type and Z-type stabilizer respectively. For the Steane code, the logical operators defined in Eq. (2) correspond to an edge in the triangle. For the quantum Reed-Muller code, the logical X operator corresponds to a weight-7 triangular face, and the logical Z operator corresponds to a weight-3 edge of the entire tetrahedron. An example is shown below.
\[\overline{X}=X_{1}X_{2}X_{3}X_{4}X_{5}X_{6}X_{7}\qquad\text{and}\qquad \overline{Z}=Z_{1}Z_{4}Z_{5} \tag{4}\]
Given such representations, the Steane code and the quantum Reed-Muller code are also special cases of colour codes [5; 6; 44].
From Eq. (3), the extended Steane code is self-dual, and its encoded state is characterized by the lemma below. It shows that \(\mathcal{S}_{ex}\) and \(\mathcal{S}_{steane}\) are equivalent up to some auxiliary state.
**Lemma 2.1** ([2]).: _Any codeword \(\ket{\psi}\) of the extended Steane code can be decomposed into a codeword \(\ket{\phi}\) of the Steane code and a fixed state \(\ket{\eta}\). That is,_
\[\ket{\psi}=\ket{\phi}\otimes\ket{\eta},\]
_where \(\ket{\eta}=\frac{1}{\sqrt{2}}(\ket{0}\ket{\overline{0}}+\ket{1}\ket{\overline{1}})\), and \(\ket{\overline{0}}\), \(\ket{\overline{1}}\) are the logical 0 and 1 encoded in the Steane code._
Since the logical information \(\ket{\phi}\) encoded in the Steane code is not entangled with \(\ket{\eta}\), to switch between the Steane code and the extended Steane code, one may simply add or discard the auxiliary state \(\ket{\eta}\). This property will prove useful in Sec. 5.
Next, we define the \(\llbracket 15,1,3,3\rrbracket\) CSS subsystem code [64]. As per Def. 2.3, let \(\mathcal{S}_{sub}\) and \(\mathcal{G}\) be its stabilizer group and gauge group respectively.
\[\mathcal{S}_{sub}=\big{\langle}F^{X},F^{Z},H^{Z}\big{\rangle}, \quad\mathcal{G}=\big{\langle}F^{X},F^{Z},H^{X},H^{Z},J^{Z}\big{\rangle}. \tag{5}\]
Let \(\mathcal{L}_{g}=\mathcal{G}/\mathcal{S}\) and \(\mathcal{L}=\mathcal{N}(\mathcal{S})/\mathcal{G}\). One can verify that
\[\mathcal{L}_{g}=\big{\langle}H^{X},J^{Z}\big{\rangle},\quad \mathcal{L}=\big{\langle}\overline{X},\overline{Z}\big{\rangle}. \tag{6}\]
Thus, the CSS subsystem code has one logical qubit and three gauge qubits, which are acted on by \(\mathcal{L}\) and \(\mathcal{L}_{g}\) respectively. From Sec. 3 onwards, we refer to operators in \(\mathcal{L}_{g}\) as _gauge operators_.
Figure 1: Each vertex represents a physical qubit. Each edge serves as an aid to the eye. They do not imply any physical interactions or inherent structures.
Moreover, \(\mathcal{S}_{sub}\) can be viewed as the stabilizer group of a \(\llbracket 15,4,3\rrbracket\) CSS code, with logical operators \(\mathcal{L}^{\prime}\). This code appears in an intermediary step of the gauge fixing process in Sec. 5.
\[\mathcal{L}^{\prime}\coloneqq\mathcal{L}_{g}\cup\mathcal{L}=\big{\langle}H^{X}, J^{Z},\overline{X},\overline{Z}\big{\rangle}. \tag{7}\]
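The algebraic structure behind Eqs. (5)–(7) can also be verified numerically. The following Python sketch is ours and uses only the matrices \(F\), \(H\), and \(J\) defined above (encoded as binary support vectors); it checks that the gauge operators commute with every generator of \(\mathcal{S}_{sub}\) and that, up to relabelling, the X- and Z-type gauge operators split into three mutually anticommuting pairs, one per gauge qubit.

```python
import numpy as np

G = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
F = np.vstack([np.hstack([G, np.zeros((3, 1), int), G]),
               np.r_[np.zeros(7, int), 1, np.ones(7, int)][None, :]])
H = np.hstack([G, np.zeros((3, 8), int)])
J = np.array([[1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0],
              [0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0],
              [0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1]])

def overlap_parity(A, B):
    """An X-type and a Z-type Pauli commute iff their supports overlap evenly."""
    return A @ B.T % 2

# Gauge operators H^X and J^Z commute with every generator of S_sub = <F^X, F^Z, H^Z> ...
assert not overlap_parity(H, np.vstack([F, H])).any()   # H^X versus F^Z and H^Z
assert not overlap_parity(F, J).any()                   # F^X versus J^Z
# ... while H^X and J^Z pair up into three anticommuting (X, Z) pairs:
M = overlap_parity(H, J)
assert (M.sum(axis=0) == 1).all() and (M.sum(axis=1) == 1).all()   # a permutation matrix
print("gauge structure of the [[15,1,3,3]] subsystem code verified")
```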
### ZX Calculus
The qubit ZX-calculus [15, 16, 17, 66] is a quantum graphical calculus for diagrammatic reasoning about any qubit quantum computation. Every diagram in the calculus is composed of two types of generators: Z spiders, which sum over the eigenbasis of the Pauli Z operator; writing \(Z_{m}^{n}(\alpha)\) for a Z spider with \(m\) input wires, \(n\) output wires, and phase \(\alpha\), the corresponding linear map is
\[Z_{m}^{n}(\alpha)\coloneqq|0\rangle^{\otimes n}\langle 0|^{\otimes m}\ +\ e^{i\alpha}|1\rangle^{\otimes n}\langle 1|^{\otimes m}, \tag{8}\]
and X spiders \(X_{m}^{n}(\alpha)\), which sum over the eigenbasis of the Pauli X operator:
\[X_{m}^{n}(\alpha)\coloneqq|+\rangle^{\otimes n}\langle+|^{\otimes m}\ +\ e^{i\alpha}|-\rangle^{\otimes n}\langle-|^{\otimes m}. \tag{9}\]
The ZX-calculus is _universal_[16] in the sense that any linear map from \(m\) qubits to \(n\) qubits corresponds exactly to a ZX diagram, by the construction of Eqs. (8) and (9) and the composition of linear maps.
Furthermore, the ZX-calculus is _complete_[35, 37]: any equality of linear maps on any number of qubits that is derivable in the Hilbert space formalism is also derivable using only a finite set of rules in the calculus. The smallest complete rule set to date [63] is shown in Fig. 2. Some additional rules, despite being derivable from this rule set, will be convenient to use in this paper. They are summarized in Fig. 3.
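For readers who prefer to see the generators as explicit matrices, Eqs. (8)–(9) and the (fusion) rule of Fig. 2 can be reproduced numerically. The short Python sketch below is ours and uses the placeholder notation \(Z_{m}^{n}(\alpha)\), \(X_{m}^{n}(\alpha)\) for spiders with \(m\) inputs and \(n\) outputs in place of the usual pictures.

```python
import numpy as np

def z_spider(m, n, alpha=0.0):
    """Matrix of a Z spider with m inputs, n outputs and phase alpha (Eq. (8))."""
    def basis_string(bit, k):
        v = np.array([1.0])
        for _ in range(k):
            v = np.kron(v, np.eye(2)[bit])
        return v
    return (np.outer(basis_string(0, n), basis_string(0, m))
            + np.exp(1j * alpha) * np.outer(basis_string(1, n), basis_string(1, m)))

def x_spider(m, n, alpha=0.0):
    """Eq. (9): an X spider is a Z spider conjugated by Hadamards on every leg."""
    Hd = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn, Hm = np.eye(1), np.eye(1)
    for _ in range(n):
        Hn = np.kron(Hn, Hd)
    for _ in range(m):
        Hm = np.kron(Hm, Hd)
    return Hn @ z_spider(m, n, alpha) @ Hm

# (fusion): composing two Z spiders along a wire adds their phases.
assert np.allclose(z_spider(1, 1, 0.3) @ z_spider(2, 1, 0.5), z_spider(2, 1, 0.8))
# A pi-phase X spider with one input and one output is the Pauli X.
assert np.allclose(x_spider(1, 1, np.pi), np.array([[0, 1], [1, 0]]))
print("fusion rule and pi-phase spider identities verified")
```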
When a spider has phase zero, we omit its phase in the diagram, as shown below. A ZX diagram is _phase-free_ if all of its spiders have zero phases. For more discussions on phase-free ZX diagrams, we refer readers to consult [39].
Figure 2: These eight equations suffice to derive all other equalities of linear maps on qubits [63]. \(k\in\mathbb{Z}_{2}\). \(\alpha_{i}\), \(\beta_{i}\) and \(\gamma\) are real numbers satisfying the trigonometric relations derived in [18]. Each equation still holds when we replace all spiders with their corresponding spiders of the opposite colour. Whenever there are any two wires with... between them, the rule holds when replacing this with any number of wires (i.e., 0 or greater).
Due to the universality of the ZX calculus, quantum error-correcting code encoders, as linear isometries, can be drawn as ZX diagrams [38]. Moreover, the encoder for a CSS code corresponds exactly to the phase-free _ZX (and XZ) normal form_[39].
**Definition 2.4**.: _For a CSS stabilizer code defined by \(\mathcal{S}\), let \(\left\{S_{i}^{x};1\leq i\leq m\right\}\subset\mathcal{S}\) be the X-type stabilizer generators and \(\left\{\overline{X_{j}};1\leq j\leq k\right\}\) be the logical X operators, \(m+k<n\). Its ZX normal form can be found via the following steps:_
(a) _For each physical qubit, introduce an X spider._
(b) _For each X-type stabilizer generator_ \(S_{i}^{x}\) _and logical operator_ \(\overline{X_{j}}\)_, introduce a Z spider and connect it to all X spiders where this operator has support._
(c) _Give each X spider an output wire._
(d) _For each Z spider representing_ \(\overline{X_{j}}\)_, give it an input wire._
As an example, the ZX normal form for the Steane code is drawn in Fig. 4. The XZ normal form can be constructed based on Z-type stabilizer generators and logical Z operators by inverting the roles of X and Z spiders in the above procedure. In [39], Kissinger gave an algorithm to rewrite any phase-free ZX diagram into both the ZX and XZ normal forms, and pointed out that it is sufficient to represent a CSS code encoder using either one of the forms.
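Concretely, the connectivity produced by Def. 2.4 is nothing but the biadjacency structure between the X-type stabilizer generators (plus logical X operators) and the physical qubits. The following sketch is ours, not taken from [39]; it records the Steane-code normal form of Fig. 4 as an explicit edge list and only counts the external wires.

```python
import numpy as np

G = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])      # X-type stabilizer generators, Eq. (1)
Xbar = np.array([1, 0, 0, 1, 1, 0, 0])     # logical X operator, Eq. (2)

# Steps (a)-(b): one X spider per physical qubit, one Z spider per row of
# [G; Xbar], connected wherever the corresponding operator has support.
rows = np.vstack([G, Xbar])
edges = [(f"Z{r}", f"X{q}") for r, row in enumerate(rows) for q in np.flatnonzero(row)]

# Steps (c)-(d): one output wire per X spider and one input wire per logical Z spider.
n_outputs, n_inputs = G.shape[1], 1
print(len(rows), "Z spiders,", G.shape[1], "X spiders,", len(edges), "internal wires,",
      n_inputs, "input and", n_outputs, "output wires")
```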
## 3 Graphical Construction of CSS Encoders
### ZX Normal Forms for CSS Subsystem Codes
We generalize the ZX normal form for CSS stabilizer codes to CSS subsystem codes as follows.
Figure 4: The Steane code encoder in the ZX normal form.
Figure 3: Some other useful rewrite rules, each derivable from the rules in Figure 2. \(k\in\mathbb{Z}_{2}\). Each equation still holds when we interchange X and Z spiders.
**Definition 3.1**.: _For an \(\llbracket n,k,r,d\rrbracket\) CSS subsystem code defined by \((\mathcal{S},\mathcal{G})\), let \(\left\{S_{i}^{x};1\leq i\leq m\right\}\) be the X-type stabilizer generators, \(\left\{L_{g_{t}}^{x};1\leq t\leq r\right\}\) be the X-type gauge operators, and \(\left\{\overline{X_{j}};1\leq j\leq k\right\}\) be the logical X operators, \(m+k+r<n\). Its ZX normal form can be found via the following steps:_
(a) _For each physical qubit, introduce an_ \(X\) _spider._
(b) _For each stabilizer generator_ \(S_{i}^{x}\)_, logical operator_ \(\overline{X_{j}}\) _and gauge operator_ \(L_{g_{t}}^{x}\)_, introduce a Z spider and connect it to all X spiders where this operator has support._
(c) _Give each_ \(X\) _spider an output wire._
(d) _For each_ \(Z\) _spider representing_ \(\overline{X_{j}}\)_, give it an input wire._
(e) _For all_ \(Z\) _spiders representing_ \(L_{g_{t}}^{x}\)_, attach to them a joint arbitrary input state (i.e., a density operator_ \(\rho\)_)._
Similar to CSS stabilizer codes, CSS subsystem codes also have an equivalent XZ normal form, which can be found by inverting the role of Z and X in the above procedure.
For \(n>3\), below we exemplify the ZX normal form for an \(\llbracket n,1,2,d\rrbracket\) CSS subsystem code with three X-type stabilizer generators \(\left\{S_{1}^{x},S_{2}^{x},S_{3}^{x}\right\}\), two X-type gauge operators \(\left\{L_{g_{1}}^{x},L_{g_{2}}^{x}\right\}\), and one logical operator \(\left\{\overline{X}\right\}\). For simplicity, we abbreviate the bundles of wires connecting Z and X spiders. The detailed connectivities are omitted here, but they should be clear following step (b) in Def. 3.1. This notation will be used in the remainder of this paper.
### Pushing through the Encoder
For any \(\llbracket n,k,d\rrbracket\) CSS code, its encoder map \(E\) is an isometry from \(k\) logical qubits to \(n\) physical qubits,
\[E\colon(\mathbb{C}^{2})^{\otimes k}\to(\mathbb{C}^{2})^{\otimes n},\]
drawn as a box with \(k\) input wires and \(n\) output wires.
**Definition 3.2**.: _Let \(\overline{X_{i}}\) and \(\overline{Z_{i}}\) be the X and Z operators acting on the \(i\)-th logical qubit. Let \(\overline{\mathcal{X}_{i}}\) and \(\overline{\mathcal{Z}_{i}}\) be the physical implementations of \(\overline{X_{i}}\) and \(\overline{Z_{i}}\) respectively; that is,_
\[E\,\overline{X_{i}}=\overline{\mathcal{X}_{i}}\,E\qquad\text{and}\qquad E\,\overline{Z_{i}}=\overline{\mathcal{Z}_{i}}\,E.\]
In other words, pushing \(\overline{X_{i}}\) (or \(\overline{Z_{i}}\)) through \(E\) yields \(\overline{\mathcal{X}_{i}}\) (or \(\overline{\mathcal{Z}_{i}}\)). Using ZX rewrite rules along with the ZX (or XZ) normal form, we can prove the following lemma.
**Lemma 3.1**.: _For any CSS code, all \(\overline{X_{i}}\) and \(\overline{Z_{i}}\) are implementable by products of single-qubit Pauli operators. In other words, all CSS codes have transversal \(\overline{X_{i}}\) and \(\overline{Z_{i}}\)._
Proof.: Consider an arbitrary CSS code. Without loss of generality, represent its encoder \(E\) in the ZX normal form following Def. 2.4. Then proceed by applying the \(\pi\)-copy rule to every \(\overline{X_{i}}\) (the X spider with a phase \(\pi\) on the left-hand side of the encoder \(E\)).
Below we illustrate the proof using the \([\![4,2,2]\!]\) code as an example.
**Example 3.1**.: _For the \([\![4,2,2]\!]\) code, \(\overline{X_{1}}=X_{1}X_{2}\)._
Beyond just X or Z spiders, one can push _any_ ZX diagram acting on the logical qubits through the encoder. Such pushing is bidirectional, and the left-to-right direction is interpreted as finding a physical implementation for a given logical operator.
**Proposition 3.1**.: Let \(E\) be the encoder of a CSS code. For any ZX diagram \(L\) on the left-hand side of \(E\), one can write down a corresponding ZX diagram \(P\) on the right-hand side of \(E\), such that \(EL=PE\). In other words, \(P\) is a valid physical implementation of \(L\) for that CSS code.
Proof.: We proceed as follows. First, unfuse all spiders on the logical qubit wires of \(L\) whenever they are not phase-free or have more than one external wire.
For each X (or Z) spider on a logical qubit wire, rewriting \(E\) to be in the ZX (or XZ) normal form and applying the strong complementarity (sc) rule copies the spider through the encoder.
On the left-hand side of the resulting equality, a phase-free X (or Z) spider acts on the \(i\)-th logical qubit; on the right-hand side, phase-free X (or Z) spiders act on all physical qubits wherever \(\overline{X}_{i}\) (or \(\overline{Z}_{i}\)) has support. Therefore, any type of \(L\) can be pushed through \(E\), resulting in a diagram \(P\) which satisfies \(EL=PE\).
In [26], it was proved that a physical implementation \(P\) of a logical operator \(L\) satisfies \(L=E^{\dagger}PE\). This is implied by \(EL=PE\) as \(E^{\dagger}E=I\).
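Although the point of Prop. 3.1 is that these manipulations stay entirely at the diagrammatic level, Def. 3.2 and Lemma 3.1 can also be confirmed by explicitly building the Steane encoder as a \(2^{7}\times 2\) matrix. The Python sketch below is ours: it constructs a codespace basis from the stabilizer projector and checks that the transversal operators \(X_{1}X_{4}X_{5}\) and \(Z_{1}Z_{4}Z_{5}\) satisfy \(\overline{\mathcal{X}}E=E\overline{X}\) and \(\overline{\mathcal{Z}}E=E\overline{Z}\).

```python
import numpy as np
from functools import reduce

X1 = np.array([[0.0, 1.0], [1.0, 0.0]])
Z1 = np.array([[1.0, 0.0], [0.0, -1.0]])

def pauli(single, support, n=7):
    """Tensor product acting as `single` on the qubits in `support`, identity elsewhere."""
    return reduce(np.kron, [single if q in support else np.eye(2) for q in range(n)])

supports = [{0, 2, 4, 6}, {1, 2, 5, 6}, {3, 4, 5, 6}]        # Eq. (1), 0-indexed
stabilizers = [pauli(X1, s) for s in supports] + [pauli(Z1, s) for s in supports]

# Projector onto the joint +1 eigenspace, and a logical basis built from it.
P = reduce(np.matmul, [(np.eye(2 ** 7) + S) / 2 for S in stabilizers])
zero_L = P[:, 0] / np.linalg.norm(P[:, 0])                   # P|0...0> is a logical |0>
Xbar, Zbar = pauli(X1, {0, 3, 4}), pauli(Z1, {0, 3, 4})      # Eq. (2)
E = np.column_stack([zero_L, Xbar @ zero_L])                 # encoder isometry, 128 x 2

assert np.allclose(E.T @ E, np.eye(2))                       # E^dagger E = I
assert np.allclose(Xbar @ E, E @ X1)                         # pushing Xbar through E
assert np.allclose(Zbar @ E, E @ Z1)                         # pushing Zbar through E
print("E is an isometry, and X1 X4 X5, Z1 Z4 Z5 implement the logical X and Z")
```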
## 4 Graphical Morphing of CSS Codes
One way to transform CSS codes is known as _code morphing_. It provides a systematic framework to construct new codes from an existing code while preserving the number of logical qubits in the morphed code. Here, we present this procedure through the rewrites of the encoder diagram using the ZX calculus. Let us start by revisiting the code morphing definition in [62].
**Definition 4.1**.: _Let \(\mathcal{S}\) be a stabilizer group and \(\mathcal{C}\) be its joint \(+1\) eigenspace. \(\mathcal{C}\) is called the parent code. Let \(Q\) denote the set of physical qubits of \(\mathcal{C}\) and \(R\subseteq Q\). Then \(\mathcal{S}(R)\) is a subgroup of \(\mathcal{S}\) generated by all stabilizers of \(\mathcal{S}\) that are fully supported on \(R\). Let \(\mathcal{C}(R)\) be the joint \(+1\) eigenspace of \(\mathcal{S}(R)\), and \(\mathcal{C}(R)\) is called the child code. Given the parent code encoder \(E_{\mathcal{C}}\), concatenate it with the inverse of the child code encoder \(E_{\mathcal{C}(R)}^{\dagger}\). This gives the morphed code \(\mathcal{C}_{\setminus R}\)._
Fig. 5 provides two equivalent interpretations for the code morphing process. In Fig. 5(a), Def. 4.1 is depicted by the circuit diagram. Since \(E_{\mathcal{C}(R)}\) is an isometry, \(E_{\mathcal{C}(R)}^{\dagger}E_{\mathcal{C}(R)}=I\). By construction, the equation shown in Fig. 5(a) holds [62]. Moreover, the parameters of \(\mathcal{C}=\llbracket n,k,d\rrbracket\), \(\mathcal{C}(R)=\llbracket n_{1},k_{1},d_{1}\rrbracket\), and \(\mathcal{C}_{\setminus R}=\llbracket n_{2},k_{2},d_{2}\rrbracket\) are characterized below. Let \(m,m_{1},m_{2}\) be the number of stabilizer generators for \(\mathcal{C}\), \(\mathcal{C}(R)\), and \(\mathcal{C}_{\setminus R}\) respectively. Then
\[n_{2}=n-n_{1}+k_{1},\quad k_{2}=k,\quad m_{2}=(n-k)-(n_{1}-k_{1})=m-m_{1}, \quad d_{1},d_{2}\in\mathbb{N}.\]
Fig. 5(b) provides a concrete example of applying Def. 4.1 to the \(\llbracket 7,1,3\rrbracket\) Steane code, where \(Q=\{1,2,3,4,5,6,7\}\) and \(R=\{2,3,6,7\}\). As a result, the \(\llbracket 5,1,2\rrbracket\) code is morphed from the parent code along with the \(\llbracket 4,2,2\rrbracket\) child code. This morphed code inherits a fault-tolerant implementation of the Clifford group from the \(\llbracket 7,1,3\rrbracket\) code, which has a transversal implementation of the logical Clifford operators. This morphing process is represented in the ZX diagram by cutting the edges labelled by \(\overline{1}\) and \(\overline{2}\) adjacent to the X spider. This is equivalent to concatenating the ZX diagram of \(E_{\llbracket 4,2,2\rrbracket}^{\dagger}\) in Fig. 5(a).
Figure 5: Code morphing can be visualized using both circuit and ZX diagrams. In Fig. 5(a), code morphing is viewed as a concatenation of the parent code encoder \(E_{\mathcal{C}}\) and the inverse of the child code encoder \(E_{\mathcal{C}(R)}^{\dagger}\). In Fig. 5(b), the encoder \(E_{\mathcal{C}}\) of the Steane code is represented in the ZX normal form. As described in Proc. 4.1, by applying ZX rules (id) and (fusion) in Fig. 2, we can perform code morphing by bipartitioning it into the encoder \(E_{\mathcal{C}_{\setminus R}}\) of the morphed code \(\mathcal{C}_{\setminus R}=\llbracket 5,1,2\rrbracket\), and the encoder \(E_{\mathcal{C}(R)}\) of the child code \(\mathcal{C}(R)=\llbracket 4,2,2\rrbracket\).
Next, we generalize the notion of code morphing and show how ZX calculus could be used to study these relations between the encoders of different CSS codes. More precisely, we provide an algorithm to morph a new CSS code from an existing CSS code.
**Procedure 4.1**.: _Given a parent code \(\mathcal{C}\) and a child code \(\mathcal{C}(R)\) satisfying Def. 4.1, construct the encoder of \(\mathcal{C}\) in the ZX normal form. Then the code morphing proceeds as follows:_
(a) _Unfuse every Z spider which is supported on_ \(c\) _qubits within_ \(R\) _and_ \(f\) _qubits outside_ \(R\)_,_ \(c\neq 0\)_,_ \(f\neq 0\)_._
(b) _Add an identity X spider between each pair of Z spiders being unfused in step (a)._
(c) _Cut the edge between every identity X spider and the Z spiders supported on the_ \(f\) _qubits in_ \(R\)_._
It follows that the subdiagram containing \(R\) corresponds to the ZX normal form of \(E_{\mathcal{C}(R)}\). It has the same number of X spiders as R, so \(n_{1}=|R|\). Suppose that there are \(h\) Z spiders being unfused. Then \(h\) must be bounded by the number of Z spiders in the ZX normal form of \(E_{\mathcal{C}}\). As each spider unfusion introduces a logical qubit to \(\mathcal{C}(R)\), \(k_{1}=h\). On the other hand, the complement subdiagram contains \(n-n_{1}+k_{1}\) X spiders as each edge cut introduces a new X spider into the complement subdiagram. It also contains \(k\) logical qubits as the input edges in the ZX normal form of \(E_{\mathcal{C}}\) are invariant throughout the spider-unfusing and edge-cutting process. This gives the ZX normal form for the encoder of the morphed code \(\mathcal{C}_{\setminus\mathcal{R}}=\llbracket n_{2},k_{2},d_{2}\rrbracket\), where \(n_{2}=n-n_{1}+k_{1}\), \(k_{2}=k\), \(d_{2}\in\mathbb{N}\). As a result, the ZX normal form of \(E_{\mathcal{C}}\) is decomposed into the ZX normal forms of \(E_{\mathcal{C}(R)}\) and \(E_{\mathcal{C}_{\setminus\mathcal{R}}}\) respectively.
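The bookkeeping in this argument is easy to reproduce for the Steane-code example of Fig. 5(b). The following sketch (ours, purely illustrative) counts, directly from the stabilizer data, how many Z spiders are unfused by step (a) of Proc. 4.1 for \(R=\{2,3,6,7\}\) and recovers the qubit counts of the child and morphed codes.

```python
import numpy as np

G = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
Xbar = np.array([1, 0, 0, 1, 1, 0, 0])
B = np.vstack([G, Xbar])              # Z spiders of the Steane ZX normal form

R = {1, 2, 5, 6}                      # qubits {2, 3, 6, 7} of Fig. 5(b), 0-indexed
inside = np.array([q in R for q in range(7)])

# Step (a) of Proc. 4.1 unfuses exactly the Z spiders whose support straddles
# R and its complement; each unfusion contributes one logical qubit to C(R).
h = sum(bool(row[inside].any() and row[~inside].any()) for row in B)

n, k, n1, k1 = 7, 1, len(R), h
print(f"child code qubit counts:  [[{n1}, {k1}]]")          # [[4, 2]]
print(f"morphed code qubit counts: [[{n - n1 + k1}, {k}]]")  # [[5, 1]]
```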
As the XZ and ZX normal forms are equivalent for CSS codes, Proc. 4.1 can be carried out for the XZ normal form by inverting the roles of Z and X at each step.
Here, we exemplify the application of Proc. 4.1 by morphing two simple CSS codes. Unlike Fig. 5(b), Ex. 4.1 chooses a different subset of qubits, \(R=\{4,5,6,7\}\), to obtain the \(\llbracket 6,1,1\rrbracket\) morphed code. In Ex. 4.2, we visualize the \(\llbracket 10,1,2\rrbracket\) code morphing from the \(\llbracket 15,1,3\rrbracket\) quantum Reed-Muller code. The \(\llbracket 10,1,2\rrbracket\) code is interesting because it inherits a fault-tolerant implementation of the logical \(T\) gate from its parent code, which has a transversal implementation of the logical \(T\) gate.
**Example 4.1**.: _Let the parent code \(\mathcal{C}\) be the Steane code and the child code be \(C(R)=\llbracket 4,3,1\rrbracket\). By Proc. 4.1, we obtain the morphed code \(\mathcal{C}_{\setminus R}=\llbracket 6,1,1\rrbracket\). Note that for \(C(R)\), there is one X-type stabilizer generator and no Z-type stabilizer generator. This means that \(C(R)\) cannot detect a single-qubit \(X\) error, so it has a distance of \(1\). In \(\mathcal{C}_{\setminus R}\), the physical qubit labelled \(\overline{3}\) is not protected by any X-type stabilizer. Therefore, \(\mathcal{C}_{\setminus R}\) is of distance \(1\)._
**Example 4.2**.: _Let the parent code \(\mathcal{C}\) be the quantum Reed-Muller code and the child code be \(\mathcal{C}(R)=\llbracket 8,3,2\rrbracket\). By Proc. 4.1, we obtain the morphed code \(\mathcal{C}_{\setminus R}=\llbracket 10,1,2\rrbracket\). For brevity, the X spiders representing physical qubits and the logical qubit wires inputting to the Z spiders are omitted._
## 5 Graphical Code Switching of CSS Codes
Another way to transform CSS codes is known as _code switching_. It is a widely studied technique in quantum error correction. Codes with complementary fault-tolerant gate sets are switched between each other to realize a universal set of logical operations. As a case study, we focus on the code switching protocol between the Steane code and the quantum Reed-Muller code [2, 51, 53]. Since this process is bidirectional, the reasoning for one direction can be simply adjusted for the opposite direction. Recall in Lem. 2.1, we showed that the extended Steane code is equivalent to the Steane code up to some auxiliary state. In what follows, we focus on the _backward switching_ from the quantum Reed-Muller code to the extended Steane code.
Using the ZX calculus, we provide a graphical interpretation for the backward code switching. More precisely, it is visualized as gauge-fixing the \(\llbracket 15,1,3,3\rrbracket\) subsystem code, followed by a sequence of syndrome-determined recovery operations.
We first characterize the relations between the quantum Reed-Muller code, the extended Steane code, and the \(\llbracket 15,1,3,3\rrbracket\) subsystem code. For brevity, we denote these codes as \(\mathcal{C}_{qrm},\ \mathcal{C}_{ex}\) and \(\mathcal{C}_{sub}\), and their respective encoders as \(E_{qrm},\ E_{ex}\), and \(E_{sub}\).
**Lemma 5.1**.: _When the three gauge qubits are in the \(|\overline{+++}\rangle\) state, \(\mathcal{C}_{sub}\) is equal to \(\mathcal{C}_{ex}\), as shown in Fig. 6._
Proof.: According to Def. 2.3, represent \(E_{sub}\) in the XZ normal form, with Z-type stabilizer generators \(S_{i}^{z}\), Z-type gauge operators \(L_{g_{j}}^{z}\), and one logical Z operator \(\overline{Z}\), \(1\leq i\leq 7,\ 1\leq j\leq 3\). After applying a
Figure 6: \(\mathcal{C}_{sub}\) is equivalent to \(\mathcal{C}_{ex}\) up to a fixed state of gauge qubits.
sequence of rewrite rules, we obtain exactly the XZ normal form for \(E_{ex}\).
Alternatively, if one chooses to represent \(E_{sub}\) in the ZX normal form, the proof proceeds by applying the (fusion) rule to the Z spiders and identifying the gauge operators \(L^{x}_{g_{1}},\,L^{x}_{g_{2}},\,L^{x}_{g_{3}}\) of \(\mathcal{C}_{sub}\) as the stabilizers \(S^{x}_{5},\,S^{x}_{6},\,S^{x}_{7}\) of \(\mathcal{C}_{ex}\), respectively.
**Corollary 5.1**.: When the three gauge qubits are in the \(|\overline{000}\rangle\) state, \(\mathcal{C}_{sub}\) is equal to \(\mathcal{C}_{qrm}\).
In [2, 51], code switching is described as a _gauge fixing_ process. Further afield, [64] provides a generic recipe to gauge-fix a CSS subsystem code. Here, we generalize Lem. 5.1 and describe how to gauge-fix \(\mathcal{C}_{sub}\) to \(\mathcal{C}_{ex}\) using the ZX calculus.
**Proposition 5.2**.: Gauge-fixing \(\mathcal{C}_{sub}\) in the following steps results in \(\mathcal{C}_{ex}\), as shown in Fig. 7.
(a) Measure three X-type gauge operators \(L^{X}_{g_{i}}\) and obtain the corresponding outcomes \(k_{1},k_{2},k_{3}\in\mathbb{Z}_{2}\).
(b) When \(k_{i}=1\), the gauge qubit \(i\) has collapsed to the wrong state \(|\overline{-}\rangle\). Apply the Z-type recovery operation \(L^{Z}_{g_{i}}\).
Figure 7: Gauge-fixing \(\mathcal{C}_{sub}\) to \(\mathcal{C}_{ex}\) in the circuit diagram.
Proof.: By Def. 3.1, construct the ZX normal form of \(E_{sub}\) in the blue dashed box of (i). Then the three gauge operators \(L^{X}_{g_{i}}\) are measured in step (a). The subsequent equalities follow from Figs. 2 and 3. Next, we observe that the purple dashed box in (iii) is exactly the encoder of the \(\llbracket 15,4,3\rrbracket\) stabilizer code. By Lemma 3.2 in [39], it can be equivalently expressed in the XZ normal form, as in (iv). By Prop. 3.1, pushing each Z spider with the phase \(k_{i}\pi\) across \(E_{\llbracket 15,4,3\rrbracket}\) results in (v). In step (b), Pauli Z operators are applied based upon the measurement outcome \(k_{i}\), which corresponds to the recovery operations in the red dashed box of (v). After that, the gauge qubits of \(\mathcal{C}_{sub}\) are set to the \(\lvert\overline{+++}\rangle\) state. By Lem. 5.1, we obtain the XZ normal form for \(E_{ex}\), as shown in the orange dashed box of (vi). Therefore, the equation in Fig. 7 holds.
We sum up by explaining how to obtain \(\mathcal{C}_{ex}\) and \(\mathcal{C}_{qrm}\) by gauge-fixing \(\mathcal{C}_{sub}\). In Prop. 5.2, we showed that measuring the X-type gauge operators \(L^{X}_{g_{i}}\) followed by the Z-type recovery operations \(L^{Z}_{g_{i}}\) is equivalent to adding \(L^{X}_{g_{i}}\) to the stabilizer group \(\mathcal{S}_{sub}\). This results in the formation of \(\mathcal{C}_{ex}\). Analogously, measuring the Z-type gauge operators \(L^{Z}_{g_{i}}\) followed by the X-type recovery operations \(L^{X}_{g_{i}}\) is equivalent to adding \(L^{Z}_{g_{i}}\) to \(\mathcal{S}_{sub}\). Thus, we obtain \(\mathcal{C}_{qrm}\).
Alternatively, gauge-fixing \(\mathcal{C}_{sub}\) can be viewed as a way of switching between \(\mathcal{C}_{ex}\) and \(\mathcal{C}_{qrm}\) [2, 53]. As an example, in Fig. 8, we visualize the measurement of \(L^{X}_{g_{1}}\coloneqq X_{1}X_{3}X_{5}X_{7}\) in order to switch from \(\mathcal{C}_{qrm}\) to \(\mathcal{C}_{ex}\). The effect of measuring the other X-type gauge operators follows analogously.
By Def. 3.1, construct the XZ normal form of \(E_{qrm}\) in (i). Then measure \(L_{g_{1}}^{X}\) and apply a sequence of rewrite rules to the ZX diagram. In (v), the stabilizer \(L_{g_{1}}^{Z}:=Z_{2}Z_{3}Z_{10}Z_{11}\) is removed from the stabilizer group \(\mathcal{S}_{qrm}\). Meanwhile, the recovery operation can be read off from the graphical derivation: \((Z_{2}Z_{3}Z_{10}Z_{11})^{k_{1}}=\left(L_{g_{1}}^{Z}\right)^{k_{1}}\), \(k_{1}\in\mathbb{Z}_{2}\).
Overall, ZX visualization provides a deeper understanding of the gauge fixing and code switching protocols. On top of revealing the relations between different CSS codes' encoders, it provides a simple yet rigorous test for various fault-tolerant protocols. Beyond this, it will serve as an intuitive guiding principle for the implementation of various logical operations.
## 6 Conclusion
In this paper, we generalize the notions in [39] and describe a normal form for CSS subsystem codes. Building on the equivalence between CSS codes and phase-free ZX diagrams, we provide a bidirectional rewrite rule that establishes a correspondence between a logical ZX diagram and its physical implementation. With these tools in place, we provide a graphical representation of two code transformation techniques: code morphing, a procedure that transforms a code by unfusing spiders associated with the stabilizer generators, and gauge fixing, where different stabilizer codes can be obtained from a common subsystem code. These explicit graphical derivations show how the ZX calculus and graphical encoder maps relate several equivalent perspectives on these code-transforming operations, and point to the potential of the ZX calculus for simplifying fault-tolerant protocols and verifying their correctness.
Looking ahead, many questions remain. It is still not clear how to present the general code deformation of CSS codes using phase-free ZX diagrams. Moreover, understanding code concatenation through the lens of the ZX calculus may help derive new and better codes. In addition, it would be interesting to look at other code modification techniques derived from classical coding theory [48].
Figure 8: The switching from \(\mathcal{C}_{qrm}\) to \(\mathcal{C}_{ex}\) provides an alternative interpretation of Prop. 5.2. After measuring \(L_{g_{1}}^{X}\), \(L_{g_{1}}^{Z}\) is removed from the stabilizer group \(\mathcal{S}_{qrm}\) and the recovery operation is performed based on the measurement syndrome. Note that unrelated X and Z spiders are omitted from the ZX diagrams.
## 7 Acknowledgement
The authors would like to thank Thomas Scruby for enlightening discussions. SML and MM wish to thank NTT Research for their financial and technical support. This work was supported in part by Canada's NSERC. Research at IQC is supported in part by the Government of Canada through Innovation, Science and Economic Development Canada. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Colleges and Universities. LY is supported by an Oxford - Basil Reeve Graduate Scholarship at Oriel College with the Clarendon Fund.
|
2305.03150 | Theory and simulation of multiphase coexistence in biomolecular mixtures | Biomolecular condensates constitute a newly recognized form of spatial
organization in living cells. Although many condensates are believed to form as
a result of phase separation, the physicochemical properties that determine the
phase behavior of heterogeneous biomolecular mixtures are only beginning to be
explored. Theory and simulation provide invaluable tools for probing the
relationship between molecular determinants, such as protein and RNA sequences,
and the emergence of phase-separated condensates in such complex environments.
This review covers recent advances in the prediction and computational design
of biomolecular mixtures that phase-separate into many coexisting phases.
First, we review efforts to understand the phase behavior of mixtures with
hundreds or thousands of species using theoretical models and statistical
approaches. We then describe progress in developing analytical theories and
coarse-grained simulation models to predict multiphase condensates with the
molecular detail required to make contact with biophysical experiments. We
conclude by summarizing the challenges ahead for modeling the inhomogeneous
spatial organization of biomolecular mixtures in living cells. | William M. Jacobs | 2023-05-04T21:01:30Z | http://arxiv.org/abs/2305.03150v1 | # Theory and simulation of multiphase coexistence in biomolecular mixtures
###### Abstract
Biomolecular condensates constitute a newly recognized form of spatial organization in living cells. Although many condensates are believed to form as a result of phase separation, the physicochemical properties that determine the phase behavior of heterogeneous biomolecular mixtures are only beginning to be explored. Theory and simulation provide invaluable tools for probing the relationship between molecular determinants, such as protein and RNA sequences, and the emergence of phase-separated condensates in such complex environments. This review covers recent advances in the prediction and computational design of biomolecular mixtures that phase-separate into many coexisting phases. First, we review efforts to understand the phase behavior of mixtures with hundreds or thousands of species using theoretical models and statistical approaches. We then describe progress in developing analytical theories and coarse-grained simulation models to predict multiphase condensates with the molecular detail required to make contact with biophysical experiments. We conclude by summarizing the challenges ahead for modeling the inhomogeneous spatial organization of biomolecular mixtures in living cells.
## I Introduction
The discovery that intracellular "organelles" can exist without membranes has revolutionized molecular and cellular biology [1; 2]. Many such intracellular structures, now collectively referred to as "biomolecular condensates," have been proposed to form via phase separation [2; 3; 4; 5]. Physically, this means that a surface tension holds the phase-separated condensate together, while individual biomolecules--including proteins, RNAs, and other small molecules--exchange between the condensate and the surrounding fluid in dynamic equilibrium. Phase-separated condensates represent a unique form of biological organization compared to traditional membrane-bound organelles, since the absence of a membrane allows for rapid assembly and disassembly in response to stimuli [3].
Over the past 15 years, an increasingly large number of biomolecular condensates have been identified [6]. Because of the wide range of biological phenomena in which condensates play a role, including both fundamental biological processes [7; 8; 9; 10; 11; 12; 13; 14] and a variety of pathological conditions [15; 16], it is important to understand the biophysical mechanisms that control which biomolecules partition into specific condensates. Theoretical advances are needed to guide experiments probing the relationship between the properties of individual biomolecules and emergent condensate structures in complex environments. In particular, the physicochemical determinants of condensate composition and stability in heterogeneous intracellular environments--where thousands of biomolecular species are present--are only beginning to be explored. This review summarizes theoretical and simulation efforts in this direction using approaches based on equilibrium thermodynamics.
### Linking physicochemical properties and condensate thermodynamics
How do biomolecular determinants such as amino-acid or nucleotide primary sequence, secondary/tertiary structure, and chemical modifications control the compositions and spatial organization of phase-separated intracellular condensates (Fig. 1)? This question has been addressed primarily within the context of equilibrium thermodynamics, in which the phase behavior of a macromolecular mixture is governed by free energies at thermal equilibrium. Within this framework, the partitioning of biomolecules into phase-separated condensates is determined by equilibrium chemical potentials, while condensate (dis)assembly dynamics are governed by free-energy gradients close to equilibrium and/or transitions between metastable states. Predictions based on this near-equilibrium assumption generally hold up well when tested against _in vitro_ experiments [3; 4; 17]. Thus, while living systems may be more accurately characterized as nonequilibrium steady states under some conditions [18], we will restrict our attention to near-equilibrium approaches for predicting biomolecular phase separation in this review. We will also use the common terminology _liquid-liquid phase separation (LLPS)_[2; 19; 20; 21; 22] to describe reversible thermodynamic phase transitions between (potentially complex) fluid phases with different macromolecular concentrations, as our discussion will focus on static properties such as condensate composition and spatial organization. Nonetheless, we note that condensed phases in biology often exhibit viscoelastic dynamical properties and may irreversibly age into solid phases due to the complexity of the interactions among biological macromolecules [6; 23; 24; 25; 26; 27].
Concepts from polymer physics have helped shape the prevailing view that transient associations among biomolecules give rise to the overall net attractive interactions required to bring about LLPS [28]. These interactions are commonly referred to as "multivalent,"
since biomolecules can associate through multiple interaction sites via a variety of forms of noncovalent bonding. Particular attention has been given to conformationally heterogeneous proteins, including intrinsically disordered proteins (IDPs) and multidomain proteins containing intrinsically disordered regions (IDRs) [28]. In the context of IDPs, multivalency refers to the ability of an unfolded protein to engage in many residue-residue contacts with nearby proteins in a condensed phase. Folded domains within multidomain proteins can also contribute to the multivalency required to drive LLPS, either through protein-protein interactions (PPIs) [29] or, in the case of RNA binding domains (RBDs), through interactions with RNA [30]. Finally, nucleic acid mixtures can phase separate under certain conditions due to intermolecular base-pairing [31, 32, 33] and nonspecific association [34]. Importantly, the strengths of the net interactions among biopolymers in liquid-like condensates are typically comparable to the thermal energy, since the protein and nucleic acid constituents of biomolecular condensates can often remain fluid on biologically relevant timescales.
### Emergence of multiphase coexistence in complex biomolecular mixtures
Biological LLPS results in an enormous diversity of condensates in living cells. Each of these condensates is associated with a specific chemical composition [35] and may be enriched in many distinct biomolecules relative to the surrounding intracellular fluid [36]. The biological functions of condensates derive directly from this compositional specificity, since the biochemical reactions that take place within the spatial confines of a condensate are dependent on the molecular concentrations that define the local environment. Theoretical descriptions of _in vivo_ condensate assembly must therefore account for complex intracellular mixtures comprising thousands of protein and RNA species, which can all potentially interact with one another.
At the simplest level, it is important to distinguish between homotypic and heterotypic interactions between species of the same or different types, respectively. In multicomponent mixtures with strong heterotypic interactions, the tendency of any particular species to partition into a condensate depends on the concentrations of all its potential interaction partners [37]. A consequence is that the equilibrium compositions of coexisting phases may depend on the concentrations of all the components in the mixture, even when there are only two phases in coexistence. This feature can be used to detect the influence of multiple components on phase separation and to infer the relative strengths of homotypic and heterotypic interactions by measuring the volume fractions of coexisting phases at different overall mixture concentrations [33, 38].
Multiple immiscible condensates are commonly found to coexist within a single intracellular compartment [6]. Moreover, depending on the properties of the interfaces between pairs of condensates and between condensates and the surrounding fluid, immiscible condensates can self-organize into spatially organized structures [39]. Well characterized examples include the nucleolus [40, 11] and stress-granule/P-body condensates [41, 29, 42]. It has also become clear that subtle changes in protein and RNA concentrations can perturb the interfacial properties and thus dramatically alter the architecture of multiphasic condensates [43, 29]. Nonetheless, predicting multiphase coexistence in the context of heterogeneous intracellular fluids remains a formidable challenge.
### Aims and scope of this review
Developing theoretical and computational models of multiphasic, multicomponent biomolecular mixtures is essential for understanding the relationship between molecular determinants and biological self-organization via LLPS. The purpose of this article is to highlight a number of advances in this direction. Many recent reviews focusing on theory and simulation, including Refs. [44, 4], and [45], have described coarse-grained modeling approaches for IDPs, multidomain proteins, and nucleic acids. These approaches have primarily been applied to study the properties of single molecules and to mimic _in vitro_ experiments on condensate formation.
Figure 1: Multivalent interactions among a wide variety of biological macromolecules, including intrinsically disordered proteins (with amino acids represented by colored circles), multidomain proteins, and nucleic acids, contribute to the thermodynamic driving forces responsible for liquid–liquid phase separation. Phase-separated condensates, including higher-order structures composed of multiple immiscible phases, resemble “membraneless organelles” whose interfaces are stabilized by surface tensions. The molecular compositions within each phase (\(\alpha\)–\(\epsilon\)) are distinct as a result of specific interactions among the constituent biomolecules.
By contrast, we focus here on theoretical challenges that arise when considering multiphase coexistence, especially in mixtures with thousands of components. Studies along these lines have provided complementary insights that are needed to understand biomolecular condensates in an intracellular context (Fig. 2). For broader context, we encourage the reader to consult other recent works, including reviews that emphasize the biological functionality and regulation of condensates [15; 46], the interplay between physical gelation and phase separation of multivalent macromolecules [47], and the conformational dynamics of macromolecules within condensates [48].
In this review, we begin in Sec. II by covering the thermodynamic principles of phase separation in multicomponent fluids. We highlight recently devised numerical methods for calculating multiphase coexistence in both mean-field and classical molecular simulation models. We then discuss theoretical results obtained from mean-field multicomponent mixture models in Sec. III. These studies have provided important insights into phase-behavior scaling relations, although they lack molecular detail and, as such, require assumptions on the statistical properties of intermolecular interactions in complex fluids. In Sec. IV, we examine efforts to describe multicomponent condensates with both analytical and computational models that capture the molecular sequence dependence or the structure of a PPI network. The implications of these studies for the mean-field multicomponent mixture models introduced in Sec. III, and potential extensions thereof, are discussed. Finally, in Sec. V, we identify key challenges that must be overcome in order to describe inhomogeneous spatial organization in living cells with molecular realism.
## II Thermodynamic principles of multicomponent LLPS
Phase coexistence describes an equilibrium state in which a material or fluid exists in multiple phases with distinct physicochemical properties, such as oil droplets suspended in aqueous solution. Thermodynamic equilibrium between coexisting phases is established when the temperature, (osmotic) pressure, and chemical potentials of all molecular species are constant throughout the system. Considering a biomolecular solution at constant volume and temperature, the thermodynamic state of the system can be described by the Helmholtz free-energy density, \(f\). This free energy is a function of the concentrations, \(\{\rho_{i}\}\), of all the molecular components in the mixture. (Latin indices will be used throughout to indicate molecular components, while Greek indices will be used to indicate phases. Analogous arguments apply to the Gibbs free-energy density in the case of fluids at constant pressure.) Phase separation can occur when the free-energy density is a nonconvex function of the molecular concentrations (Fig. 3). In such a case, the free energy can be minimized by forming two or more distinct phases--for example, a condensed droplet and the surrounding cytoplasm--each with different concentrations. A mixture phase separates when the overall concentrations of the solution lie within the _coexistence region_, which is bounded by the concentrations of the coexisting phases. Droplets that emerge as a result of this spontaneous process are stabilized by positive surface tensions at the interfaces that form between the coexisting phases. Whenever \(f\) is nonconvex, there is also a _spinodal region_ within which the free-energy surface has negative curvature.
In a heterogeneous system comprising many different types of biomolecules, the free-energy surface is a high-dimensional object. Nonetheless, coexistence and spinodal regions can still be determined by examining the convexity and local curvature of the free-energy surface. More precisely, the Hessian matrix \(\partial^{2}f/\partial\rho_{i}\partial\rho_{j}\) is not positive definite within the spinodal region, implying that a homogeneous mixture within this region is unstable with respect to concentration fluctuations in one or more directions of concentration space. These directions are described by the eigenvectors that correspond to the negative eigenvalues of \(\partial^{2}f/\partial\rho_{i}\partial\rho_{j}\). The region of a high-dimensional concentration space in which concentration fluctuations are locally unstable is bounded by a spinodal locus, where the determinant \(|\partial^{2}f/\partial\rho_{i}\partial\rho_{j}|=0\).
The molecular concentrations of coexisting bulk phases can be determined by considering the equal pressure and chemical potential conditions. In multicomponent fluids, the free-energy surface is a
Figure 2: Computational and theoretical complexity increases with both the level of molecular detail and the number of distinct components in a mixture. Simulation approaches to biomolecular LLPS range from pairwise-interaction mean-field models to sequence-specific coarse-grained (CG) models. However, mixtures with more than three non-solvent components have so far been studied almost exclusively using pairwise mean-field models.
ids, these conditions can be satisfied by performing a "common tangent plane construction," in which a hyperplane is tangent to the free-energy surface at each point in concentration space that corresponds to a coexisting stable phase. A homogeneous mixture with an overall, or "parent", concentration vector inside the convex hull of the coexisting-phase concentrations can lower its Helmholtz free energy by phase-separating. This convex hull therefore defines the coexistence region, which necessarily encompasses the spinodal region, in a multi-component fluid. Because the tangent at any point on the free-energy surface is equal to the chemical potential vector, \(\{\mu_{i}\}=\partial f/\partial\rho_{i}\), the common tangent plane construction ensures equal chemical potentials for each species across all phases that are in coexistence. Furthermore, the common tangent plane construction implies that the coexisting phases are all global minima of the grand potential density, \(\Omega(\{\rho_{i}\};\{\mu_{i}\})\equiv f(\{\rho_{i}\})-\sum_{i}\rho_{i}\mu_{i}\). This fact ensures equal pressures among all bulk phases.
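A minimal numerical illustration of this construction (ours, not taken from the works reviewed here) is to discretize a model free-energy surface and take the convex hull of the lifted points, following the scheme sketched in Fig. 3: grid points that appear as vertices of the lower hull correspond to stable one-phase compositions, while the remaining points lie inside coexistence regions. The sketch below assumes a simple two-component Flory-Huggins-type free energy (cf. Eq. (4) below) with interaction parameters chosen arbitrarily so that demixing occurs.

```python
import numpy as np
from scipy.spatial import ConvexHull

def free_energy(phi, eps):
    """Dimensionless Flory-Huggins-type free-energy density for two species plus solvent."""
    phi0 = 1.0 - phi.sum(axis=-1)
    entropy = (phi * np.log(phi)).sum(axis=-1) + phi0 * np.log(phi0)
    interaction = 0.5 * np.einsum('...i,ij,...j->...', phi, eps, phi)
    return entropy + interaction

eps = np.array([[-6.0, 0.0],        # strong homotypic attraction, no heterotypic
                [0.0, -6.0]])       # attraction (assumed, illustrative values)

n = 80
phi = np.array([(i / n, j / n) for i in range(1, n) for j in range(1, n) if i + j < n])
f = free_energy(phi, eps)

# Lift the discretized surface to 3D; vertices of downward-facing hull facets
# are stable one-phase compositions, all other grid points phase-separate.
hull = ConvexHull(np.column_stack([phi, f]))
lower_facets = hull.simplices[hull.equations[:, 2] < 0]
stable = np.zeros(len(phi), dtype=bool)
stable[np.unique(lower_facets)] = True
print(f"{(~stable).sum()} of {len(phi)} grid points lie inside a coexistence region")
```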
In general, the coexistence concentrations in a multi-component fluid are not specified uniquely without also prescribing the parent concentrations, \(\{\rho_{i}\}^{\text{(parent)}}\). The connection between the parent and coexisting-phase concentrations is provided by the conservation law
\[\rho_{i}^{\text{(parent)}}=\sum_{\alpha=0}^{K}x^{(\alpha)}\rho_{i}^{(\alpha)}( \{\mu_{j}\})\quad\forall i, \tag{1}\]
where \(\alpha\) indexes the phases in a phase-separated state with \(K+1\) phases, the concentrations \(\{\rho_{i}^{(\alpha)}\}\) indicate coexisting phases with coexistence chemical potentials \(\{\mu_{j}\}\), the volume fractions of the bulk phases are given by \(\{x^{(\alpha)}\}\), and \(\sum_{\alpha=0}^{K}x^{(\alpha)}=1\). (This indexing convention is chosen for later convenience, since we are often interested in phase equilibria involving a solvent-majority phase, \(\alpha=0\).) Eq. (1) simplifies to the well-known lever rule for binary mixtures (e.g., fluids comprising one macromolecular component plus a solvent).
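As a small worked example (ours), Eq. (1) can be inverted for the phase volume fractions by linear least squares once the parent and coexisting-phase concentrations are known. The numbers below are made-up values for a two-component mixture with \(K=1\) (two coexisting phases), chosen to lie exactly on a tie line.

```python
import numpy as np

# Columns: concentration vectors of the two coexisting phases (alpha = 0 is
# the dilute, solvent-majority phase); values are illustrative, not fitted data.
rho_phases = np.array([[0.02, 0.60],      # species 1 in phases 0 and 1
                       [0.05, 0.30]])     # species 2 in phases 0 and 1
rho_parent = np.array([0.194, 0.125])

# Solve rho_parent = sum_alpha x_alpha * rho^(alpha), with sum_alpha x_alpha = 1.
A = np.vstack([rho_phases, np.ones(2)])
b = np.concatenate([rho_parent, [1.0]])
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print("phase volume fractions:", x)       # -> approximately [0.7, 0.3]
```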
The spinodal locus coincides with the boundary of the coexistence region at a critical point, where the concentrations of two coexisting phases merge into a single stable phase. Unlike binary mixtures, there is typically no unique critical point in a multicomponent fluid. Instead, multicomponent critical points lie on a temperature-and-concentration-dependent manifold with dimension one less than the number of non-solvent components. Higher-order critical points, where more than two phases simultaneously merge into a single stable phase, are also possible in multicomponent fluids [49].
Multicomponent phase equilibria can equivalently be determined from the excess chemical potential, \(\mu_{\text{ex},i}\), of each molecular species \(i\). This quantity represents the contribution to the chemical potential that captures all interactions--both enthalpic and entropic--among the molecules, and is thus a function of all the component concentrations [50]. The excess chemical potential is directly related to the partition coefficient, PC, defined as the ratio of a molecule's concentration inside (in) and outside (out) of a phase-separated droplet:
\[\text{PC}_{i}\equiv\frac{\rho_{i}^{\text{(in)}}}{\rho_{i}^{\text{(out)}}}= \exp\left(\beta\mu_{\text{ex},i}^{\text{(out)}}-\beta\mu_{\text{ex},i}^{\text {(in)}}\right), \tag{2}\]
where \(\beta\equiv 1/k_{\text{B}}T\), \(k_{\text{B}}\) is the Boltzmann constant, and \(T\) is the absolute temperature. Partition coefficients are experimentally accessible and biologically relevant quantities, since they quantify the tendency of specific biomolecules to partition spontaneously into phase-separated condensates.
### Mean-field models with pairwise interactions
The simplest theoretical descriptions of LLPS are based on mean-field models, which introduce effective parameters to describe how molecules interact with one another. A mean-field model prescribes an approximate free-energy surface in terms of the effective interaction parameters and the component concentrations. The most widely used mean-field models, both in the condensate literature and more generally in biophysics and materials science, make the assumption that the excess chemical potential of species \(i\) can be written in the form
\[\mu_{\text{ex},i}(\{\rho_{j}\})=\mu_{\text{v}}(\{\rho_{j}\})+\beta^{-1}\sum_{ j=1}^{N}B_{ij}\rho_{j}, \tag{3}\]
where \(\mu_{\text{v}}\) is a monotonically increasing function that depends only on the concentrations and the excluded volume associated with each molecular species. The second term embodies the assumption of "pairwise interactions" among the \(N\) non-solvent components, where \(\{B_{ij}\}\) is an \(N\times N\) symmetric matrix of interaction parameters. This assumption underlies the regular solution model of phase-separating mixtures [51], the Flory-Huggins model of homopolymer phase separation [19], and the van der Waals model of non-ideal fluids [52].

Figure 3: _Left_: Nonconvex free-energy surfaces lead to phase separation at thermodynamic equilibrium. The inflection points and global minima of the grand potential density, \(\Omega\equiv f-\sum_{i=1}^{N}\rho_{i}\mu_{i}\), determine the spinodal points and coexistence points, respectively. _Right_: Approximate phase diagrams can be obtained by computing the convex hull (solid line) of a discretized free-energy surface; points on the hull (filled circles) are in one-phase regions, while points not on the hull (empty and red-filled circles) are within a coexistence region. The approximate spinodal region can be determined by identifying points where the Hessian is not positive definite (red-filled circles). Approximate coexistence points can then be refined via nonlinear minimization (see text). This scheme generalizes to higher-dimensional concentration spaces.
The Flory-Huggins model [19] is commonly used to fit experimental data on biomolecular LLPS [53]. Assuming an incompressible fluid with \(N\) non-solvent species, the Flory-Huggins free-energy density is
\[\beta fv_{0}=\sum_{i=1}^{N}\frac{\phi_{i}}{L_{i}}\log\phi_{i}+\phi_{0}\log\phi _{0}+\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\epsilon_{ij}\phi_{i}\phi_{j}, \tag{4}\]
where the volume fraction occupied by species \(i\) is \(\phi_{i}\!=\!L_{i}v_{0}\rho_{i}\), the degree of polymerization of species \(i\) is \(L_{i}\), the size of a monomer is represented by \(v_{0}\), and the solvent-occupied volume fraction, \(\phi_{0}\), is determined by the incompressibility constraint, \(\sum_{i=0}^{N}\phi_{i}=1\). We note that, within the context of this model, the "solvent" may itself represent a mixture including non-interacting macromolecules. The interaction parameters \(\{\epsilon_{ij}\}\) are dimensionless. Negative interaction parameters imply that molecules attract one another, while positive interaction parameters imply repulsion. Homotypic and heterotypic interactions are encoded in the on- and off-diagonal elements of \(\{\epsilon_{ij}\}\), respectively. Eq. (4) is consistent with Eq. (3), since the interaction parameters only enter the free-energy density in a quadratic form. The contribution to the free-energy density from the pairwise interactions can also be written in terms of Flory \(\chi\) parameters, \(\chi_{ij}=\epsilon_{ij}-(\epsilon_{ii}+\epsilon_{jj})/2\), by extending the sums in the final term of Eq. (4) to include the solvent (component 0) and replacing \(\epsilon_{ij}\) with \(\chi_{ij}\). This change of variables introduces terms that are linear in \(\{\phi_{i}\}\) into the free-energy density, which have no effect on the phase behavior. With this alternate notation, the on-diagonal elements \(\{\chi_{ii}\}\) are zero by definition, and the homotypic interactions are encoded by the interactions with the solvent, \(\{\chi_{i0}\}\).
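As a concrete reference point for Eq. (4), the sketch below evaluates the Flory-Huggins free-energy density and converts interaction parameters to Flory \(\chi\) parameters among the non-solvent components; the interaction values are illustrative only.

```python
import numpy as np

def fh_free_energy_density(phi, L, eps):
    """Dimensionless Flory-Huggins free-energy density, beta*f*v0 of Eq. (4).

    phi : volume fractions of the N non-solvent species
    L   : degrees of polymerization L_i
    eps : N x N symmetric matrix of dimensionless interaction parameters
    """
    phi = np.asarray(phi, dtype=float)
    L = np.asarray(L, dtype=float)
    phi0 = 1.0 - phi.sum()            # solvent fraction from incompressibility
    assert phi0 > 0.0, "volume fractions must sum to less than one"
    entropy = np.sum(phi / L * np.log(phi)) + phi0 * np.log(phi0)
    return entropy + 0.5 * phi @ eps @ phi

def chi_from_eps(eps):
    """Flory chi parameters among the non-solvent species,
    chi_ij = eps_ij - (eps_ii + eps_jj)/2 (diagonal elements vanish)."""
    diag = np.diag(eps)
    return eps - 0.5 * (diag[:, None] + diag[None, :])

# Illustrative two-component example with attractive heterotypic interactions.
eps = np.array([[-1.0, -2.5],
                [-2.5, -1.0]])
print(fh_free_energy_density([0.1, 0.1], L=[10, 10], eps=eps))
print(chi_from_eps(eps))
```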
Two non-solvent components are sufficient to reveal generic effects of homotypic versus heterotypic interactions. In such a mixture, two distinct types of phase transitions can occur: A "condensation" transition can occur if attractive heterotypic interactions are comparable to or stronger than any attractive homotypic interactions, while a "demixing" transition can occur if the heterotypic interactions are significantly less attractive than one or both of the homotypic interactions [54]. Both behaviors have been observed in numerical investigations of two-component-plus-solvent mean-field (e.g., [55]) and molecular simulation models (e.g., [56]). Condensation transitions are analogous to LLPS in simple one-component-plus-solvent fluids, implying that the phase diagram can be fully described by projecting the concentrations onto the parent composition vector [54]. By contrast, mixtures with dissimilar homotypic and heterotypic interaction strengths have more complex phase diagrams. For example, the implications of this complexity for concentration buffering have recently been explored in Ref. [57] using a two-component-plus-solvent Flory-Huggins model. Concentration buffering was shown to be effective when the tie lines connecting the coexisting condensed and dilute phases are parallel to the concentration "noise distribution." This observation follows from the generalized lever rule, Eq. (1), with \(K=1\), which implies that fluctuations of the parent concentrations in the direction \(\vec{\rho}^{(1)}-\vec{\rho}^{(0)}\) only modify the volume fraction of the condensed phase, \(x^{(1)}\), leaving the "buffered" concentrations of both non-solvent species in the dilute phase unchanged.
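The generalized lever rule, Eq. (1), can be applied directly once the coexisting-phase concentrations are known: the bulk-phase volume fractions follow from a small linear system. A minimal sketch with hypothetical numbers, here for a two-component mixture with \(K=1\):

```python
import numpy as np

def phase_volume_fractions(rho_parent, rho_phases):
    """Volume fractions x_alpha of K+1 coexisting phases from Eq. (1).

    rho_parent : length-N parent concentration vector
    rho_phases : (K+1, N) array of coexisting-phase concentrations rho_i^(alpha)
    """
    rho_phases = np.asarray(rho_phases, dtype=float)
    n_phases = rho_phases.shape[0]
    # N lever-rule equations plus the normalization sum_alpha x_alpha = 1.
    A = np.vstack([rho_phases.T, np.ones(n_phases)])
    b = np.concatenate([np.asarray(rho_parent, dtype=float), [1.0]])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Hypothetical dilute and condensed phases; the parent lies on the tie line,
# so the solution is x = (0.7, 0.3).
rho_dilute = [0.010, 0.020]
rho_dense = [0.300, 0.250]
rho_parent = [0.097, 0.089]
print(phase_volume_fractions(rho_parent, [rho_dilute, rho_dense]))
```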
### Constructing phase diagrams of multicomponent mean-field models
Moving beyond two-solute scenarios, the construction of high-dimensional phase diagrams becomes considerably more challenging (Fig. 3). An elegant approach for solving this problem in mixtures with up to approximately five non-solvent components was provided in Ref. [58]. This method exploits the fact that the common tangent plane construction is equivalent to convexification of a non-convex free-energy surface. In this method, the free energy of a mean-field model is first evaluated at every point of an \(N\)-dimensional grid over the physical domain \(\phi_{i}\geq 0\,\forall i\) and \(\sum_{i=1}^{N}\phi_{i}\leq 1\). The volume fractions at each grid point and the corresponding free-energy value constitute a single point within an \((N+1)\)-dimensional space. The convex hull of all the points within this \((N+1)\)-dimensional space can then be determined using standard algorithms [59]. Importantly, grid points that lie within coexistence regions are _not_ part of the convex hull. Furthermore, the facets of the convex hull can be analyzed to determine the number of coexisting phases in a coexistence region. This algorithm can be used as a "black-box" method for identifying coexistence regions, up to the resolution specified by the concentration-space grid, for any mean-field model.
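A minimal illustration of the convex-hull construction for a two-solute Flory-Huggins model is sketched below using scipy.spatial.ConvexHull; the grid resolution and interaction parameters are placeholders, and the classification is only as fine as the grid.

```python
import numpy as np
from scipy.spatial import ConvexHull

def fh_f(p1, p2, eps):
    """Two-solute Flory-Huggins free-energy density (L_i = 1), units of kT/v0."""
    p0 = 1.0 - p1 - p2
    return (p1 * np.log(p1) + p2 * np.log(p2) + p0 * np.log(p0)
            + 0.5 * (eps[0, 0] * p1**2 + 2 * eps[0, 1] * p1 * p2 + eps[1, 1] * p2**2))

# Illustrative interaction matrix favoring demixing of the two solutes.
eps = np.array([[-4.5, 1.0],
                [1.0, -4.5]])

# Discretize the physical domain phi1, phi2 > 0, phi1 + phi2 < 1.
pts = []
for p1 in np.linspace(0.02, 0.96, 48):
    for p2 in np.linspace(0.02, 0.96, 48):
        if p1 + p2 < 0.98:
            pts.append((p1, p2, fh_f(p1, p2, eps)))
pts = np.array(pts)

hull = ConvexHull(pts)
# Keep vertices of the "lower" facets (outward normal pointing down in f),
# which make up the convex envelope of the free-energy surface.
on_envelope = set()
for facet, eq in zip(hull.simplices, hull.equations):
    if eq[2] < -1e-9:                 # f-component of the facet normal
        on_envelope.update(facet.tolist())
inside_coexistence = len(pts) - len(on_envelope)
print(f"{inside_coexistence} of {len(pts)} grid points lie inside coexistence regions")
```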
In order to perform coexistence calculations to greater precision, it is necessary to identify the coexistence chemical potential vector that results in a grand potential with multiple global minima. An efficient approach described in Refs. [29] and [60] involves an iterative two-step algorithm. First, assuming a fixed chemical potential vector, the local minima of the grand potential are identified using initial guesses of each of the coexisting-phase concentrations. Then, the chemical potential vector is adjusted to bring the variance among the values of the grand potential at these local minima to zero. This second step establishes the coexisting phases as global minima of the grand potential. It is advantageous to use estimates of the coexisting-phase concentrations obtained from the convex-hull method as initial guesses when performing these nonlinear minimizations. A similar approach, in which the initial guesses for the coexisting-phase concentrations are obtained from a grid-based search for the spinodal region, was proposed in Ref. [61].
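The two-step refinement can be sketched as follows for a symmetric two-solute Flory-Huggins mixture (parameters hypothetical): local minima of the grand potential are located at fixed chemical potentials, which are then adjusted until all minima share the same grand-potential value.

```python
import numpy as np
from scipy.optimize import minimize, root

# Hypothetical symmetric two-solute Flory-Huggins mixture (L_i = 1).
eps = np.array([[-5.0, 0.5],
                [0.5, -5.0]])

def f(phi):
    """Dimensionless Flory-Huggins free-energy density, Eq. (4), with L_i = 1."""
    phi0 = 1.0 - phi.sum()
    return np.sum(phi * np.log(phi)) + phi0 * np.log(phi0) + 0.5 * phi @ eps @ phi

def grand_potential(phi, mu):
    """Grand-potential density with exchange chemical potentials mu conjugate to phi."""
    return f(phi) - phi @ mu

def local_minimum(phi_guess, mu):
    """Step 1: a local minimum of the grand potential at fixed mu.  Optimizing over
    log-ratios keeps the volume fractions inside the physical domain."""
    to_phi = lambda t: np.exp(t) / (1.0 + np.exp(t).sum())
    t0 = np.log(np.asarray(phi_guess) / (1.0 - np.sum(phi_guess)))
    res = minimize(lambda t: grand_potential(to_phi(t), mu), t0,
                   method="Nelder-Mead", options={"xatol": 1e-12, "fatol": 1e-14})
    return to_phi(res.x)

def coexistence(mu_guess, phi_guesses):
    """Step 2: adjust mu until all local minima share the same grand potential."""
    def residual(mu):
        omegas = [grand_potential(local_minimum(g, mu), mu) for g in phi_guesses]
        return np.diff(omegas)               # vanishes at coexistence
    mu = root(residual, mu_guess, method="hybr").x
    return mu, [local_minimum(g, mu) for g in phi_guesses]

# Initial guesses (e.g., from a convex-hull scan): dilute plus two demixed dense phases.
guesses = [np.array([0.02, 0.02]), np.array([0.80, 0.05]), np.array([0.05, 0.80])]
mu, phases = coexistence(np.array([-2.3, -2.3]), guesses)
print("coexistence (exchange) chemical potentials:", mu)
for p in phases:
    print("phase volume fractions:", p)
```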
An alternative strategy for calculating phase coexistence has been provided in Ref. [62]. This method uses a nonphysical dynamical scheme, inspired by swapping molecules between metastable phases, in order to eliminate differences between the chemical potentials and the pressures of the phases. Starting from an initial guess of the component volume fractions in each of the \(K+1\) coexisting phases, the dynamical scheme evolves the volume fractions in each phase \(\alpha\) according to
\[\frac{\partial\phi_{i}^{(\alpha)}}{\partial t}=\phi_{i}^{(\alpha)}\beta\sum_{ \gamma=0}^{K}\Big{[}\phi_{i}^{(\gamma)}\Big{(}\mu_{i}^{(\gamma)}-\mu_{i}^{( \alpha)}\Big{)}+\Big{(}P^{(\gamma)}-P^{(\alpha)}\Big{)}\Big{]}, \tag{5}\]
where \(\{\mu_{i}^{(\alpha)}\}\) and \(P^{(\alpha)}\) are the component chemical potentials and the pressure, respectively, evaluated in the \(\alpha\) phase with the instantaneous volume fractions \(\{\phi_{i}^{(\alpha)}\}\), and \(t\) is the fictitious time associated with these dynamics. At steady state, when \(\partial\phi_{i}^{(\alpha)}/\partial t=0\), Eq. (5) ensures that the phases meet the thermodynamic criteria for coexistence. Crucially, the results of this numerical approach, like the nonlinear minimization scheme described above, depend sensitively on the initial guesses for the coexisting-phase concentrations. In particular, if a candidate phase is not represented in the \(K+1\) initial concentration vectors, then it is unlikely to be captured in the final set of coexisting phases.
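A forward-Euler sketch of the exchange dynamics in Eq. (5) is given below for the same illustrative Flory-Huggins mixture; chemical potentials and osmotic pressures are written in units of \(k_{\text{B}}T\), so \(\beta\) is absorbed, and the step size and iteration count are arbitrary choices.

```python
import numpy as np

# Illustrative symmetric two-solute Flory-Huggins mixture (L_i = 1).
eps = np.array([[-5.0, 0.5],
                [0.5, -5.0]])

def mu(phi):
    """Chemical potentials (units of kT) for the Flory-Huggins model with L_i = 1."""
    phi0 = 1.0 - phi.sum()
    return np.log(phi) - np.log(phi0) + eps @ phi

def pressure(phi):
    """Osmotic pressure (units of kT/v0) for the same model."""
    phi0 = 1.0 - phi.sum()
    return -np.log(phi0) + 0.5 * phi @ eps @ phi

def evolve(phi_phases, dt=1e-3, n_steps=100_000):
    """Forward-Euler integration of the exchange dynamics, Eq. (5)."""
    phi = np.array(phi_phases, dtype=float)          # shape (K+1, N)
    for _ in range(n_steps):
        mus = np.array([mu(p) for p in phi])
        prs = np.array([pressure(p) for p in phi])
        dphi = np.zeros_like(phi)
        for a in range(len(phi)):
            for g in range(len(phi)):
                dphi[a] += phi[a] * (phi[g] * (mus[g] - mus[a]) + (prs[g] - prs[a]))
        phi += dt * dphi
    return phi

# Initial guesses for three candidate phases: dilute, solute-1-rich, solute-2-rich.
phases = evolve([[0.02, 0.02], [0.80, 0.05], [0.05, 0.80]])
print(phases)   # at steady state all phases share equal chemical potentials and pressure
```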
### Multicomponent phase coexistence in molecular simulation models via free-energy calculations
Efficient approaches for calculating coexistence among an arbitrary number of fluid phases have also been devised for molecular simulation models. Such models specify a potential energy function that depends on the coordinates of all particles in the simulation volume. As such, Monte Carlo or molecular dynamics (MD) simulation methods are required to sample the configurational phase space. A wide variety of methods are available for computing coexistence between pairs of phases [63]. Within the condensate literature, direct coexistence simulations utilizing a "slab geometry" [44] have become popular due to the ease with which this approach can be implemented. However, in order to compute phase coexistence among a larger number of phases, it is advantageous to work in the grand-canonical ensemble. Grand-canonical phase-coexistence calculations are also ideal for minimizing finite size effects [64].
A robust approach for carrying out multiphase coexistence calculations utilizes a generalization of the multicanonical sampling method [65]. Influenced by earlier simulations of polydisperse fluids [66], Ref. [67] introduced a method to sample an isolated pair of phases in a grand-canonical simulation with multiple free-energy basins (Fig. 4). First, an order parameter \(\Delta\rho_{\alpha\beta}\equiv(\vec{\rho}-\vec{\rho}^{(\alpha)})\cdot\hat{ \nu}_{\alpha\beta}\), where \(\hat{\nu}_{\alpha\beta}\equiv(\vec{\rho}^{(\beta)}-\vec{\rho}^{(\alpha)})/| \vec{\rho}^{(\beta)}-\vec{\rho}^{(\alpha)}|\), is defined to measure the distance along a linear path between the \(\alpha\) and \(\beta\) phases, with concentration vectors \(\vec{\rho}^{(\alpha)}\) and \(\vec{\rho}^{(\beta)}\), respectively. A biasing potential is then added to constrain fluctuations in orthogonal directions of concentration space,
\[U_{\perp}(\vec{\rho})\equiv k_{\perp}\big{|}(\vec{\rho}-\vec{\rho}^{(\alpha)} )-[(\vec{\rho}-\vec{\rho}^{(\alpha)})\cdot\hat{\nu}_{\alpha\beta}]\hat{\nu}_{ \alpha\beta}\big{|}^{p_{\perp}}, \tag{6}\]
where \(k_{\perp}>0\) and \(p_{\perp}>0\) are user-defined constants. An additional biasing potential in the direction of concentration space parallel to \(\hat{\nu}_{\alpha\beta}\), \(U_{\parallel}(\Delta\rho_{\alpha\beta})\), can then be calculated using grand-canonical Wang-Landau simulations [68],
\[\beta U_{\parallel}(\Delta\rho^{\prime})=\text{log}\!\int\!d\mathbf{x}\,\mathbf{1} _{\Delta\rho_{\alpha\beta}[\vec{\rho}(\mathbf{x})],\Delta\rho^{\prime}}e^{-\beta \mathcal{H}(\mathbf{x})-\beta U_{\perp}[\vec{\rho}(\mathbf{x})]}, \tag{7}\]
where \(\mathbf{x}\) represents a particle configuration, \(\mathcal{H}\) is the Hamiltonian of the unbiased model, and \(\mathbf{1}\) is the indicator function. The biasing potential \(U_{\parallel}\) is optimal for "flattening" the free-energy barrier between the \(\alpha\) and \(\beta\)-phase regions of phase space [68]. Finally, performing a multicanonical simulation under the combined potential \(\mathcal{H}+U_{\perp}+U_{\parallel}\) allows the simulation to transit reversibly between the \(\alpha\) and \(\beta\) phases.
Refs. [69] and [60] have demonstrated how this method can be applied to calculate multiphase coexistence points for multicomponent lattice models. Samples obtained from multicanonical simulations between different pairs of phases can be combined via reweighting methods such as MBAR [70] as long as one of the phases is sampled in every simulation. Grand potential differences between all pairs of phases can then be determined, and the chemical potentials can be adjusted in order to find the coexistence point at which all phases have identical pressures at equilibrium. This approach has been successfully applied to compute coexistence points involving more than five phases. Nonetheless, this method also requires prior knowledge of the approximate concentrations of all phases in order to construct the required biasing potentials and sample all the coexisting phases.
Figure 4: Multiphase coexistence points can be determined from molecular simulations by sampling the grand-potential landscape. In order to sample two specific phases \(\alpha\) and \(\beta\), biasing potentials parallel, \(U_{\parallel}\), and perpendicular, \(U_{\perp}\), to \(\hat{\nu}_{\alpha\beta}\) are introduced. Reweighting techniques can then be used to tune the component chemical potentials in order to establish equal grand potentials among all coexisting phases (see text).
## III Predicting and designing phase behavior in multicomponent fluids
We now turn to theoretical studies of mixtures governed by pairwise interactions. We first discuss efforts to predict phase behavior in mixtures with hundreds or thousands of components based on the statistical properties of the pairwise interactions. We then describe recently devised methods to design or "evolve" pairwise interactions in order to stabilize a target phase diagram.
### Multicomponent mixtures with random pairwise interactions
Pairwise interaction models, due to their simplicity, are a natural place to begin exploring how the presence of many distinct molecular components influences the phase behavior of a mixture. However, theoretical progress cannot be made without specifying the form of the interaction matrix, and limited systematic experimental data exist for parameterizing heterotypic interactions. To deal with this lack of information, Ref. [71] proposed that the pairwise interactions can be modeled using a random matrix. Specifically, Ref. [71] considered symmetric random matrices in which the elements are chosen independently from a Gaussian distribution with a prescribed mean and standard deviation. An ensemble of "random mixtures" is thus associated with a particular Gaussian distribution and the number of distinct components \(N\), such that each mixture in the ensemble is defined by a particular realization of the \(N\times N\) interaction matrix.
Ref. [71] assumed for simplicity that the mixture free-energy density can be described by Eq. (3) with \(\mu_{\rm v}=0\). The resulting free-energy density, \(f\), is applicable to solutions in which all components are present at low concentrations, and the \(\{B_{ij}\}\) elements in Eq. (3) are referred to as second-virial coefficients [52]. By restricting the study to mixtures with equimolar parent concentrations, \(\rho_{i}^{\rm(parent)}=\bar{\rho}^{\rm(parent)}\,\forall i\), it was shown that the spinodal locus can be predicted directly from the second-virial matrix. The central idea is that unstable concentration fluctuations can be determined from a linear stability analysis of the mean-field free-energy landscape (Fig. 5). With the equimolar parent-concentration assumption, the eigenvalue spectrum of the Hessian matrix, \(\partial^{2}f/\partial\rho_{i}\partial\rho_{j}\), is equal to the spectrum of \(\{B_{ij}\}\) plus a constant \(1/\bar{\rho}^{\rm(parent)}\). Instabilities therefore occur when the minimum eigenvalue of \(\{B_{ij}\}\) is less than \(-1/\bar{\rho}^{\rm(parent)}\). Applying results from random matrix theory, it was shown that the existence and nature of the dominant instability, which coincides with the minimum eigenvalue of the Hessian matrix, can be determined from the mean, \(b\), and standard deviation, \(\sigma\), of the Gaussian distribution of matrix elements in the limit of large \(N\). Two distinct cases were observed. If the standard deviation among the matrix elements is sufficiently small, such that \(N^{1/2}b/\sigma\lesssim-1\), then the dominant instability involves concentration fluctuations that are parallel to the equimolar parent concentration vector. This type of instability is consistent with a condensation transition driven by similar homotypic and heterotypic interaction strengths. By contrast, if the standard deviation among the matrix elements is sufficiently large, such that \(N^{1/2}b/\sigma\gtrsim-1\), then the dominant instability is orthogonal to the parent concentration vector, and individual components demix into phases with differing compositions. Importantly, these behaviors are self-averaging, meaning that the tendency of any particular random-mixture realization to undergo a condensation or demixing transition converges in probability as \(N\to\infty\).
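The linear-stability criterion described above is easy to reproduce numerically: draw a symmetric Gaussian interaction matrix, compare its minimum eigenvalue with \(-1/\bar{\rho}^{\rm(parent)}\), and classify the dominant instability heuristically. The parameter values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_interaction_matrix(n, b, sigma):
    """Symmetric matrix whose independent entries (diagonal and upper triangle)
    are Gaussian with mean b and standard deviation sigma."""
    a = rng.normal(b, sigma, size=(n, n))
    return np.triu(a) + np.triu(a, 1).T

def spinodal_unstable(B, rho_parent):
    """Equimolar linear-stability check: unstable if min eig(B) < -1/rho_parent."""
    return bool(np.linalg.eigvalsh(B).min() < -1.0 / rho_parent)

n, b, sigma, rho = 200, -0.05, 0.3, 2.0
B = random_interaction_matrix(n, b, sigma)
print("spinodally unstable:", spinodal_unstable(B, rho))
# Heuristic classification of the dominant instability in the large-N limit:
print("condensation-like" if np.sqrt(n) * b / sigma < -1 else "demixing-like")
```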
Ref. [72] extended these results to mixtures with non-equimolar parent compositions. This work considered a regular-solution free-energy density, in which \(\mu_{\rm v}=-\log\rho_{0}\) in Eq. (3). This additional contribution to the free energy accounts for the entropy of the solvent, providing a better physical model of solutions at non-dilute concentrations. Modifying the free-energy density in this way does not qualitatively alter the conclusions of Ref. [71] regarding condensation and demixing in equimolar mixtures. However, consideration of non-equimolar parent compositions reveals a third type of spinodal instability: Demixing transitions can now be classified as either "random," in which all the components of the eigenvector associated with the instability are of similar order, or "localized," in which the demixing transition is dominated by only a few species. Random mixtures with a large interaction-parameter variance and equimolar parent compositions tend to undergo random demixing. By contrast, mixtures in which one component has a much higher parent concentration than all the others can undergo a composition-driven transition, in which demixing is localized to the dominant species. The authors emphasized that the direction of composition-driven instabilities cannot be predicted simply from the relative parent concentrations of the components; instead, the interplay between entropic effects and random pairwise interactions tends to amplify the contribution of the dominant component to the unstable concentration fluctuations. In other words, the nature of the instabilities at the spinodal locus of a random mixture depends on both the interaction matrix and the parent concentrations.

Figure 5: The spinodal locus, where the mixture becomes unstable with respect to concentration fluctuations, can be predicted using a linear stability analysis. _Left:_ Computing the eigenspectrum of the Hessian matrix, \(\partial^{2}f/\partial\rho_{i}\partial\rho_{j}\), at the parent concentrations reveals the number of unstable modes, each of which is associated with an orthogonal direction in concentration space. _Right:_ Analytical predictions in the large-\(N\) limit provide insight into the relationship between the structure and statistical properties of an interaction matrix and the phase behavior of the associated biomolecular mixture.
Simulation support for the qualitative predictions of Ref. [71] was provided in Refs. [67] and [54]. In these studies, the free-energy calculation strategy described in Sec. II.3 was applied to compute coexistence between an equimolar dilute phase and a condensed phase in random mixtures with up to 64 non-solvent components. Simulations were conducted using a multicomponent lattice model, with the nearest-neighbor interactions between particles on the lattice specified by a random interaction matrix generated according to the Gaussian prescription of Ref. [71]. Coexistence calculations were then performed to investigate the nature of the phase transition that occurs at the lowest total parent concentration, meaning that the simulated coexistence point represents the lowest-concentration intersection of the equimolar parent concentration vector with any coexistence region. The average phase behavior of the random-mixture ensemble was analyzed by repeating these calculations for many independent realizations of random mixtures with the same interaction mean and variance.
Although the lattice-based coexistence calculations of Refs. [67] and [54] are not directly comparable to theoretical predictions regarding instabilities at the spinodal locus, analogous condensation and demixing transitions were observed in this molecular simulation model. First, the phase behavior at each simulated coexistence point was classified as condensation or demixing according to the angle, \(\theta\), between the equimolar parent concentration vector and the unit vector connecting the coexisting phases, \(\hat{\nu}_{\alpha\beta}\). This angle was found to be self-averaging with respect to the number of components, \(N\), as suggested by random matrix theory [67]. Second, Ref. [54] observed that the distribution of \(\theta\) is bimodal, signifying a sharp transition between these two qualitatively distinct types of phase transitions as the mean and/or variance of the random-interaction distribution was changed. Third, increasing the number of components was found to shift the phase behavior at the simulated coexistence points towards condensation transitions, in line with the predictions of Ref. [71]. This finding implies that the mixing entropy of multicomponent fluids acts to suppress demixing instabilities. However, by contrast with Ref. [71], simulation results indicated that the extreme values of the interaction matrix are more predictive of the simulated coexistence concentrations than the eigenspectrum of the mean-field Hessian matrix. This observation was exploited to propose a scaling relation for the transition between condensation and demixing behaviors at the phase boundary, \((\log N)^{1/2}\sim\sigma\), that differs from the random-matrix-theory prediction for the condensate-demixing crossover at the spinodal locus, \(N^{1/2}\sim\sigma/b\). This idea has since been followed up in Ref. [73], which suggested that the coexistence points can be strongly influenced by the tails of the distribution from which the elements of the random interaction matrix are chosen.
Phase separation in mean-field models of mixtures with many components has also been analyzed using phase-field simulations [74]. Deterministic phase-field simulations evolve the spatially varying component volume fractions, \(\{\phi_{i}(\vec{r})\}\), on a three-dimensional grid in accordance with linear irreversible thermodynamics [75]. As such, phase-field simulations reach a steady state when the free energy of the simulated volume reaches a local minimum; this steady state may be spatially inhomogeneous if phase separation occurs. Ref. [74] considered a regular-solution free-energy density consistent with Eq. (3), with \(\mu_{\text{v}}=-\log\phi_{0}(\vec{r})-\kappa\nabla^{2}\phi_{i}(\vec{r})\). The second term in \(\mu_{\text{v}}\), which penalizes the formation of interfaces between phases in a component-independent manner, arises from square-gradient contributions to a Cahn-Hilliard free-energy functional with \(\kappa>0\)[76]. Simulations then implemented "Model B dynamics" [77], where \(\partial\phi_{i}/\partial t=\nabla\cdot(M\phi_{i}\nabla\mu_{i})\), with a component-independent mobility coefficient \(M>0\). Upon reaching steady state, compositionally distinct phases were identified by performing a principal component analysis of the spatially varying component concentrations.
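A one-dimensional caricature of these Model B simulations, for a regular-solution free energy with a square-gradient term and two solutes that demix from one another, is sketched below; all parameters (grid, time step, interaction matrix) are illustrative, and a production calculation would be three-dimensional as in Ref. [74].

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters: two solutes that demix from one another (and the solvent).
eps = np.array([[-5.0, 0.5],
                [0.5, -5.0]])
kappa, M, dx, dt = 1.0, 1.0, 1.0, 1e-3
n_grid, n_steps = 128, 50_000

def laplacian(a):
    return (np.roll(a, 1, axis=-1) - 2 * a + np.roll(a, -1, axis=-1)) / dx**2

def model_b_rhs(phi):
    """Conserved 'Model B' dynamics: d(phi_i)/dt = d/dx [ M phi_i d(mu_i)/dx ]."""
    phi0 = 1.0 - phi.sum(axis=0)
    mu = (np.log(phi) - np.log(phi0)         # regular-solution entropy
          + eps @ phi                         # pairwise interactions
          - kappa * laplacian(phi))           # square-gradient (interface) penalty
    # Fluxes evaluated at cell faces, with periodic boundaries.
    phi_face = 0.5 * (phi + np.roll(phi, -1, axis=-1))
    grad_mu = (np.roll(mu, -1, axis=-1) - mu) / dx
    flux = -M * phi_face * grad_mu
    return -(flux - np.roll(flux, 1, axis=-1)) / dx

# Start from a noisy homogeneous state inside the spinodal region.
phi = 0.25 + 0.02 * rng.standard_normal((2, n_grid))
for _ in range(n_steps):
    phi += dt * model_b_rhs(phi)

print("steady-state ranges:", phi.min(axis=1).round(3), phi.max(axis=1).round(3))
```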
Since phase separation in a deterministic phase-field model proceeds via spinodal decomposition, Ref. [74] was able to provide a direct test of the analytical predictions of Ref. [71]. Both condensation and demixing were observed in simulations initialized with equimolar parent concentrations. Consistent with a linear stability analysis at these initial conditions (Fig. 5), Ref. [74] found that the number of phases identified at steady state correlates with the number of negative eigenvalues of the Hessian matrix. Furthermore, the number of steady-state phases could be estimated from the limiting (\(N\to\infty\)) spectral density predicted by random matrix theory. This trend was shown to hold for a variety of random-mixture ensembles in which the standard deviation of the independently sampled interaction-matrix elements was either held constant or scaled proportionally to \(N^{1/2}\). Nonetheless, some caution is warranted in interpreting these results, since the steady-state found via spinodal decomposition may reflect a metastable configuration that does not represent all the equilibrium phases. We shall return to this important consideration below in Sec. III.3.
### Multicomponent mixtures with structured pairwise interactions
Although random-mixture models are useful for investigating generic features of high-dimensional phase diagrams, they may not reflect the structure of pairwise interactions among real biomolecules. In particular, the assumption that the elements of a \(\{B_{ij}\}\) matrix are independently and identically distributed implies that \(\mathcal{O}(N^{2})\) pairwise coefficients characterize the mixture, even though there are only \(N\) chemically distinct biomolecules. Physical interactions arising from the physicochemical features of the biomolecules are instead likely to introduce correlations into the \(\{B_{ij}\}\) matrix.
To address this critical issue, "structured" pairwise interaction models have been introduced and studied using linear stability analysis. Ref. [78] took the approach of grouping components into distinct families, whereby all members within a particular family have similar physicochemical properties. The authors proposed that this relationship could be described by an interaction matrix of the form \(B=D+C*Z\), where \(*\) indicates element-wise multiplication. \(D\) and \(C\) are block matrices specifying the mean and standard deviation of the interactions between families, respectively, while \(Z\) is a Gaussian random matrix with zero mean and unit variance. This model reduces to the random-mixture model of Ref. [71] when there is only one family, in which case all interactions have the same mean and variance. Intuitively, a single family of components can demix from a mixture with equimolar parent concentrations if the intra-family interactions are sufficiently more attractive than inter-family interactions. Such "family demixing" tends to dominate over random demixing when the noise amplitude, governed by \(C\), is small.
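Constructing such a family-structured interaction matrix is straightforward; the sketch below builds \(B=D+C*Z\) for a hypothetical two-family mixture and reports its most negative eigenvalues, which control the spinodal instability discussed above.

```python
import numpy as np

rng = np.random.default_rng(2)

def structured_interaction_matrix(family_sizes, D_fam, C_fam):
    """Build B = D + C * Z (element-wise product), where D and C are block matrices
    assigning a mean and standard deviation to each pair of families, and Z is a
    symmetric standard-normal matrix."""
    labels = np.repeat(np.arange(len(family_sizes)), family_sizes)
    D = D_fam[np.ix_(labels, labels)]           # block matrix of means
    C = C_fam[np.ix_(labels, labels)]           # block matrix of standard deviations
    z = rng.standard_normal((len(labels), len(labels)))
    Z = np.triu(z) + np.triu(z, 1).T            # symmetric, unit-variance noise
    return D + C * Z

# Hypothetical example: two families of 20 components each, with intra-family
# attraction stronger than inter-family attraction.
D_fam = np.array([[-2.0, -0.5],
                  [-0.5, -2.0]])
C_fam = np.full((2, 2), 0.1)
B = structured_interaction_matrix([20, 20], D_fam, C_fam)
print(B.shape, np.linalg.eigvalsh(B)[:3])       # the most negative eigenvalues
```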
Ref. [79] explored an alternative approach in which structured interaction matrices are assumed to have a low matrix rank. This assumption implies that the interaction matrix can be written in the form \(B_{ij}=\sum_{l=1}^{r}c^{(l)}s_{i}^{(l)}s_{j}^{(l)}\), where the index \(l\) is bounded by the matrix rank, \(r\). This low-rank decomposition was inspired by a toy model in which each molecular species can be described by \(r\) "molecular features," which interact according to diagonalized coupling coefficients \(\{c^{(l)}\}\). The matrix \(\{s_{i}^{(l)}\}\) specifies the value of each molecular feature for each component \(i\). In fact, any \(N\times N\) interaction matrix can be written in this form via eigendecomposition, assuming that \(N-r\) of its eigenvalues are negligible. If all the nonzero eigenvalues of \(\{B_{ij}\}\) are negative, representing net attractive interactions among the molecular features, then the linear-stability condition for the spinodal locus can be recast in terms of a feature covariance matrix. Specifically, this rank-\(r\) matrix measures the covariance among the values of the molecular features, weighted by the concentrations of the components expressing these features, in a homogeneous mixture with fixed parent concentrations. The directions of the unstable concentration fluctuations can then be determined from the first principal component of the concentration-weighted molecular-feature distribution. This result bears resemblance to related studies of polydisperse fluids, in which phase transitions have been predicted using so-called "moment free energies" [80; 81]. When \(\{B_{ij}\}\) has both positive and negative eigenvalues, covariance matrices for the net-attractive and net-repulsive molecular features must be considered separately. The extent to which the net-repulsive features modify the phase behavior depends on whether their concentration-weighted distribution correlates with that of the net-attractive feature distribution. The authors also showed that this analysis can be extended to predict ordinary and higher-order critical points, whose occurrence depends on higher-order cumulants of the concentration-weighted feature distribution.
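The low-rank ansatz is easy to illustrate numerically: a feature matrix \(\{s_{i}^{(l)}\}\) and coupling coefficients \(\{c^{(l)}\}\) generate a rank-\(r\) interaction matrix, and eigendecomposition of any symmetric matrix recovers such a representation when truncated to its dominant eigenvalues. The values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

n, r = 50, 3                                   # components and molecular features
s = rng.standard_normal((n, r))                # feature values s_i^(l)
c = np.array([-1.0, -0.5, 0.2])                # diagonalized coupling coefficients

# Rank-r interaction matrix B_ij = sum_l c^(l) s_i^(l) s_j^(l)
B = (s * c) @ s.T
print("rank:", np.linalg.matrix_rank(B))       # -> 3

# Conversely, eigendecomposition of any symmetric B yields such a representation,
# truncated here to the r dominant eigenvalues.
w, v = np.linalg.eigh(B)
idx = np.argsort(np.abs(w))[::-1][:r]
B_lowrank = (v[:, idx] * w[idx]) @ v[:, idx].T
print("reconstruction error:", np.abs(B - B_lowrank).max())
```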
An important insight gained from this theory [79] is that the phase behavior of a mixture can be predicted by analyzing properties of the \(r\)-dimensional feature space, which may be much simpler than the \(N\)-dimensional concentration space if \(r\ll N\). Since intermolecular interactions among conformationally disordered biomolecules are widely believed to arise from a limited number of chemical interactions, such as electrostatic interactions among charged amino acids and hydrophobic forces involving amino acids with aromatic side chains, it is plausible that this is indeed the case. The relationship between this ansatz and findings from sequence-dependent theories will be discussed in Sec. IV. The work of Ref. [79] has also suggested a useful method for coarse-graining a multicomponent fluid into an equivalent binary mixture with the same spinodal and critical points by preserving the second and third cumulants along the first principal component of the concentration-weighted feature distribution. However, it is unclear whether the coexistence manifolds of multicomponent mixtures with low-rank interaction matrices can be simplified in the same way.
### Iterative design of multicomponent phase behavior
Taking the next step towards biologically realistic mixtures requires consideration of specific interactions that have emerged due to evolutionary processes. Recent efforts [60; 62; 69] to explore the thermodynamic consequences of evolved interaction specificity have shown that multicomponent mixtures can be designed with the goal of stabilizing a prescribed number of condensed phases. The logic behind this approach is that the immense size of the space of possible biomolecular interactions limits the probability that a random-mixture model will produce a phase diagram comparable to the observed complexity of intracellular phase-separated condensates. Indeed, even in the simplest pairwise-interaction models, the "design space" has a dimension of \(N(N+1)/2\) when all interactions are independently controllable. By contrast, treating multicomponent LLPS as an optimization problem in which the interactions can be systematically tuned has the potential to discover regions of this design space that are relevant to multiphasic condensates.
Ref. [62] demonstrated that the number of coexisting phases in a mean-field pairwise-interaction model can be designed by iterative application of a genetic algorithm. This design process necessitates finding all coexisting phases given a candidate interaction matrix at each iteration. The genetic algorithm is then applied to evolve a population of interaction matrices in order to identify matrices that result in a target "phase count" of condensed phases. It turns out that this goal is surprisingly easy to achieve owing to the size of the design space when all pairwise interactions are independently tunable. An intuitive strategy of designing block-diagonal matrices, along the lines of Ref. [78], reliably results in phase counts equal to the number of blocks of strongly attractive interactions. However, the genetic algorithm finds solutions to this design problem that are less obviously structured. The authors further showed that designed mixtures with low phase counts tend to be stable with respect to small random perturbations in the interaction energies and that the genetic algorithm can rapidly alter the phase count of a designed mixture, finding new solutions within a few tens or hundreds of iterations.
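The structure of such an iterative design loop is sketched below. For the phase count we substitute a deliberately crude proxy (one plus the number of unstable Hessian modes at an equimolar parent composition, cf. the correlation noted for the phase-field simulations above) so that the example runs on its own; the actual procedure of Ref. [62] evaluates the full set of coexisting phases at each iteration, and all hyperparameters here are placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)
N, PHI_PARENT = 8, 0.1          # number of components; equimolar parent volume fraction

def phase_count_proxy(eps):
    """Crude stand-in for a full phase-count calculation: one plus the number of
    unstable (negative-eigenvalue) modes of the Flory-Huggins Hessian at the
    equimolar parent composition.  A production design loop would instead call a
    phase-coexistence solver such as the convex-hull routine sketched earlier."""
    phi0 = 1.0 - N * PHI_PARENT
    hessian = np.diag(np.full(N, 1.0 / PHI_PARENT)) + 1.0 / phi0 + eps
    return 1 + int((np.linalg.eigvalsh(hessian) < 0).sum())

def mutate(eps, scale=1.0):
    noise = rng.normal(0.0, scale, size=eps.shape)
    noise = np.triu(noise) + np.triu(noise, 1).T      # keep the matrix symmetric
    return eps + noise

def genetic_design(target, pop_size=40, n_gen=100):
    pop = [mutate(np.zeros((N, N)), scale=6.0) for _ in range(pop_size)]
    best = pop[0]
    for _ in range(n_gen):
        scores = np.array([abs(phase_count_proxy(e) - target) for e in pop])
        order = np.argsort(scores)
        best = pop[order[0]]
        if scores[order[0]] == 0:
            break
        parents = [pop[i] for i in order[: pop_size // 4]]     # truncation selection
        pop = parents + [mutate(parents[rng.integers(len(parents))])
                         for _ in range(pop_size - len(parents))]
    return best

eps_designed = genetic_design(target=3)
print("proxy phase count of the designed matrix:", phase_count_proxy(eps_designed))
```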
This iterative design approach comes with a number of caveats, however. First, optimizing for a target phase count does not guarantee that different solutions identified by the genetic algorithm correspond to condensates with similar molecular compositions. Second, although the phase count of a candidate interaction matrix should depend on the parent concentrations according to Eq. (1), Ref. [62] employed a strategy of sampling coexistence points at random parent concentrations. This approach suggests an implicit design goal of maximizing the volume of the \((K+1)\)-phase coexistence region within the \(N\)-dimensional concentration space. Third, the reliability and performance of the iterative design algorithm are sensitive to the accuracy and computational cost, respectively, of the intermediate phase-coexistence calculations, which must be repeated for each candidate interaction matrix. This is in fact a very general problem: Regardless of the mixture model, phase-coexistence calculations first require a search for candidate phases, whether by exhaustive grid-based sampling (e.g., [58]; see Sec. II.2), randomized initial conditions (e.g., [62]; see Sec. II.2), Monte Carlo sampling (e.g., [54]; see Sec. II.3), or physical dynamics (e.g., [74]; see Sec. III.1). The computational cost of this search problem scales exponentially with the dimension of the concentration space.
### Inverse design of multicomponent phase behavior
Many of the drawbacks of iterative design approaches can be overcome by directly solving the _inverse problem_--designing interactions to yield target phase behavior. Inverse design entails working out constraints on the solution space of biomolecular interactions that correspond to desired collective properties, such as the compositions of condensed phases (Fig. 6). Suitable interactions can be identified in this way without explicitly performing phase-coexistence calculations. As a result, the computational requirements may scale more favorably with the number of components, in particular because the initial search for candidate phases can be avoided.
An inverse design strategy for mixtures with pairwise interactions was first introduced in Ref. [69]. Because Eq. (3) is linear with respect to \(\{B_{ij}\}\), the inverse problem can be solved approximately using a convex relaxation. It is therefore possible to prove, within the convex relaxation, whether a pairwise interaction matrix exists for a prescribed set of immiscible phases, and if so, to calculate a suitable interaction matrix with efficient convex programming algorithms [82]. Ref. [69] showed that the thermodynamic requirements for establishing metastable phases with prescribed compositions yield a convex relaxation known as a semidefinite program (SDP). The SDP constraints comprise both affine and eigenvalue inequalities, since the Hessian matrix must be positive definite in each target phase. Solutions to this SDP were shown to result in metastable phases with the desired compositions in mixtures with up to 200 distinct components, both in the context of a Flory-Huggins mean-field model and in Monte Carlo simulations of an associated multi-component lattice model.
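The flavor of this convex relaxation can be conveyed with a small cvxpy sketch for a Flory-Huggins mixture with \(L_{i}=1\): the stationarity conditions at the target compositions are affine in the interaction matrix and the chemical potentials, while local stability is a semidefinite constraint on the Hessian. This is an illustration in the spirit of Ref. [69] rather than its exact formulation; it enforces metastability of the targets only, and the target compositions and regularization are arbitrary choices.

```python
import numpy as np
import cvxpy as cp

# Target compositions (volume fractions) for a three-component Flory-Huggins mixture
# with L_i = 1: a dilute phase and a condensed phase enriched in components 1 and 2.
targets = np.array([[0.02, 0.02, 0.02],
                    [0.45, 0.45, 0.02]])
N = targets.shape[1]

eps = cp.Variable((N, N), symmetric=True)   # design variable: interaction matrix
mu = cp.Variable(N)                         # exchange chemical potentials at coexistence
constraints = []
for phi in targets:
    phi0 = 1.0 - phi.sum()
    # Stationarity of the grand potential at each target composition (affine in eps, mu).
    constraints.append(np.log(phi) - np.log(phi0) + eps @ phi == mu)
    # Local stability: the free-energy Hessian must be positive (semi)definite there.
    hessian = np.diag(1.0 / phi) + np.full((N, N), 1.0 / phi0) + eps
    constraints.append(hessian >> 1e-3 * np.eye(N))

# Regularization heuristic: choose the smallest-norm matrix in the feasible set.
prob = cp.Problem(cp.Minimize(cp.norm(eps, "fro")), constraints)
prob.solve()
print(prob.status)
print(np.round(eps.value, 2))
```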
Exploiting the ability to prove feasibility of the SDP, Ref. [69] then studied the probability of finding a feasible solution for an inverse problem with randomly assigned target-phase compositions. This probability was found to drop sharply beyond a certain number of target phases, revealing a thresholding transition reminiscent of the storage capacity in the Hopfield model of neural networks [83] and "multifarious" self-assembly of finite-sized structures [84]. The critical number of condensed phases associated with this thresholding transition could be predicted using graph-theoretic arguments and depends on both the number of components in the mixture and the fraction of components whose concentrations are enriched in each target phase relative to the surrounding fluid.

Figure 6: In the inverse design approach, restrictions on the solution space of pairwise interaction matrices are determined directly from the concentrations of the target phases and the thermodynamic criteria for phase coexistence. _Left_: The target phase diagram consisting of condensed phases \(\{\vec{\phi}^{(\alpha)}\}\). Any mixture with parent concentrations inside the convex hull of the target phases will phase-separate at equilibrium to establish coexisting phases with the prescribed concentrations. _Right:_ Convex programming can be applied to compute the subspace containing interaction matrices that are consistent with the target phase behavior. The convex volume (red dashed line) bounded by the convex-optimization constraints (black lines) closely approximates the solution space to the inverse problem (red solid line). Because many interaction matrices may yield the same phase behavior, regularization is needed to select a particular matrix from the solution space.
A similar convex optimization approach was then applied to design mixtures with prescribed equilibrium phases [60]. A two-step procedure for designing pairwise interaction matrices was proposed. First, a convex relaxation was used to specify an SDP for both the interaction matrix and the approximate coexistence chemical potential vector. Then, the chemical potentials were adjusted to ensure coexistence among the target phases using the nonlinear algorithm described in Sec. II.2. A regularization heuristic was also introduced to pick out a unique interaction matrix from within the solution space, eliminating competing condensed phases that were not specified in the phase-diagram design problem. Applying this approach to the Flory-Huggins model, Eq. (4), Ref. [60] provided numerical evidence that while the feasibility of the SDP is independent of the degree of polymerization, the convex relaxation becomes a better approximation of the phase-diagram design problem as the degree of polymerization increases (Fig. 6). Interestingly, coexistence regions with more condensed phases than distinct mixture components can be designed in this way. Furthermore, this inverse design approach is easily extended to include additional optimization goals or constraints on the interactions; for example, it is possible to compute the minimum number of matrix-elements that must be changed in order to switch from one phase diagram to another using this method. Ref. [60] also demonstrated that by mapping interaction matrices to molecular pair potentials, interactions designed using mean-field models can be used to establish coexistence among phases with prescribed compositions in molecular simulation models.
In another application of inverse design, Ref. [85] devised an algorithm to engineer pairwise interactions that produce phase-separated condensates with target morphologies, such as those observed in the nucleolus [40]. At equilibrium, surface tensions control the tendency of macroscopic droplets to exist in nonwetting, partial wetting, or complete wetting configurations (separated, fused, and enveloped droplets, respectively, in Fig. 1). Furthermore, within the Cahn-Hilliard framework [76], the surface tensions between phases of mutually immiscible components are directly related to the pairwise interactions. Ref. [85] showed that predicting multiphase morphologies in multicomponent fluids corresponds to a graph decomposition problem, in which vertices indicate phases and edges indicate shared interfaces between phases. Designing interaction matrices for multicomponent mixtures that phase separate into droplets with prescribed (non)wetting architectures can therefore be achieved by encoding the desired morphology in a graph, enumerating affine inequality constraints on the interactions via graph decomposition, and solving the resulting linear program. Phase-field simulations were then used to demonstrate the efficacy of this design algorithm.
## IV Sequence-dependent theories and coarse-grained molecular models
In parallel with efforts to understand the phase behavior of simplified mixtures with many components, theoretical models have been developed to describe LLPS at a greater level of chemical detail in solutions with a small number of distinct biomolecular species (Fig. 2). In the condensate literature, such models can be broadly classified as sequence-specific coarse-grained (CG) IDP models, which represent nonbonded interactions between amino acids [86; 87; 88; 89; 90; 91] or chemical functional groups [92] using pair potentials, and "patchy-particle" [93; 94; 95] or "patchy-polymer" [96; 97] CG models, which encode specific interactions between discrete binding sites on each molecule (Fig. 7). We first describe key insights into multicomponent phase behavior from theoretical analyses of these types of models before reviewing recent multicomponent molecular simulation studies.
### Multicomponent field-theoretic approaches
Field-theoretic approaches have been used to predict the sequence-dependent phase diagrams of heteropolymers, with a particular emphasis on polyampholytes. By accounting for chain connectivity, and thus the primary sequence of the heteropolymer, these approaches improve upon mean-field treatments that consider all monomer-monomer interactions in a polymer solution independently [45]. Field-theoretic approaches incorporate sequence information by modeling the correlations between monomers within a single chain, which decay with increasing separation between monomers along the primary sequence (Fig. 7).

Figure 7: _Left:_ Sequence-dependent CG models represent IDPs as chains of simplified amino acids. Typically, the nonbonded interactions between amino acids of types \(A\) and \(B\) are modeled using a pair potential, \(u_{\text{AB}}(r)\). Analytical theories of sequence-dependent heteropolymer interactions also require a model of the spatial correlations between monomers that are spaced a distance \(|a-b|\) apart in the primary sequence. _Right:_ Patchy-particle models of multidomain proteins implement a higher level of coarse-graining by treating the PPI or RNA-binding interfaces on folded domains as specific binding sites on simplified particles; binding sites engage in at most one interaction at a time. Analytical theories associate an interaction volume with each pair of distinct binding-site types.
Ref. [98] treated the spatial correlations between monomers with the random-phase approximation (RPA) by assuming that the polymer configurations obey the Gaussian statistics of ideal chains. This assumption means that monomers on different chains are not spatially correlated and that the heteropolymer sequences affect the potential energy, but not the polymer conformations, of a mixture at finite concentration. Despite this simplification, RPA predictions correlate well with experimental measurements of the phase behavior [98] and single-chain properties [99] of charge-neutral polyampholytes. Of particular importance, the RPA theory rationalizes the observed increase in LLPS propensity of charge-neutral sequences with "blocky" as opposed to homogeneous charge patterns [100]. Blocky charge patterns also correlate with smaller radii of gyration of chains in the dilute phase, in line with prior studies using the "sequence charge decoration" order parameter [101, 102] and related blockiness metrics [103, 104]. The RPA theory was extended to charged polyelectrolytes in Ref. [105].
Of relevance to multicomponent mixtures, Ref. [106] applied RPA to mixtures of two distinct charge-neutral polyampholytes. Because RPA ignores spatial correlations between monomers on different chains, the electrostatic contribution to the RPA free energy can be factored into terms arising from each chain individually. The RPA free energy can therefore be mapped at low concentrations to a pairwise-interaction model in which the effective heterotypic interaction, \(B_{12}\), is the geometric mean of the two homotypic interactions, \(B_{11}\) and \(B_{22}\). The homotypic interaction coefficients can be calculated by applying RPA to each heteropolymer sequence individually. In light of the discussion in Sec. III.2, these results indicate that RPA predicts a rank-1 pairwise interaction matrix for charge-neutral polyampholyte mixtures, since \(B_{ij}=B_{ii}^{1/2}B_{jj}^{1/2}\,\forall i,j\). This observation further suggests that spatial correlations between different chains are needed to predict higher-rank interaction matrices for charge-neutral polyampholytes.
RPA has also been applied to polyelectrolyte mixtures. In solutions with two positively charged and one negatively charged polymer, Ref. [107] predicted that multiphase coacervates can form due to the repulsive heterotypic interactions between two positively charged sequences with differing linear charge densities. Ref. [108] then predicted that differences in the charge patterning between two positively charged sequences with identical linear charge densities are sufficient to drive the formation of two immiscible condensed phases.
An analogous field-theoretic treatment of heteropolymers interacting via short-ranged hydrophobic forces revealed that the leading order contribution to the interaction free energy is given by the sum of the interactions between all pairs of monomers in the mixture [109]. Although this model was not explicitly applied to multicomponent solutions, it suggests that, to leading order, the pairwise interaction matrix for heteropolymers interacting via short-ranged interactions is independent of their primary sequences. In other words, only the frequency of each monomer type in a heteropolymer sequence is relevant at this level of theory [110], and the rank of the interaction matrix cannot exceed the rank of the monomer-monomer interaction matrix, which may itself be rank-deficient [111, 109].
### Multicomponent associating fluid models
Concepts from associating fluid theory [112, 113] have been adopted to describe the interactions between binding sites on biomolecules that can only engage in one physical bond at a time. While the methods of Refs. [112] and [113] were originally developed to describe site-specific associative interactions between small molecules, this physical picture extends naturally to multidomain proteins or protein complexes whose constituent domains contain interfaces that interact specifically with other proteins or RNA sequences [114, 93]. The number of such binding sites therefore establishes the coarse-grained "valence" of the multidomain protein or complex (Fig. 7).
Associating fluid theory treats the attractive interactions between pairs of binding sites as perturbations to the free energy of a reference model, which represents the molecular mixture in the absence of binding sites. For example, the Flory-Huggins homopolymer model can serve as a reference model for a mixture of multidomain proteins, with the degree of polymerization \(L_{i}\) taken to be equal to the number of domains in each protein species \(i\)[93]. The concentration-dependent site-site binding probabilities are then determined from the chemical equilibrium equations
\[X_{iA}+X_{iA}\sum_{j=1}^{N}\rho_{j}\sum_{B=1}^{m_{j}}X_{jB}\Delta_{iA,jB}=1 \quad\forall i,A, \tag{8}\]
where \(X_{iA}\) represents the probability that the binding site of type \(A\) on a molecule of type \(i\) is not engaged in any associative interaction, and \(m_{i}\) is the valence of molecule type \(i\). The matrix \(\{\Delta_{iA,jB}\}\) represents the interaction volumes (i.e., the reciprocals of the dissociation constants) for the associative interactions between binding sites \(A\) and \(B\), which can in principle depend on spatial correlations in the reference model. Finally, the contribution to the free-energy density due to associative interactions is [113]
\[\beta f_{\rm assoc}=\sum_{i=1}^{N}\rho_{i}\left[\sum_{A=1}^{m_{i}}\left(\log X_{iA}-\frac{X_{iA}}{2}\right)+\frac{m_{i}}{2}\right]. \tag{9}\]
Ref. [115] showed that Eq. (8) has a unique solution and that Eq. (9) leads to a particularly simple expression for the associative contribution to the excess chemical potential when \(\Delta_{iA,jB}\) is concentration-independent, \(\beta\mu_{\text{assoc},i}=\sum_{A=1}^{m_{i}}\log X_{iA}\). Furthermore, in the limit of weak associative interactions, Eq. (9) reduces to a simple pairwise form, such that \(\beta\mu_{\text{assoc},i}\simeq-\sum_{j=1}^{N}\rho_{j}\sum_{A=1}^{m_{i}}\sum_{B=1}^{m_{j}}\Delta_{iA,jB}\). With regard to the discussion in Sec. III.2, the maximum rank of the pairwise interaction matrix is therefore given by the rank of \(\{\Delta_{iA,jB}\}\) in this limit.
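The chemical-equilibrium equations, Eq. (8), are typically solved by fixed-point iteration, after which Eq. (9) yields the associative free energy. A minimal sketch for a hypothetical two-species mixture with purely heterotypic site-site binding:

```python
import numpy as np

def solve_bond_probabilities(rho, valence, delta, tol=1e-12, max_iter=100_000):
    """Solve the chemical-equilibrium equations, Eq. (8), by damped fixed-point iteration.

    rho     : concentrations of the N molecular species
    valence : number of binding sites m_i on each species
    delta   : interaction-volume matrix between all binding sites (flattened over species)
    Returns X (probability that each site is unbonded) and the owner of each site.
    """
    owner = np.repeat(np.arange(len(rho)), valence)
    rho_site = np.asarray(rho, dtype=float)[owner]
    X = np.ones(len(owner))
    for _ in range(max_iter):
        X_new = 1.0 / (1.0 + delta @ (rho_site * X))
        if np.max(np.abs(X_new - X)) < tol:
            return X_new, owner
        X = 0.5 * (X + X_new)        # damping for robust convergence
    return X, owner

def f_assoc(rho, valence, X, owner):
    """Associative free-energy density, Eq. (9), in units of kT."""
    rho = np.asarray(rho, dtype=float)
    per_species = np.zeros(len(rho))
    np.add.at(per_species, owner, np.log(X) - X / 2.0)
    return np.sum(rho * (per_species + np.asarray(valence) / 2.0))

# Hypothetical mixture: a 3-valent and a 2-valent species whose sites bind only
# heterotypically, with interaction volume 50 in the units used for the concentrations.
rho, valence = [0.10, 0.15], [3, 2]
delta = np.zeros((5, 5))
delta[:3, 3:] = 50.0
delta[3:, :3] = 50.0
X, owner = solve_bond_probabilities(rho, valence, delta)
print("unbonded-site probabilities:", np.round(X, 3))
print("associative free-energy density (kT):", round(float(f_assoc(rho, valence, X, owner)), 4))
```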
The associating fluid framework has been widely applied to model biomolecular LLPS involving folded domains that interact via specific binding sites. A notable application of the associating fluid framework to multi-phase condensates was provided in Ref. [29], which used a simplified representation of an experimentally determined PPI network to predict the compositions and morphologies of coexisting stress granule and P-body condensates. Agreement between theory and experiment regarding the effects of concentration changes and binding-site modifications provided strong evidence that the phase behavior of these condensates is indeed governed by specific PPIs and interactions between RBDs and mRNA.
When the binding sites are assumed to represent individual amino acids of IDPs or short sequence motifs of IDPs and/or RNAs, associating fluid theory is commonly referred to as the "stickers-and-spacers" model of heteropolymer association [116, 117, 4, 118]. In this case, a Flory-Huggins homopolymer model with a degree of polymerization much greater than the binding-site valence (i.e., the number of "stickers") is typically taken as the reference model. Stickers-and-spacers applications of associating fluid theory have been successfully used to rationalize experimental observations of IDP-driven phase separation, including both thermodynamic and dynamical properties, in many contexts [119, 4, 120]. The assignment of the "stickers" to specific amino acids or short sequence motifs has varied depending on context across different studies, however, suggesting that additional contextual information may be required to predict the phase behavior of multicomponent IDP and RNA mixtures from their sequences. For further discussion of applications of associating fluid theory to biopolymers, we direct the reader to recent reviews on this subject, including Refs. [4] and [47].
### Insights from coarse-grained molecular simulations of multiphase condensates
#### iv.3.1 Polymer simulations with pair potentials
Molecular simulations have provided insights into the accuracy of analytical theories for describing sequence-dependent multicomponent phase behavior. In order to test the RPA predictions of Ref. [106] (see Sec. IV.1), Ref. [121] used a combination of field-theoretic and CG MD simulations to study the phase behavior of polyampholyte mixtures. These simulations demonstrated that pairs of charge-neutral sequences only exhibit demixing when the chains have sufficiently different (i.e., blocky versus uniform) charge distributions. These results are in line with the predictions of the RPA theory. Nonetheless, the authors found that excluded volume interactions--which are present in the MD simulations but are not included in the RPA calculations--are essential for observing demixing in MD simulations. The qualitative agreement with the theoretical predictions was therefore ascribed to the assumption of incompressibility in the RPA calculations. Nonetheless, this observation points to the need for more accurate theoretical treatments that account for excluded volume and interchain correlations.
Moving to systems with a third non-solvent component, Ref. [43] performed simulations of a three-component system comprising a prion-like polypeptide (PLP), an arginine-rich polypeptide (RRP), and RNA. In this system, competition between PLP and RNA for binding to RRP results in the demixing of PLP+RRP condensates into immiscible PLP and RNA+RRP phases when RNA is added. This experimental observation, which bears qualitative resemblance to the competing heterotypic model of Ref. [107] (see Sec. IV.1), was reproduced using MD simulations of a CG IDP/RNA model. These simulations also rationalized the experimental observation that the RNA parent concentration controls the morphology of the coexisting condensates.
In an attempt to uncover general sequence determinants of multiphase mixtures, Ref. [122] proposed a computational approach to design IDP sequences that result in multilayered condensates. To this end, the authors used a genetic algorithm to optimize pairs of sequences that form immiscible phases and a stable shared interface, starting from naturally occurring IDP sequences. The authors found that the net homotypic and heterotypic interactions must differ between the optimized IDPs, as expected. In many cases, these net interactions were found to depend primarily on the monomer frequencies, such that the immiscibility of the two phases was not affected by randomizing the sequences of the designed IDPs. However, when the genetic algorithm was initialized using a particular naturally occurring IDP sequence in one of the coexisting phases, the patterning of the amino-acid residues in the optimized partner sequence was found to be crucial for achieving immiscibility. The reasons for this dependence on sequence patterning in some, but not all, optimization scenarios are poorly understood. Nonetheless, sequences generated via this approach could provide challenging test cases for the further development of analytical sequence-dependent theories.
#### iv.3.2 Patchy-polymer simulations
"Patchy-polymer" models, which encode one-to-one interactions between binding sites on specific monomers, are appropriate CG models for testing the predictions of associating fluid theory. To explore the design rules
underlying multiphasic systems with this class of models, Ref. [123] introduced a lattice-based CG model of poly-PRM and poly-SH3 multidomain proteins. Proline-rich modules (PRMs) are short IDR sequence motifs that engage in specific interactions with folded SH3 domains [124], and as such form one-to-one binding interactions. Meanwhile, the linkers between motifs in the poly-PRM molecules and between the folded domains in the poly-SH3 molecules, respectively, were modeled either implicitly, representing ideal chains with Gaussian conformational statistics, or explicitly, using a variable number of lattice-site-occupying monomers. Simulations were conducted using two types of poly-SH3 molecules, which competed for binding to the PRMs. The authors found that differences in the linker properties, which tune the effective pairwise interactions between the molecules in the absence of the associative PRM/SH3 interactions, strongly affect the ability of the mixture to form immiscible condensed phases. This observation is consistent with the finding of Ref. [121] that excluded volume interactions are necessary for demixing. By contrast, the interaction volume associated with the attractive PRM/SH3 interactions was found to play a less important role in determining the degree of immiscibility, in line with the predictions of associating fluid theory in the strong-binding limit (\(\rho\Delta\gg 1\)) of Eq. (8). Ref. [123] also showed that the interfaces of immiscible condensates are similarly affected by the linker properties, since molecules containing linkers with greater excluded volumes are preferentially driven towards interfaces with the dilute phase.
#### iv.3.3 Patchy-particle models
"Patchy-particle" models allow for simulations with a larger number of distinct molecular species, along with a greater diversity of associative interactions, due to their simplicity. In complex mixtures with a variety of different associative interactions, it is useful to describe the collection of all possible one-to-one binding interactions by introducing an "interaction network" [29]. Ref. [125] explored this network concept using MD simulations of a 6-component mixture comprising 2, 3, and 4-valent patchy particles. The authors considered a nearly fully connected network with almost all equivalent interaction strengths, leading to the formation of a single condensed phase in mixtures with equimolar parent concentrations. Unsurprisingly, the density of associative bonding interactions in the condensed phase was found to correlate with the condensate stability, as measured by the critical temperature. Simulations further revealed that high-valence molecules, which phase separate with high critical temperatures in single-component solutions, tend to increase the critical temperature of multicomponent condensates when added to mixtures of components with lower average valence. Of direct experimental relevance, positive correlations were observed between the critical point of a molecular species in a single-component solution, its binding-site valence, and its partition coefficient with respect to a multicomponent condensate in a mixture with equimolar parent concentrations.
These observations can be understood qualitatively within the framework of associating fluid theory. Making the simplification that all binding sites interact with one another via the same interaction volume, such that \(\Delta_{iA,jB}=\bar{\Delta}\,\forall i,A,j,B\), the solution to Eq. (8) simplifies to \(X_{iA}=\bar{X}\,\forall i,A\). The associative contribution to the excess chemical potential (see Sec. IV.2) is thus \(\beta\mu_{\text{assoc},i}\approx m_{i}\log\bar{X}\) in the condensed phase and negligible in the dilute phase, implying that the partition coefficient, Eq. (2), is related to the binding-site valence by \(\text{PC}_{i}\propto\exp(m_{i})\). An approximate relationship between the stability of the condensed phase and the average valence of the mixture follows by a similar argument. The relatively small variations in interaction strengths in the simulated interaction network [125] can be considered as perturbations on these predictions. However, variations in the geometric arrangements of the binding sites, and their relatively minor effects on the partition coefficients, are not captured at this level of theory.
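As a purely illustrative sketch of the valence-scaling argument above (not code from Ref. [125]), one can tabulate the associative contribution to the partition coefficient for different binding-site valences; the unbonded-site fraction \(\bar{X}\) is an assumed input here rather than being solved self-consistently from Eq. (8).

```python
import numpy as np

# Illustrative only: with a uniform unbonded-site fraction X_bar in the condensed
# phase, beta*mu_assoc,i ~ m_i*log(X_bar) there and ~0 in the dilute phase, so the
# associative factor of the partition coefficient is X_bar**(-m_i), i.e. it grows
# exponentially with the binding-site valence m_i.
X_bar = 0.2                   # assumed unbonded-site fraction (not derived from Eq. (8))
valences = np.arange(2, 7)    # binding-site valences m_i

pc_assoc = X_bar ** (-valences.astype(float))
for m, pc in zip(valences, pc_assoc):
    print(f"m_i = {m}:  associative partition-coefficient factor ~ {pc:.1f}")
```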
The patchy-particle model of Ref. [125] was then extended to examine multiphase mixtures in Ref. [126]. The authors modified the interaction network by eliminating heterotypic associative interactions between select molecular species in order to construct immiscible condensates and multilayered structures. Analogously to the results of Ref. [123], simulations demonstrated that strong homotypic associative interactions lead to the formation of multiple immiscible condensates, while the introduction of strong heterotypic associative interactions tends to stabilize a single condensed phase. However, mixtures with competing heterotypic interactions between weakly and strongly associating species showed evidence of multiphase condensate formation.
## V Outlook and Challenges
We have reviewed recent progress in the development of statistical and sequence-specific theories of multicomponent fluids and multiphase condensate formation. Further advances in this area have the potential to reveal quantitative relationships between the molecular determinants of biomolecules, whether naturally occurring or engineered, and phase-separated self-organization in heterogeneous mixtures. In particular, inverse-design strategies offer a promising approach for rationally and systematically identifying the physicochemical properties of biomolecular mixtures responsible for the assembly of complex--and biologically functional--condensates. These theoretical and computational efforts will help to provide a roadmap for future experiments on heterogeneous biomolecular mixtures.
Nonetheless, many significant theoretical challenges remain to be explored, particularly with regard to the assumption of thermal equilibrium. Future directions for theoretical and simulation advances in this field include:
1. _Structuring and parameterizing multicomponent mixture models._ Further development of statistical mixture models (see Sec. III) will require incorporating information from sequence-dependent theories and simulations. In this way, it will be possible to investigate the thermodynamic consequences of physically motivated and biomolecularly relevant correlations among interaction parameters in multicomponent fluids, as well as to move beyond pairwise mixture models. Physicochemically motivated constraints should also be incorporated into inverse design approaches.
2. _Extending sequence-dependent coarse-grained simulations and theories to multicomponent mixtures._ Complementary insights can be gained by increasing the number of components in biomolecular mixtures treated using analytical theories or studied via coarse-grained molecular simulation (see Sec. IV). Simulations of recently developed CG IDP models [86; 87; 88; 89; 90; 91; 92] have demonstrated impressive agreement with experiments on both single-chain and individual condensed-phase properties, suggesting that multicomponent simulations using these models may also be capable of predicting multiphase coexistence [122] with similar accuracy. Further improvement in the chemical accuracy of multicomponent simulations is likely to be achieved through multiscale approaches that incorporate all-atom simulations of ribonucleoprotein condensates [127; 128; 129]. In future simulation studies, it will also be important to consider the role of competition between sequence-dependent clustering, aggregation, and LLPS behaviors, as observed in simple models of single-component heteropolymer solutions [130; 131; 132], in multicomponent mixtures.
3. _Accounting for nonequilibrium effects due to kinetic barriers._ Within the near-equilibrium framework, kinetic effects can lead to differences between the phase behavior that is observed in simulations and experiments and what is predicted at global thermodynamic equilibrium. For example, nucleation pathways [133] and slow rates of transitions between metastable states [134; 135; 136] can affect the molecular compositions and multiphasic organization of phase-separated condensates on biologically relevant timescales. The consequences of these nonequilibrium effects in systems with many components require further exploration.
4. _Exploring differences in phase behavior at nonequilibrium steady states._ Phase separation can also occur in fluids at nonequilibrium steady states (NESSs), which can arise due to chemostatted chemical reactions [18]. Differences between thermal equilibrium and a NESS can manifest, for example, in the nucleation behavior [137; 138] as well as the growth and coarsening dynamics [139; 140; 141; 142; 143] of phase-separated droplets. The implications of chemically driven NESSs for multiphase self-organization are largely unexplored.
5. _Developing theoretical tools for emerging experimental applications._ A variety of experimental platforms for manipulating biomolecular LLPS have recently been developed using "designer" peptides [144; 145; 146; 147], nucleic acids [148; 32; 149], and nonbiological polymers [149]. Chemically specific computational tools are needed to guide the rational design of multicomponent, multiphasic mixtures using these experimental platforms. With a better understanding of condensate compositional control in heterogeneous environments, combined theoretical and experimental engineering approaches have the potential to bring about practical techniques for manipulating complex biological processes _in vivo_[150].
In summary, LLPS can give rise to highly nontrivial spatial organization in multicomponent biomolecular fluids. Nevertheless, considerable gaps persist in our understanding of the relationship between molecular-level properties and emergent phase behavior in heterogeneous mixtures. Addressing this multifaceted question therefore represents an important way in which chemical theory and simulation can contribute to research at the forefront of molecular and cell biology, while helping to elucidate the origins of self-organization in living systems.
This work is supported by the National Science Foundation (DMR-2143670).
|
2304.00758 | Validation of GBS plasma turbulence simulation of the TJ-K stellarator | We present a validation of a three-dimensional, two-fluid simulation of
plasma turbulence in the TJ-K stellarator, a low temperature plasma experiment
ideally suited for turbulence measurements. The simulation is carried out by
the GBS code, recently adapted to simulate 3D magnetic fields. The comparison
shows that GBS retrieves the main turbulence properties observed in the device,
namely the fact that transport is dominated by fluctuations with low poloidal
mode number. The poloidal dependence of the radial $\text{E}\times\text{B}$
turbulent flux is compared on a poloidal plane with elliptical flux surfaces,
where a very good agreement between experiment and simulation is observed, and
on another with triangular flux surfaces, which shows a poorer comparison. The
fluctuation levels in both cases are underestimated in the simulations. The
equilibrium density profile is well retrieved by the simulation, while the
electron temperature and the electrostatic potential profiles, which are very
sensitive to the strength and localization of the sources, do not agree well
with the experimental measurements. | A. J. Coelho, J. Loizu, P. Ricci, M. Ramisch, A. Köhn-Seemann, G. Birkenmeier, K. Rahbarnia | 2023-04-03T07:19:27Z | http://arxiv.org/abs/2304.00758v1 | # Validation of GBS plasma turbulence simulation of the TJ-K stellarator
###### Abstract
We present a validation of a three-dimensional, two-fluid simulation of plasma turbulence in the TJ-K stellarator, a low temperature plasma experiment ideally suited for turbulence measurements. The simulation is carried out by the GBS code, recently adapted to simulate 3D magnetic fields. The comparison shows that GBS retrieves the main turbulence properties observed in the device, namely the fact that transport is dominated by fluctuations with low poloidal mode number. The poloidal dependence of the radial E\(\times\)B turbulent flux is compared on a poloidal plane with elliptical
flux surfaces, where a very good agreement between experiment and simulation is observed, and on another with triangular flux surfaces, which shows a poorer comparison. The fluctuation levels in both cases are underestimated in the simulations. The equilibrium density profile is well retrieved by the simulation, while the electron temperature and the electrostatic potential profiles, which are very sensitive to the strength and localization of the sources, do not agree well with the experimental measurements.
## 1 Introduction
As stellarators are becoming a viable option for a fusion reactor [1, 2], fluid codes are being extended to non-axisymmetric magnetic field geometries to study the properties of plasma turbulence in the stellarator boundary. BSTING, an extension of the BOUT++ code [3], simulated seeded filaments in a rotating ellipse [4]. More recently, the first global flux-driven simulations of a stellarator, performed by using the GBS code [5, 6, 7], considered a vacuum magnetic field generated with the Dommaschk potentials, reporting important differences with respect to tokamak simulations, namely the existence of a low-\(m\) mode, where \(m\) is the poloidal mode number, dominating the turbulent transport [8]. Such a surprising result calls for the validation of turbulence simulations in stellarators.
In this paper, we present the first validation of a simulation of plasma turbulence in a stellarator configuration against experimental measurements. We compare helium discharges carried out in the TJ-K stellarator with a simulation performed using the GBS code. TJ-K is a stellarator experiment ideally suited for a detailed comparison with simulations [9]. Because of the low plasma density and electron temperature, Langmuir probes can access the entire plasma volume and provide equilibrium as well as turbulence measurements that can be easily compared with simulations. In addition, since collisionality in TJ-K is large in the whole plasma volume, the fluid equations evolved by GBS are valid both in the core and in the boundary regions. Finally, the small size of the machine makes its simulation attractive from the point of view of the
computational cost.
Previous turbulence modelling of TJ-K was accomplished either through modified Hasegawa-Wakatani models in slab geometry [10, 11] or by using a fluid model in a simplified geometry with the characteristic plasma parameters of TJ-K [12]. Due to their simplicity, a detailed one-to-one comparison of the simulations against experimental results was not attempted. The present work leverages previous validations of GBS against experiments carried out in axisymmetric configurations [13, 14, 15, 16]. Thanks to the full-f nature of the GBS simulation code, which does not make a separation between background and fluctuations, we validate equilibrium as well as fluctuating quantities (density and electrostatic potential).
The paper is organized as follows. Section 2 describes the TJ-K experiment. In Section 3, the physical model implemented in GBS is presented. In Section 4, the simulation results are presented and their validation with the TJ-K experiment is reported. Finally, we discuss our results and draw our conclusions in Section 5.
## 2 The TJ-K experiment
TJ-K is a six-field period stellarator with a major radius of 0.6 m and a minor radius of, approximately, 0.1 m [9]. The vacuum magnetic field is generated by a helical coil that loops around the vessel six times, and two vertical-field circular coils, as shown in Fig. 1. The magnetic field strength is, approximately, 70 mT. The Poincare plots at four different toroidal angles are shown in Fig. 2 with the toroidal vessel depicted in grey.
Continuous lines correspond to closed flux surfaces, while dashed lines correspond to open flux surfaces. The last closed flux surface (LCFS) is represented in red. In fact, the plasma is limited at the top and bottom part of the vessel at two different poloidal planes for every field period. The profile of the rotational transform is approximately flat, with a value \(\iota=0.28\) at the magnetic axis (henceforth defined by the coordinates \(R_{\rm axis}\) and \(Z_{\rm axis}\), that vary along the toroidal angle \(\phi\)).
In TJ-K, the plasma breakdown and heating is achieved with electron cyclotron resonant heating (ECRH), using a 2.45 GHz microwave system with up to 6 kW heating power [17]. When working with hydrogen or helium gases, this yields typical line-averaged plasma densities of order \(10^{17}\,\rm m^{-3}\) and the electron temperature around \(10\,\rm eV\), while ions are cold (the ion temperature is less than \(1\,\rm eV\)). This results in a plasma collisionality \(\nu^{*}\simeq 10\), with \(\nu^{*}\) defined as the ratio between the trapped particle collision frequency and the banana bounce frequency. As a consequence, TJ-K plasmas are in the Pfirsch-Schluter regime [18].
The TJ-K Langmuir probes enable measurements with good spatial resolution. The radial profiles of density and electron temperature are measured with a radially movable Langmuir probe, and the radial profile of the electrostatic potential with an emissive probe. In addition, density and plasma potential fluctuations are measured with two multi-probe arrays consisting of 64 Langmuir probes each, located at two different toroidal locations, one at an outer port (OPA) at \(\phi=30^{\circ}\) and the other at a top port (TPA) at \(\phi=10^{\circ}\). The probe tips are aligned to the same flux surface as shown
in Fig. 1. This surface, referred to as the _reference surface_ in the following, corresponds to the orange surface in the Poincare plots of Fig. 2. These two arrays allow for detailed measurements of the potential and density fluctuations as a function of the poloidal angle, \(\theta\). In particular, the radial \(\mathrm{E}\times\mathrm{B}\) turbulent particle flux is evaluated as
\[\Gamma^{\mathrm{exp}}_{E\times B}(\theta_{i})=\frac{1}{B_{i}}\left\langle-\frac {\widetilde{\Phi}_{\mathrm{fl},i+1}-\widetilde{\Phi}_{\mathrm{fl},i-1}}{2 \Delta y}\widetilde{I}_{\mathrm{sat},i}\right\rangle_{t}, \tag{1}\]
where \(\widetilde{\Phi}_{\mathrm{fl},i}\) and \(\widetilde{I}_{\mathrm{sat},i}\) are, respectively, the floating potential and ion saturation current fluctuations as measured by the \(i\)-th probe at the poloidal position \(\theta_{i}\), \(B_{i}\) is the magnetic field strength at the same position, and \(\Delta y\approx 8\) mm is the distance between adjacent probe tips covering the flux surface. The 64 tips alternate between measurements of \(\widetilde{\Phi}_{\mathrm{fl}}\) and \(\widetilde{I}_{\mathrm{sat}}\), hence the tips \(\{i-1,i,i+1\}\) are used to compute the flux at \(\theta_{i}\). The temporal average is carried out over \(1024\) ms of the data sampled at \(1\) MHz, resulting in a very small uncertainty of the mean value. Systematic errors due to possible probe misalignments can be estimated to have maximum relative values of \(13\%\)[19]. The use of the floating potential instead of the plasma potential in the evaluation of \(\Gamma^{\mathrm{exp}}_{E\times B}\) is justified by the negligible temperature fluctuations present in the experiment [20].
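As an illustration of how Eq. (1) can be evaluated from probe-array time series (a simplified sketch, not the actual TJ-K analysis code), the following Python snippet assumes, unlike the real arrays with alternating tip types, that both fluctuation signals are available at every poloidal position; all names are hypothetical.

```python
import numpy as np

def exb_flux_from_probes(phi_fl, i_sat, B, dy=8e-3):
    """Evaluate Eq. (1) along the probe array (illustrative simplification).

    phi_fl, i_sat : (n_probes, n_samples) floating-potential and ion-saturation
                    fluctuation signals (here assumed available at every tip)
    B             : (n_probes,) magnetic field strength at the probe positions
    dy            : poloidal spacing between adjacent tips [m]
    """
    phi_t = phi_fl - phi_fl.mean(axis=1, keepdims=True)
    isat_t = i_sat - i_sat.mean(axis=1, keepdims=True)

    # -(phi_{i+1} - phi_{i-1}) / (2*dy): centred poloidal derivative of the potential
    e_pol = -(phi_t[2:, :] - phi_t[:-2, :]) / (2.0 * dy)

    # time average of the product with the density proxy at the central tip i
    return (e_pol * isat_t[1:-1, :]).mean(axis=1) / B[1:-1]
```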
## 3 The GBS simulation
GBS [5, 6, 7] is a three-dimensional, global, two-fluid, flux-driven code that solves the drift-reduced Braginskii equations [21], valid in the high-collisionality regime that often characterizes the plasma boundary of magnetic fusion devices as well as the core of
low-temperature devices such as TJ-K. GBS evolves all quantities in time, without separation between equilibrium and fluctuating parts. We consider here the cold ion and the electrostatic limits, we apply the Boussinesq approximation [5] and neglect gyroviscous terms as well as the coupling to the neutral dynamics [22]. Within these approximations, the drift-reduced model evolved by GBS for the simulation considered in this paper is:
\[\frac{\partial n}{\partial t}=-\frac{\rho_{*}^{-1}}{B}\left[\Phi,n\right]- \nabla_{\parallel}(nV_{\parallel e})+\frac{2}{B}\left[C(p_{e})-nC(\Phi) \right]+D_{n}\nabla_{\perp}^{2}n+D_{n}^{\parallel}\nabla_{\parallel}^{2}n+ \mathcal{S}_{n} \tag{2}\]
\[\frac{\partial T_{e}}{\partial t}= -\frac{\rho_{*}^{-1}}{B}\left[\Phi,T_{e}\right]-V_{\parallel e} \nabla_{\parallel}T_{e}+\frac{4T_{e}}{3B}\left[\frac{C(p_{e})}{n}+\frac{5}{2} C(T_{e})-C(\Phi)\right] \tag{3}\] \[+\frac{2T_{e}}{3n}\left[0.71\nabla_{\parallel}j_{\parallel}-n \nabla_{\parallel}V_{\parallel e}\right]+D_{T_{e}}\nabla_{\perp}^{2}T_{e}+ \chi_{\parallel e}\nabla_{\parallel}^{2}T_{e}+\mathcal{S}_{T_{e}}\]
Figure 1: Schematics of the TJ-K experiment. The magnetic field is generated by a helical coil (blue) and two vertical-field circular coils (orange and red). Two multi-Langmuir probe arrays are distributed poloidally along the same flux surface at two different toroidal angles (TPA and OPA).
\[\frac{\partial V_{\parallel e}}{\partial t}= -\frac{\rho_{*}^{-1}}{B}\left[\Phi,V_{\parallel e}\right]-V_{ \parallel e}\nabla_{\parallel}V_{\parallel e}+\frac{m_{i}}{m_{e}}\left[\nu j_{ \parallel}+\nabla_{\parallel}\Phi-\frac{\nabla_{\parallel}p_{e}}{n}-0.71 \nabla_{\parallel}T_{e}\right] \tag{4}\] \[+\eta_{0e}\nabla_{\parallel}^{2}V_{\parallel e}+D_{V_{\parallel e }}\nabla_{\perp}^{2}V_{\parallel e}\]
\[\frac{\partial V_{\parallel i}}{\partial t}=-\frac{\rho_{*}^{-1}}{B}\left[ \Phi,V_{\parallel i}\right]-V_{\parallel i}\nabla_{\parallel}V_{\parallel i} -\frac{1}{n}\nabla_{\parallel}p_{e}+\eta_{0i}\nabla_{\parallel}^{2}V_{ \parallel i}+D_{V_{\parallel i}}\nabla_{\perp}^{2}V_{\parallel i} \tag{5}\]
Figure 2: Poincaré plots of the TJ-K magnetic field at four different toroidal angles. The circular vessel is depicted in grey and the GBS simulation box in green. Closed flux surfaces are represented by continuous lines and open flux surfaces by dashed black lines. These are separated by the LCFS in red. The orange line refers to the surface where density and potential fluctuation measurements are performed (this is referred to as the _reference surface_).
\[\frac{\partial\omega}{\partial t}=-\frac{\rho_{*}^{-1}}{B}\left[\Phi,\omega\right]-V_{\parallel i}\nabla_{\parallel}\omega+\frac{B^{2}}{n}\nabla_{\parallel}j_{\parallel}+\frac{2B}{n}C(p_{e})+D_{\omega}\nabla_{\perp}^{2}\omega+D_{\omega}^{\parallel}\nabla_{\parallel}^{2}\omega \tag{6}\]
\[\nabla_{\perp}^{2}\Phi=\omega \tag{7}\]
In Eqs. (2-7) all quantities are normalized to reference values. Density \(n\) and electron temperature \(T_{e}\) are normalized to the reference values \(n_{0}\) and \(T_{e0}\); electron parallel velocity \(V_{\parallel e}\) and ion parallel velocity \(V_{\parallel i}\) are both normalized to the sound speed \(c_{s0}=\sqrt{T_{e0}/m_{i}}\); vorticity \(\omega\) and the electrostatic potential \(\Phi\) are normalized to \(T_{e0}/(e\rho_{s0}^{2})\) and \(T_{e0}/e\); time is normalized to \(R_{0}/c_{s0}\), where \(R_{0}\) is the machine major radius; perpendicular and parallel lengths are normalized to the ion sound Larmor radius, \(\rho_{s0}=\sqrt{T_{e0}m_{i}}/(eB_{0})\), and \(R_{0}\), respectively. The normalized parallel current is \(j_{\parallel}=n(V_{\parallel i}-V_{\parallel e})\) and the magnetic field \(B\) is normalized to its norm at the magnetic axis, \(B_{0}\).
The dimensionless parameters appearing in the equations are the normalized ion sound Larmor radius \(\rho_{*}=\rho_{s0}/R_{0}\); the normalized electron and ion parallel diffusivities, \(\chi_{\parallel e}\) and \(\chi_{\parallel i}\) (here considered constant); the normalized electron and ion viscosities, \(\eta_{0e}\) and \(\eta_{0i}\), which we also set to constant values; and the normalized Spitzer resistivity \(\nu=\nu_{0}T_{e}^{3/2}\) with \(\nu_{0}\) given in Ref. [23]. Small numerical diffusion terms such as \(D_{n}\nabla_{\perp}^{2}n\) and \(D_{n}^{\parallel}\nabla_{\parallel}^{2}n\) (and similar for the other fields) are introduced to improve the numerical
stability of the code, and the simulation results show that they lead to significantly lower perpendicular transport than the turbulent one. The terms \(\mathcal{S}_{n}\) and \(\mathcal{S}_{T_{e}}\) denote the normalized sources of density and electron temperature. Magnetic presheath boundary conditions, described in Refs. [24, 25], are applied to all quantities at the end of the field lines intersecting the walls, except for density and vorticity. These satisfy \(\partial_{s}n=0\) and \(\omega=0\), respectively, where \(s\) is the direction normal to the wall.
The normalized geometrical operators appearing in Eqs. (2-7) are the parallel gradient \(\nabla_{\parallel}u=\mathbf{b}\cdot\mathbf{\nabla}u\), the Poisson brackets \([\Phi,u]=\mathbf{b}\cdot[\mathbf{\nabla}\Phi\times\mathbf{\nabla}u]\), the curvature operator \(C(u)=(B/2)\left[\mathbf{\nabla}\times(\mathbf{b}/B)\right]\cdot\mathbf{\nabla}u\), the parallel Laplacian \(\nabla_{\parallel}^{2}u=\mathbf{b}\cdot\mathbf{\nabla}(\mathbf{b}\cdot\mathbf{\nabla}u)\) and the perpendicular Laplacian \(\nabla_{\perp}^{2}u=\mathbf{\nabla}\cdot[(\mathbf{b}\times\mathbf{\nabla}u)\times\mathbf{b}]\).
The simulation domain is a torus of radius \(R_{0}\) with a rectangular cross-section. Since the vessel of the experiment has, instead, a circular cross-section, we choose the size of the domain in order to limit the plasma at the same positions as in the experiment. This simulation domain is shown in Fig. 2 (green line). The physical model in Eqs. (2-7) is discretized using a regular cylindrical grid \((R,\phi,Z)\), with \(R\) the radial coordinate, \(\phi\) the toroidal angle and \(Z\) the vertical coordinate. Equations (2-6) are advanced in time with an explicit Runge-Kutta fourth-order scheme, while spatial derivatives are computed with a fourth-order finite difference scheme.
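For illustration, the two numerical ingredients named above (an explicit fourth-order Runge-Kutta time advance and fourth-order finite differences) can be sketched as follows; this is a generic, periodic-stencil toy implementation with our own naming, not the GBS discretization.

```python
import numpy as np

def d_dx_4th(f, dx):
    """Fourth-order centred finite difference along axis 0 (periodic stencil)."""
    return (-np.roll(f, -2, axis=0) + 8.0*np.roll(f, -1, axis=0)
            - 8.0*np.roll(f, 1, axis=0) + np.roll(f, 2, axis=0)) / (12.0*dx)

def rk4_step(rhs, state, dt):
    """One explicit fourth-order Runge-Kutta step for d(state)/dt = rhs(state)."""
    k1 = rhs(state)
    k2 = rhs(state + 0.5*dt*k1)
    k3 = rhs(state + 0.5*dt*k2)
    k4 = rhs(state + dt*k3)
    return state + dt*(k1 + 2.0*k2 + 2.0*k3 + k4)/6.0
```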
The magnetic field was initially computed numerically using the MAKEGRID code from the STELLOPT package [26], which uses the Biot-Savart law to determine the magnetic field at a specified location, using the coil geometry and coil currents as an
input. However, since the vessel of the experiment, as well as the helical coil, fall inside the GBS domain, singularities in the magnetic field at the position of the coil appear, hindering an appropriate implementation of the geometrical operators. To circumvent this issue, we select one of the open flux surfaces (obtained with the FIELDLINES code from the same package) and provide it as an input to REGCOIL [27]. This code seeks a surface current distribution on an arbitrary toroidal surface, which we choose to enclose the GBS rectangular domain, such that \(\mathbf{B}\cdot\mathbf{n}=0\) at the provided surface. Furthermore, since \(\nabla\times\mathbf{B}=0\) in vacuum, the problem is reduced to a Laplace equation with Neumann boundary conditions, that admits a unique solution (up to a re-scaling factor). This results in a magnetic field obtained by REGCOIL that is exactly the same as that of TJ-K inside the chosen surface (apart from the re-scaling factor). With this approach, we obtain a non-singular vacuum magnetic field that coincides with that of TJ-K from the magnetic axis up to the selected open flux surface.
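The filamentary Biot-Savart evaluation underlying MAKEGRID can be sketched as below (an illustrative straight-segment discretization with hypothetical names, not the STELLOPT implementation); it also makes explicit the \(1/r^{3}\) singularity at the coil position that motivates the REGCOIL-based construction described above.

```python
import numpy as np

MU0 = 4.0e-7 * np.pi  # vacuum permeability [T m / A]

def biot_savart(points, coil_xyz, current):
    """Vacuum field of a closed filamentary coil via straight-segment Biot-Savart.

    points   : (N, 3) evaluation positions [m]
    coil_xyz : (M, 3) ordered points along the closed coil filament [m]
    current  : coil current [A]
    """
    dl = np.roll(coil_xyz, -1, axis=0) - coil_xyz     # segment vectors (loop closed)
    mid = coil_xyz + 0.5*dl                           # segment midpoints

    B = np.zeros_like(points, dtype=float)
    for i, r in enumerate(points):
        rvec = r - mid                                # field point minus source point
        rnorm = np.linalg.norm(rvec, axis=1)[:, None] # diverges as r approaches the coil
        B[i] = MU0*current/(4.0*np.pi) * np.sum(np.cross(dl, rvec)/rnorm**3, axis=0)
    return B
```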
TJ-K plasmas are not fully ionized. Hence, the electron-neutral collisions could affect the drift-reduced Braginskii model, in particular the parallel friction and parallel thermal conduction terms. This can be assessed by quantifying the ratio \(\nu_{\mathrm{en}}/\nu_{\mathrm{ei}}\), where \(\nu_{\mathrm{ei}}\) is the Coulomb electron-ion collision frequency [28] and \(\nu_{\mathrm{en}}=n_{n}\sigma_{\mathrm{en}}v_{Te}\) is the electron-neutral collision frequency, with \(n_{n}\) the neutral density, \(\sigma_{\mathrm{en}}\) the momentum transfer cross-section for electrons impacting neutrals and \(v_{Te}\) the electron thermal velocity. The ratio is shown in Fig. 3 for electron temperatures ranging between \(5\,\mathrm{eV}\) and \(17\,\mathrm{eV}\), for which \(\sigma_{\mathrm{en}}\approx 10^{-19}\,\mathrm{m}^{2}\) [29]. The different curves refer to different plasma
densities and neutral temperatures, and show that the electron-neutral interaction can become important at large electron temperatures. Nevertheless, in this work we assume that these interactions do not play a role.
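The estimate behind Fig. 3 can be reproduced schematically as follows; the electron-ion collision frequency is taken here in a standard formulary form with an assumed Coulomb logarithm, and the thermal-speed convention is ours, so the numbers are only indicative.

```python
import numpy as np

E_CH, K_B, M_E = 1.602e-19, 1.381e-23, 9.109e-31   # SI constants

def nu_ratio(Te_eV, n_e, p_neutral=3e-3, T_gas=300.0,
             sigma_en=1e-19, coulomb_log=10.0):
    """Rough nu_en/nu_ei estimate for TJ-K-like parameters (illustrative).

    Te_eV [eV], n_e [m^-3], p_neutral [Pa] (3 mPa here), T_gas [K],
    sigma_en [m^2]: electron-neutral momentum-transfer cross-section.
    """
    n_n = p_neutral / (K_B * T_gas)            # neutral density from the ideal gas law
    v_te = np.sqrt(E_CH * Te_eV / M_E)         # electron thermal speed (one convention)
    nu_en = n_n * sigma_en * v_te
    nu_ei = 2.91e-12 * n_e * coulomb_log * Te_eV**-1.5   # formulary-style estimate [s^-1]
    return nu_en / nu_ei

for Te in (5.0, 10.0, 17.0):
    print(f"Te = {Te:5.1f} eV:  nu_en/nu_ei ~ {nu_ratio(Te, n_e=1e17):.2f}")
```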
We consider that the source of density is due to ionization processes. This is modeled as \(\mathcal{S}_{n}=n_{n}n\left\langle\sigma_{\mathrm{ion}}v\right\rangle_{v}\). However, because the neutral pressure in TJ-K is approximately constant throughout the plasma volume and ionization is due to fast electrons (whose density is assumed proportional to the plasma density), the density source is recast as \(\mathcal{S}_{n}=\alpha_{n}n\), with \(\alpha_{n}\) a constant. We neglect recombination processes because of the typical TJ-K temperatures.
The electron temperature source is composed of the external energy input power and a term associated with the particle source, leading to
\[\mathcal{S}_{Te}=\frac{R_{0}/c_{s0}}{T_{e0}}\left[\frac{2}{3}\frac{\mathcal{P }}{n_{0}n}-\frac{2}{3}n_{0}\left\langle\sigma v\right\rangle_{\mathrm{ion}} \left(E_{\mathrm{ion}}+\frac{3}{2}T_{e}\right)\right], \tag{8}\]
Figure 3: Ratio between electron-neutral and electron-ion collision frequencies at different plasma densities and gas temperatures, \(T_{gas}\). We indicated with \(n_{17}\) the plasma density in units of \(10^{17}\,\mathrm{m}^{-3}\). The neutral pressure was assumed to be \(3\,\mathrm{mPa}\), the gas pressure of the TJ-K discharges presented in this paper.
where \(\mathcal{P}\) is the ECRH input power density, \(\left\langle\sigma v\right\rangle_{\mathrm{ion}}\) is the ionization reaction-rate, which is of the order of \(10^{-15}\,\mathrm{m}^{3}/\mathrm{s}\) at \(10\,\,\mathrm{eV}\)[30], and \(E_{\mathrm{ion}}\) is the ionization energy. As described in Ref. [17], power deposition in TJ-K occurs at the upper hybrid (UH) resonance layer. The space-dependent input power \(\mathcal{P}\) is given by
\[\mathcal{P}=\frac{\mathcal{P}_{\mathrm{ant}}}{\int_{V}P(\mathbf{r})d\mathbf{r }}P(\mathbf{r}), \tag{9}\]
where \(\mathcal{P}_{\mathrm{ant}}=1.8\,\,\mathrm{kW}\) is the power launched by the antennas in the TJ-K discharges considered here and \(P(\mathbf{r})\) is the UH resonant layer, shown in Fig. 4 at different poloidal planes. \(P(\mathbf{r})\) is obtained by matching the UH frequency with the antenna frequency, \(\sqrt{\omega_{\mathrm{pe}}^{2}+\omega_{\mathrm{ce}}^{2}}=2\pi f_{\mathrm{antenna}}\), and then assuming a Gaussian deposition profile located around the isocountours that solve this relation. The temperature source in Eq. (8) can thus be written as
\[\mathcal{S}_{Te}=\alpha_{Te}\left[\frac{P(\mathbf{r})}{n}-3\times 10^{-4}E_{ \mathrm{ion}}^{\mathrm{eV}}(1+T_{e})\right]. \tag{10}\]
For helium, we consider the first ionization energy, \(E_{\mathrm{ion}}^{\mathrm{eV}}=24.6\,\mathrm{eV}\). The constants \(\alpha_{n}\) and \(\alpha_{Te}\) are adjusted such that the peak values of density and temperature are close to the reference values, \(n\approx 1\) and \(T_{e}\approx 1\). This adjustment accounts for the uncertainties on the averaging of the ionization reaction-rate and on the effective absorption by the plasma of the ECRH power.
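A schematic construction of the deposition profile \(P(\mathbf{r})\) and of the sources \(\mathcal{S}_{n}\) and \(\mathcal{S}_{T_{e}}\) is sketched below; the Gaussian width of the UH layer is an assumed illustrative value (not prescribed in the text above), and the normalization by a discrete sum stands in for the volume integral of Eq. (9).

```python
import numpy as np

E_CH, M_E, EPS0 = 1.602e-19, 9.109e-31, 8.854e-12
F_ANTENNA = 2.45e9                       # antenna frequency [Hz]

def uh_deposition(n_e, B, rel_width=0.01):
    """Gaussian deposition profile around the UH resonance (illustrative form).

    n_e [m^-3] and B [T] are arrays on the spatial grid; rel_width is an assumed
    relative width of the deposition layer in frequency.
    """
    w_pe2 = n_e * E_CH**2 / (EPS0 * M_E)             # plasma frequency squared
    w_ce2 = (E_CH * B / M_E)**2                      # cyclotron frequency squared
    mismatch = np.sqrt(w_pe2 + w_ce2) - 2.0*np.pi*F_ANTENNA
    P = np.exp(-(mismatch / (rel_width * 2.0*np.pi*F_ANTENNA))**2)
    return P / P.sum()                               # discrete stand-in for Eq. (9)

def gbs_sources(n, Te, P, alpha_n=0.03, alpha_Te=0.7, E_ion_eV=24.6):
    """Normalized sources S_n = alpha_n*n and S_Te of Eq. (10) (n, Te normalized)."""
    return alpha_n * n, alpha_Te * (P / n - 3.0e-4 * E_ion_eV * (1.0 + Te))
```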
Turning now to the simulation parameters, we consider \(\rho_{*}^{-1}=60\), obtained by using the reference values \(T_{e0}=10\) eV and \(n_{0}=10^{17}\,\mathrm{m}^{-3}\). We use an atomic number \(Z=1\) since helium is mostly singly ionized at the reference temperature. We further use \(m_{i}/m_{e}=900\), \(\chi_{\parallel e}=0.5\), \(\eta_{0e,i}=1.0\), \(D_{n}=D_{Te}=D_{V_{\parallel e}}=D_{V_{\parallel i}}=D_{\omega}=0.2\), \(D_{n}^{\parallel}=0.18\), \(D_{\omega}^{\parallel}=0.01\), \(\alpha_{n}=0.03\) and \(\alpha_{Te}=0.7\). Concerning the numerical parameters, the simulation is performed with a time-step of \(2.0\times 10^{-5}R_{0}/c_{s0}\) and a grid resolution of \(\Delta R=\Delta Z=0.5\rho_{s0}\) and \(\Delta\phi=2\pi/(20\times 6)\), i.e., 20 poloidal planes per field period. A convergence test in the toroidal direction, made by increasing the number of planes from 20 to 30 per field period, shows results similar to the ones presented here.
Figure 4: The UH resonant layer, \(P(\mathbf{r})\), considered in the simulation. The layer is obtained by matching the UH frequency with the frequency of the antenna, \(\sqrt{\omega_{\mathrm{pe}}^{2}+\omega_{\mathrm{ce}}^{2}}=2\pi f_{\mathrm{antenna}}\), being \(f_{\mathrm{antenna}}=2.45\) GHz.
The simulation of TJ-K we consider is started from an initial state with background noise and, after a transient, reaches a quasi-steady state where sources, parallel and perpendicular transport, and losses at the vessel balance each other. The analysis of the simulations results is performed during this quasi-steady state.
## 4 Simulation results and comparison with the experiment
We start by comparing the one-dimensional time-averaged (equilibrium) profiles of density, temperature, potential and radial electric field, \(E_{R}=-\nabla\Phi\cdot\widehat{\bf e}_{\bf R}\), along the \(R\) direction, at \(Z=0\) and \(\phi=30^{\circ}\). The comparison is shown in Fig. 5. The peak value as well as the profile of the simulated density agree well with the experiment. The experimental temperature profile is hollow and this is attributed to the fact that the resonance layer of the UH is located at the edge (see Fig. 4). On the other hand, the simulation displays a temperature profile that decays for \(R>R_{\rm axis}\). Furthermore, the temperature on axis is larger in the simulation than in the experiment. We note that the simulated temperature profile is sensitive to \(\alpha_{Te}\). Reducing this parameter lowers the value of the temperature, but the equilibrium gradients change, eventually suppressing turbulence. The magnitude of the electrostatic plasma potential is similar in both experiment and simulation (between 8 and 14 V), but the radial dependencies are different. In fact, the simulation reveals a hollow potential profile, whereas in the experiment the profile increases as the axis is approached. This makes the profiles of \(E_{R}\) also different, even if its order of magnitude is correctly captured.
Snapshots of the density and electrostatic potential on two different poloidal planes, \(\phi=10^{\circ}\) and \(\phi=30^{\circ}\), are shown in Fig. 6. We observe that a low-\(m\) coherent mode dominates the plasma dynamics, where \(m\) is the poloidal mode number. We Fourier decompose the fluctuations along the \(y\)-coordinate, where \(y\) is the arc length along the poloidal projection of the reference surface. Fig. 7 (left) shows the power spectrum of density fluctuations obtained at \(\phi=30^{\circ}\) and compares it with the experiment. The simulation retrieves the coherent mode present in the experiment at \(k_{y}\rho_{s0}\approx 0.4\), which corresponds to an \(m=4\) structure. The experimental spectrum decays with a power law \((k_{y}\rho_{s0})^{-1.9}\), consistent with the inverse cascade in two-dimensional fluid turbulence [31]. On the other hand, the observed power law in the simulation, \((k_{y}\rho_{s0})^{-1.2}\), is slightly
Figure 5: Radial profiles of the time-averaged (equilibrium) density, electron temperature, potential and radial electric field. Profiles are at \(Z=0\) and \(\phi=30^{\circ}\) (OPA). The vertical dashed line indicates the position of the magnetic axis, \(R_{\rm axis}\).
underestimated with respect to the experiment. The power spectrum at \(\phi=10^{\circ}\) (not shown) is similar to the one at \(\phi=30^{\circ}\), both in the experiment and simulation, an expected feature since turbulence is field-aligned. Regarding the power spectrum of the electrostatic potential (right panel in Fig. 7), the simulation has two dominant coherent modes at \(k_{y}\rho_{s0}\approx 0.4\) and \(k_{y}\rho_{s0}\approx 0.7\), although only the first one is present in the experiment. In addition, the spectrum shows a slightly faster decay in the simulation than in the experiment.
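The \(k_{y}\) spectra of Fig. 7 can be obtained from data sampled along the reference surface as in the sketch below (an illustrative post-processing routine with our own naming, assuming periodicity of the signal along the closed poloidal arc).

```python
import numpy as np

def ky_power_spectrum(signal_y_t, dy, rho_s0):
    """Time-averaged poloidal-wavenumber power spectrum of fluctuations.

    signal_y_t : (n_y, n_t) fluctuations sampled along the arc length y of the
                 poloidal projection of the reference surface
    dy         : arc-length spacing [m];  rho_s0 : ion sound Larmor radius [m]
    """
    fluct = signal_y_t - signal_y_t.mean(axis=1, keepdims=True)
    power = np.mean(np.abs(np.fft.rfft(fluct, axis=0))**2, axis=1)
    ky = 2.0*np.pi*np.fft.rfftfreq(signal_y_t.shape[0], d=dy)
    return ky*rho_s0, power
```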
In Fig. 8, we show the density and potential fluctuation levels as a function of the poloidal angle on the reference surface, for the two considered poloidal planes. The
Figure 6: Snapshot of density, \(n\) (left) and electrostatic potential, \(\Phi\) (right), in the quasi-steady state of the GBS simulation. Top and bottom correspond to the toroidal planes \(\phi=10^{\circ}\) (TPA) and \(\phi=30^{\circ}\) (OPA), respectively. The reference surface is indicated with an orange line.
fluctuation level of the density is calculated as the standard deviation, \(\sigma_{n}\), normalized to the radially dependent equilibrium value, \(\left\langle n\right\rangle_{t}\). The normalizing factor of the potential is the equilibrium electron temperature, \(\left\langle T_{e}\right\rangle_{t}\), to avoid possible singularities. The simulation and experiment display a similar poloidal dependence at \(\phi=10^{\circ}\). However, the simulation underestimates the fluctuation levels by a factor of two or more. This is similar to previous validation studies with GBS and other fluid codes, where a lower level of fluctuations with respect to experiments is reported [15, 16]. At \(\phi=30^{\circ}\), the simulated fluctuations are also significantly smaller than in the experiment and, in addition, fluctuations do not peak at the same poloidal angle. Regarding electron temperature fluctuations, they are negligible in TJ-K [20]. In the simulation, although not negligible, they are smaller than density and potential fluctuations, as demonstrated in Fig. 8.
Figure 7: Power spectrum of density (left) and electrostatic potential (right) of the fluctuations. The spectra are computed by Fourier transforming density fluctuations along \(y\) at OPA (\(\phi=30^{\circ}\)). The simulation spectrum is normalized such that the maxima of experiment and simulation are the same.
The simulated radial turbulent \(\rm E\times B\) particle flux,
\[\Gamma_{E\times B}=\left\langle\widetilde{n}\widetilde{V}_{E\times B}^{r}\right\rangle _{t}=-\left\langle\frac{\widetilde{n}}{B}\left(\nabla\widetilde{\Phi}\times \mathbf{b}\right)_{r}\right\rangle_{t}, \tag{11}\]
where \(r\) denotes the direction normal to the flux surface, is compared with the experiment. The comparison is shown in Fig. 9. Both fluxes are normalized to their peak values since the experimental value of transport is based on the ion saturation current. The simulation and experimental results at \(\phi=10^{\circ}\) agree considerably better than at \(\phi=30^{\circ}\). At \(\phi=10^{\circ}\) the simulation retrieves the transport peak occurring at around \(\theta=\pi/2\), while the peak at \(\theta=0.2\pi\) is not retrieved at \(\phi=30^{\circ}\). This is partially explained by the different peak position of the fluctuation levels observed at
Figure 8: Density (left) and potential and electron temperature (right) fluctuation levels as a function of the poloidal angle, \(\theta\), at \(\phi=10^{\circ}\) (top) and \(\phi=30^{\circ}\) (bottom).
\(\phi=30^{\circ}\) (see Fig. 8). In addition, the cross-phase has an important role in setting the level of transport. In fact, the simulation shows a peak of the flux with a negative value, revealing that the phase-difference between density and potential also varies poloidally, hence having an important role in the turbulent transport asymmetries. It is worth mentioning that in the experiment there is a good correlation between the position where turbulent transport is maximum and the region where normal and geodesic curvatures are, respectively, negative and positive [19], something that could not be verified with the simulations.
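The role of the cross-phase mentioned above can be made explicit with a simplified slab-like estimate of Eq. (11) (illustrative only; the actual GBS diagnostic projects \(\nabla\widetilde{\Phi}\times\mathbf{b}\) onto the flux-surface normal):

```python
import numpy as np

def flux_and_crossphase(n_fl, phi_fl, dy, B):
    """Slab-like E x B flux estimate and density-potential cross-phase per k_y mode.

    n_fl, phi_fl : (n_y, n_t) density and potential fluctuations along y
                   (time averages assumed already removed)
    dy           : poloidal spacing [m];  B : field strength [T]
    """
    # v_r ~ -d(phi)/dy / B (centred, periodic differences along y)
    v_r = -(np.roll(phi_fl, -1, axis=0) - np.roll(phi_fl, 1, axis=0)) / (2.0*dy*B)
    flux = np.mean(n_fl * v_r, axis=1)               # <n~ v_r~>_t at each position

    # cross-phase between n~ and phi~ for each poloidal mode, averaged over time
    n_k, p_k = np.fft.rfft(n_fl, axis=0), np.fft.rfft(phi_fl, axis=0)
    cross_phase = np.angle(np.mean(n_k * np.conj(p_k), axis=1))
    return flux, cross_phase
```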
Coherent structures originating from inside the LCFS and propagating outwards into the scrape-off layer (SOL) are typically observed in TJ-K by means of a 2D movable probe and/or a fast camera [32, 33, 34]. The origin of these structures is attributed to drift-waves turbulence. The presence of coherent structures propagating from the closed field line region towards the SOL is visible also in the simulation. This is shown
Figure 9: Comparison of the \(\mathrm{E}\times\mathrm{B}\) flux as given by experiment and simulation at \(\phi=10^{\circ}\) (TPA) and \(\phi=30^{\circ}\) (OPA).
in Fig. 10, where a time sequence of the density fluctuations is shown. The formation of this coherent structure occurs in the region marked with an arrow within a few \(\mu s\), as in the experiments [32, 33].
## 5 Discussion and conclusions
GBS captures the essential turbulence properties of TJ-K, for instance the \(k_{y}\) spectra, as shown in Fig. 7. We note that both the spectrum and the fluctuation levels are robust to the source localization and strength. In fact, a simulation was carried out with density and temperature _ad-hoc_ sources of equal magnitude and localized around a closed flux surface in the proximity of the LCFS, showing a spectrum and fluctuation levels very similar to the ones presented here. In addition, although not presented in this paper, a simulation with a hydrogen plasma also shows good agreement with the experimental \(k_{y}\) spectrum.
The equilibrium profiles of temperature, potential and electric field, which reveal
Figure 10: Time sequence of density fluctuations in the simulation. The arrow points to a region where a large coherent structure, originated inside the LCFS, detaches and propagates towards the SOL within a few \(\mu s\).
significant differences between simulations and experiments, depend significantly on the details of the sources, whose localization and strength are, to some extent, uncertain. For example, using the _ad-hoc_ sources leads to an order-of-magnitude difference in the \(E_{R}\) values between simulation and experiment. The origin of the electric field is unclear, both in the experiment and in the simulation. Since the ion temperature is small, neoclassical effects do not play a role in setting the electric field, in contrast to the case of high temperature stellarator plasmas [35]. Moreover, a test where finite ion temperature effects are introduced in the system by setting \(T_{i0}/T_{e0}=0.1\), as in the experiment, shows similar results to the cold ion simulation. We expect that an improvement of the source model could yield a better agreement between the simulated and the experimental radial electric field. In fact, plasma breakdown in TJ-K occurs at the electron cyclotron resonance layer [36], and therefore the fast electrons produced at this resonance are responsible for the further ionization of the background gas. A proper modelling of this species could thus improve our comparison. Furthermore, as shown in Fig. 3, electron-neutral interactions can become important at temperatures larger than 10 eV. Since TJ-K temperatures are between 8 eV and 17 eV (see Fig. 5), it might be important to take into account such interactions in future investigations.
Finally, we note that the role of boundary conditions is expected to be important in simulations of small experiments such as TJ-K. The use of magnetic pre-sheath boundary conditions for all quantities except density and vorticity in the region between the LCFS and the wall could affect the plasma dynamics and, for example, the peaking positions
of the fluctuation levels.
To conclude, in this paper we present the first validation of the GBS code with a stellarator experiment. Overall, GBS captures the main features of turbulence, in particular the \(k_{y}\) spectrum. The fluctuation levels are underestimated but their dependence on the poloidal angle is partially retrieved. Regarding the equilibrium quantities, GBS can retrieve the correct magnitudes of the potential and electric field, whose profiles are sensitive to the strength and localization of the sources. The equilibrium density profile agrees well with the experiment, but in the case of the electron temperature the value on axis is not well captured. As experimentally observed in TJ-K and confirmed by the GBS simulations presented here, the turbulent transport is mainly due to a low-\(m\) mode, something that was also recently observed in a GBS simulation of a stellarator with an island divertor [8]. This contrasts with the typical plasma turbulence in tokamaks, where more broad-band turbulence is observed. Ultimately, the validation of the presence of these low-\(m\) modes calls for a detailed study of the differences between stellarators and tokamaks.
The authors thank Caoxiang Zhu and Matt Landreman for all the help with REGCOIL. The simulations presented herein were carried out in part at the Swiss National Supercomputing Centre (CSCS) under the projects ID s1118 and s1182, and in part on the CINECA Marconi super computer. This work has been carried out within the
framework of the EUROfusion Consortium, via the Euratom Research and Training Programme (Grant Agreement No 101052200 -- EUROfusion) and funded by the Swiss State Secretariat for Education, Research and Innovation (SERI). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union, the European Commission, or SERI. Neither the European Union nor the European Commission nor SERI can be held responsible for them.
|
2308.11580 | NIPG-DG schemes for transformed master equations modeling open quantum
systems | This work presents a numerical analysis of a master equation modeling the
interaction of a system with a noisy environment in the particular context of
open quantum systems. It is shown that our transformed master equation has a
reduced computational cost in comparison to a Wigner-Fokker-Planck model of the
same system for the general case of any potential. Specifics of a NIPG-DG
numerical scheme adequate for the convection-diffusion system obtained are then
presented. This will let us solve computationally the transformed system of
interest modeling our open quantum system. A benchmark problem, the case of a
harmonic potential, is then presented, for which the numerical results are
compared against the analytical steady-state solution of this problem. | Jose A. Morales Escalante | 2023-08-22T17:28:51Z | http://arxiv.org/abs/2308.11580v1 | # NIPG-DG schemes for transformed master equations modeling open quantum systems+
###### Abstract
This work presents a numerical analysis of a master equation modeling the interaction of a system with a noisy environment in the particular context of open quantum systems. It is shown that our transformed master equation has a reduced computational cost in comparison to a Wigner-Fokker-Planck model of the same system for the general case of any potential. Specifics of a NIPG-DG numerical scheme adequate for the convection-diffusion system obtained are then presented. This will let us solve computationally the transformed system of interest modeling our open quantum system. A benchmark problem, the case of a harmonic potential, is then presented, for which the numerical results are compared against the analytical steady-state solution of this problem.
Keywords: Open quantum systems, Master equations, NIPG, DG.
## 1 Introduction
Open quantum systems model the interaction (energy exchange, for example) of a system with a usually larger environment in a quantum setting. They are mathematically expressed in quantum information science by master equations in Lindblad form (when a Markovian dynamics in the interaction is assumed) for the density matrix of the system [20] (to describe, for example, a noisy quantum channel). These Lindblad master equations can be converted into Wigner-Fokker-Planck (WFP) equations by applying to them a Wigner transform [3, 6]. The density matrix is converted under this transformation into the Wigner function representing the system. The Wigner-Fokker-Planck formulation is then, mathematically, completely equivalent to the Lindblad master equation description of open quantum systems. The quantum Fokker-Planck operator terms (which are the analogs in the Wigner formulation of the Lindblad operators of the master equation) represent the diffusive behavior of the system plus a
friction term, both due to the interaction with the environment (via energy exchanges, for example), as mentioned above.

The following dimensionless model of an open quantum system will be considered first, given by the WFP equation with an arbitrary potential as below [3, 8]
The following dimensionless model of an open quantum system will be first considered, which is given by the WFP equation with an arbitrary potential as below [3, 8]
\[w_{t}+k\cdot\nabla_{x}w+\Theta[V]w=Q_{FP}\{w\},\]
where \(Q_{FP}\) is the quantum Fokker-Planck operator that models the interaction of the system with its environment (representing it in the Wigner picture by a diffusion term plus a friction one, as abovementioned). The following particular case for the aforementioned operator will be considered,
\[Q_{FP}\{w\}=\Delta_{k}w+\nabla_{k}\cdot(kw)+\Delta_{x}w,\]
and the pseudo-differential operator (non-local) related to the potential \(V\) is given by
\[\Theta[V]\{w\}=\frac{-i}{(2\pi)^{d}}\int_{\mathcal{R}^{2d}}\delta V(x,\eta)w( x,k^{\prime},t)e^{i\eta\cdot(k-k^{\prime})}dk^{\prime}d\eta,\]
\[\delta V(x,\eta)=V(x+\eta/2)-V(x-\eta/2),\]
which can also be represented as
\[\Theta[V]\{w\}=\frac{-i}{(2\pi)^{d}}\int_{\mathcal{R}^{d}}[V(x+\eta/2)-V(x- \eta/2)]\mathcal{F}\{w\}e^{i\eta\cdot k}d\eta,\]
with \(\mathcal{F}\{w\}\) the Fourier transform of our Wigner function \(w\),
\[\mathcal{F}\{w\}(x,\eta,t)=\int_{\mathcal{R}^{d}}w(x,k^{\prime},t)e^{-i\eta \cdot k^{\prime}}dk^{\prime}.\]
The pseudo-differential operator is the computationally costliest term in a simulation of the WFP system, due to the extra integration needed to be performed in order to compute it [8]. However, it is clear that, if a Fourier transform is applied to it, it will amount just to a multiplication by convolution theorems. Therefore, this motivates the interest in applying a Fourier transform to the Wigner function to simply obtain a density matrix in the position basis, with conveniently transformed coordinates. If a Fourier transform is then applied to the WFP system, a master equation will simply be obtained in convenient coordinates that simplifies the expression of the costliest term, namely the one related to the potential. A similar evolution equation for a density in transformed coordinates has been presented in [12] in the context of electron ensembles in semiconductors. The above-mentioned is justified by recalling that the Wigner function is defined by the Fourier transform below,
\[w(x,k,t)=\int_{\mathcal{R}^{d}}\rho(x+\eta/2,x-\eta/2,t)\exp(-i\eta\cdot k)d\eta, \tag{1}\]
particularly as the Fourier transform of the function
\[u(x,\eta,t)=\rho(x+\eta/2,x-\eta/2,t), \tag{2}\]
which is a density matrix in terms of conveniently symmetrized position coordinates. Therefore,
\[w=\mathcal{F}\{u\},\quad u=\mathcal{F}^{-1}\{w\}, \tag{3}\]
and then
\[u(x,\eta,t)=\frac{1}{(2\pi)^{d}}\int_{\mathcal{R}^{d}}w(x,k,t)\exp(ik\cdot\eta) dk= \tag{4}\]
\[=\frac{1}{(2\pi)^{d}}\int_{\mathcal{R}^{d}}w(x,k,t)e^{-ik\cdot[-\eta]}dk=\frac {\mathcal{F}\{w\}(x,-\eta,t)}{(2\pi)^{d}}. \tag{5}\]
So our Fourier transform is just proportional to the density matrix evaluated at conveniently transformed position coordinates,
\[\mathcal{F}\{w\}(x,\eta,t)=(2\pi)^{d}\rho(x-\eta/2,x+\eta/2,t). \tag{6}\]
Different numerical methods have been used in the computational simulation of models for open quantum systems. Regarding the computational modeling of open quantum systems via Wigner functions, stochastic methods such as Monte Carlo simulations of Fokker-Planck equations in Quantum Optics have been reported [5], as well as numerical discretizations of velocities for the stationary Wigner equation [7]. However, there is an inherent stochastic error in the solution of PDEs by Monte Carlo methods (this error decreases slowly, as \(N^{-1/2}\), with the number of samples \(N\)). There is also literature on the Wigner numerical simulation of quantum tunneling phenomena [21], as well as work on operator splitting methods for Wigner-Poisson [4] and on the semidiscrete analysis of the Wigner equation [10]. The nature of the phenomena in open quantum systems, however, calls for a numerical method that reflects the physics of both transport and diffusion-like noise (for Markovian interactions); hence the interest in Discontinuous Galerkin (DG) methods that can mimic convection-diffusion problems numerically, such as Local DG or Interior Penalty methods, for example. There is work in [8] on an adaptable DG scheme for Wigner-Fokker-Planck, where a Non-symmetric Interior Penalty Galerkin method is used for the computational modeling. The main drawback of a Wigner model for open quantum systems, though, is that, except for the case of harmonic potentials, the computationally costliest term in a Wigner formulation is the pseudo-differential integral operator, whereas a simple Fourier transformation of the Wigner equation renders this term as a plain multiplication between the related density matrix and the (non-harmonic) potential, so that it is no longer computationally costly in a master equation setting. There are literature reports of the application of Discontinuous Galerkin methods to Quantum Liouville-type equations [9], as well as numerical simulations of the Quantum Liouville-Poisson system [26]. However, these equations consider only the quantum transport part of the problem, since diffusion does not appear in Liouville transport, so, to the author's knowledge, environment-induced noise has not yet been modeled in a DG setting for master equations.
The contribution of the work presented in this paper is therefore to provide a computational solver for open quantum systems modeled by transformed master equations, whose underlying numerics inherently reflects the convection-diffusion nature of the physical phenomena of interest. The resulting system of master equations is solved by means of a Non-symmetric Interior Penalty Discontinuous Galerkin method that handles both the quantum transport and the diffusive noise due to environment interactions, at a lower computational cost than a Wigner model of the same open quantum problem.
The summary of our work is the following: convolution theorems of Fourier transforms will be used to convert our pseudo-differential operator into a product, thus transforming the WFP into a master equation for the density matrix in symmetrized position coordinates. The remaining terms (transport and diffusion) will be analyzed, and the transformed master equation will be studied numerically via an NIPG-DG method, as an initial/boundary value problem (IBVP) in a 2D position space; the transformed master equation formulation conveniently reduces the computational cost of the time-evolution problem with respect to WFP. Finally, convergence studies are presented for a problem with a harmonic potential, for which the analytical steady-state solution is known (the harmonic oscillator case), comparing it to the numerical solution obtained via the NIPG-DG method.
## 2 Math Model: Transformed Master Equation for open quantum systems
It is known that the pseudo-differential operator has the following possible representation,
\[\Theta[V]\{w\}=-w\star\Im(F^{-1}[\delta V]),\]
\[F^{-1}\{F\{w\}\cdot F\{\Im(F^{-1}[\delta V])\}\}=w\star\Im(F^{-1}[\delta V])=- \Theta[V],\]
which simplifies to
\[\Theta[V]\{w\}=-F^{-1}\{\widehat{w}\cdot\frac{\delta V-F\{(F^{-1}[\delta V])^{ *}\}}{2i}\}.\]
Since it holds that
\[F\{(F^{-1}[\delta V])^{*}\}=F\{\frac{\int_{\mathcal{R}^{d}}[V(x+\frac{\eta^{ \prime}}{2})-V(x-\frac{\eta^{\prime}}{2})]e^{-ik\cdot\eta^{\prime}}d\eta^{ \prime}}{(2\pi)^{d}}\},\]
\[F\{\frac{\int_{\mathcal{R}^{d}}[V(x-\frac{\eta^{\prime}}{2})-V(x+\frac{\eta^{ \prime}}{2})]e^{ik\cdot\eta^{\prime}}d\eta^{\prime}}{(2\pi)^{d}}\}=-\delta V(x,\eta),\]
then
\[w_{t}+k\cdot\nabla_{x}w+iF^{-1}\{\widehat{w}\cdot\delta V\}=\Delta_{k}w+\nabla _{k}\cdot(kw)+\Delta_{x}w,\]
and applying a Fourier transform to WFP, it acquires the form
\[\widehat{w_{t}}+\widehat{k\cdot\nabla_{x}w}+i\widehat{w}\cdot\delta V=\widehat{ \Delta_{k}w}+\widehat{\nabla_{k}\cdot(kw)}+\Delta_{x}\widehat{w}.\]
The forward Fourier transform of the WFP equation is then the transformed master equation below,
\[\widehat{w}_{t}+i\nabla_{\eta}\cdot\nabla_{x}\widehat{w}+i\widehat{w}\cdot \delta V=-\eta^{2}\widehat{w}-\eta\cdot\nabla_{\eta}\widehat{w}+\Delta_{x} \widehat{w}. \tag{7}\]
It is clear that the pseudo-differential operator over the potential represented by a convolution has been transformed back into a product of functions by Fourier transforming the WFP equation. The main change is that the transport term is represented now as a derivative of higher order (\(k\) was exchanged for \(i\nabla_{\eta}\) when transforming into the Fourier space).
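To make the computational advantage explicit, the potential term of Eq. (7) reduces to the pointwise product sketched below (a minimal illustration with our own naming; no quadrature over \(k^{\prime}\) and \(\eta\) is needed anymore):

```python
import numpy as np

def potential_term(w_hat, x, eta, V):
    """Potential term i*w_hat*deltaV of Eq. (7): a local pointwise product.

    w_hat : (n_x, n_eta) transformed Wigner function (density matrix in the
            symmetrized coordinates);  V : callable potential V(x).
    """
    X, ETA = np.meshgrid(x, eta, indexing="ij")
    deltaV = V(X + 0.5*ETA) - V(X - 0.5*ETA)
    return 1j * w_hat * deltaV

# For the harmonic benchmark V(x) = x**2/2, deltaV reduces to x*eta, so the
# non-local pseudo-differential operator of the Wigner picture costs only one
# multiplication per grid point instead of an extra integration.
```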
### Decomposition into real and imaginary parts
If one starts with
\[\widehat{w}_{t}+\nabla_{x}\cdot(i\nabla_{\eta}\widehat{w})+\eta\cdot\nabla_{ \eta}\widehat{w}=\widehat{w}\cdot\left(\frac{\delta V}{i}-\eta^{2}\right)+ \nabla_{x}^{2}\widehat{w},\]
it can be noticed that the transport term involves a complex number, so \(\widehat{w}=R+iI\) will be decomposed into real and imaginary parts to rather better understand this term. So
\[[R+iI]_{t}+\nabla_{x}\cdot(i\nabla_{\eta}[R+iI])+\eta\cdot\nabla_{\eta}[R+iI]=\]
\[[R+iI]\cdot\left(\frac{\delta V}{i}-\eta^{2}\right)+\nabla_{x}^{2}[R+iI],\]
then
\[R_{t}+iI_{t}+\nabla_{x}\cdot\nabla_{\eta}[-I+iR]+\eta\cdot\nabla_{\eta}[R+iI]=\]
\[[R+iI]\cdot\left(\frac{\delta V}{i}\right)-[R+iI]\cdot\left(\eta^{2}\right)+ \nabla_{x}^{2}[R+iI].\]
Let's focus now on the benchmark case of dimension \(d=1\), for which many terms simplify. In this case
\[R_{t}+iI_{t}+\partial_{x}\partial_{\eta}[-I+iR]+\eta\partial_{\eta}[R+iI]=\]
\[[R+iI]\cdot\left(\frac{\delta V}{i}\right)-[R+iI]\eta^{2}+\nabla_{x}^{2}[R+iI].\]
Then
\[R_{t}+iI_{t}+\partial_{x\eta}[-I+iR]+\eta\partial_{\eta}[R+iI]=[I-iR]\cdot \left(\frac{\delta V}{2}\right)\]
\[+[I-iR]\cdot\left(\frac{\delta V}{2}\right)-[R+iI]\eta^{2}+\nabla_{x}^{2}[R+iI],\]
so
\[R_{t}+iI_{t}+\partial_{x\eta}[-I+iR]+\eta\partial_{\eta}[R+iI]=[I-iR]\cdot( \delta V)\]
\[-[R+iI]\eta^{2}+\nabla_{x}^{2}[R+iI],\]
and now the equation can be decomposed into real and imaginary parts. The real part of the equation is
\[R_{t}-\partial_{x\eta}I+\eta\partial_{\eta}R=I\delta V-R\eta^{2}+\nabla_{x}^{2 }R,\]
and the imaginary part is
\[I_{t}+\partial_{x\eta}R+\eta\partial_{\eta}I=-R\cdot(\delta V)-I\eta^{2}+ \nabla_{x}^{2}I,\]
where, since \(d=1\), the Laplacian reduces to \(\partial_{x}^{2}\), so we have the following system
\[R_{t}+\eta\partial_{\eta}R=I\left(\delta V\right)-R\eta^{2}+\partial_{x}^{2}R +\partial_{x\eta}I,\]
\[I_{t}+\eta\partial_{\eta}I=-R\cdot(\delta V)-I\eta^{2}+\partial_{x}^{2}I- \partial_{x\eta}R.\]
The system above has the transport terms on the left-hand side. The "source", decay, and diffusive terms represented by second-order partials are on the right-hand side. If the transport is to be expressed in divergence form, one can pass the extra term to the other side, rendering
\[R_{t}+\partial_{\eta}(\eta R)=I\left(\delta V\right)+(1-\eta^{2})R+\partial_{ x}^{2}R+\partial_{x\eta}I,\]
\[I_{t}+\partial_{\eta}(\eta I)=-R\cdot(\delta V)+(1-\eta^{2})I+\partial_{x}^{2} I-\partial_{x\eta}R.\]
These convective-diffusive systems (where the transport is only on \(\eta\)) can be expressed in matrix form. The matrix system in gradient form is
\[\partial_{t}\left(\begin{array}{c}R\\ I\end{array}\right)+\eta\partial_{\eta}\left(\begin{array}{c}R\\ I\end{array}\right)=-\left(\begin{array}{cc}\eta^{2}&-\delta V\\ \delta V&\eta^{2}\end{array}\right)\left(\begin{array}{c}R\\ I\end{array}\right)\]
\[+\left(\begin{array}{cc}\partial_{x}&\partial_{\eta}\\ -\partial_{\eta}&\partial_{x}\end{array}\right)\left(\begin{array}{c}\partial _{x}R\\ \partial_{x}I\end{array}\right),\]
or in divergence form as
\[\partial_{t}\left(\begin{array}{c}R\\ I\end{array}\right)+\partial_{\eta}\{\left(\begin{array}{c}R\\ I\end{array}\right)\otimes(0,\eta)\}=\]
\[-\left(\begin{array}{cc}\eta^{2}-1&-\delta V\\ \delta V&\eta^{2}-1\end{array}\right)\left(\begin{array}{c}R\\ I\end{array}\right)+\left(\begin{array}{cc}\partial_{x}&\partial_{\eta}\\ -\partial_{\eta}&\partial_{x}\end{array}\right)\left(\begin{array}{c} \partial_{x}R\\ \partial_{x}I\end{array}\right).\]
In both of them, it is evident the left-hand side has a convective transport structure and the right-hand side has matrix terms related to decay, source
(partly due to the potential), and diffusive behavior. If we define \(u=(R,I)^{T}\) we can express the system in gradient form as
\[\partial_{t}u+\partial_{(x,\eta)}u\cdot\left(\begin{array}{c}0\\ \eta\end{array}\right)=\left(\begin{array}{cc}\partial_{x}&\partial_{\eta}\\ -\partial_{\eta}&\partial_{x}\end{array}\right)\partial_{x}u-\left(\begin{array} []{cc}\eta^{2}&-\delta V\\ \delta V&\eta^{2}\end{array}\right)u,\]
or equivalently in divergence form as
\[\partial_{t}u+\partial_{(x,\eta)}\cdot\{u\otimes(0,\eta)\}=-\left(\begin{array} []{cc}\eta^{2}-1&-\delta V\\ \delta V&\eta^{2}-1\end{array}\right)u\]
\[+\partial_{(x,\eta)}\cdot\left(\partial_{x}u,J\partial_{x}u\right),\]
defining the matrix
\[J=\left(\begin{array}{cc}0&1\\ -1&0\end{array}\right).\]
So the transport is only vertical, and the gradient in the diffusion term is only related to \(x\)-partials.
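For illustration, the right-hand side of this system in divergence form can be evaluated with finite differences on a tensor grid. This is only a standalone NumPy sketch (the paper uses a DG discretization instead), and it assumes the harmonic benchmark potential \(V(x)=x^{2}/2\), for which \(\delta V(x,\eta)=x\eta\):

```python
import numpy as np

# Illustrative finite-difference evaluation of the divergence-form right-hand
# side for (R, I); grid sizes and domain are placeholders.
nx, neta = 64, 64
x = np.linspace(-6.0, 6.0, nx)
eta = np.linspace(-6.0, 6.0, neta)
X, ETA = np.meshgrid(x, eta, indexing="ij")
dx, deta = x[1] - x[0], eta[1] - eta[0]
delta_V = X * ETA  # V(x) = x^2/2 gives V(x + eta/2) - V(x - eta/2) = x*eta

def rhs(R, I):
    """R_t and I_t: eta-transport, source/decay, x-diffusion and mixed terms."""
    d_x = lambda F: np.gradient(F, dx, axis=0)
    d_eta = lambda F: np.gradient(F, deta, axis=1)
    Rt = I * delta_V + (1.0 - ETA**2) * R + d_x(d_x(R)) + d_x(d_eta(I)) - d_eta(ETA * R)
    It = -R * delta_V + (1.0 - ETA**2) * I + d_x(d_x(I)) - d_x(d_eta(R)) - d_eta(ETA * I)
    return Rt, It

Rt, It = rhs(np.exp(-X**2 - ETA**2), np.zeros_like(X))  # toy input fields
```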
## 3 Methodology: DG Formulation for Master Equations in transformed position coordinates
A NIPG-DG scheme for a vector-valued unknown will be presented for our master equation, where the test function is a vector as well; taking the inner product between these vector functions yields the weak form of the related NIPG-DG methodology.
### NIPG-DG method for a transformed Master Equation
Below we describe how to solve the system resulting from the transformed master equation, in convenient position coordinates, by means of a Non-symmetric Interior-Penalty Galerkin (NIPG) DG method, as in [8, 22] for elliptic equations and Wigner-Fokker-Planck equations, respectively. In this type of DG method, penalty terms are introduced in order to account for the diffusion when treated by DG methodologies. More information can be found in the references mentioned above.
The NIPG-DG formulation (at the semi-discrete level) for the transformed master equation system is then the following: Find \(u=\left(R,I\right)^{T}\) such that, for all test functions \(v=\left(w,z\right)^{T}\) the following holds,
\[\partial_{t}(v,u)=a(v,u), \tag{8}\]
with the bilinear form (which also includes the penalty terms) given by
\[a(v,u) = (\nabla w,A\nabla R-bR)+(\nabla z,A\nabla I-bI)+(\nabla w,B\nabla I)-(z,B\nabla R)\] \[+(w,I\delta V)+(w,R\eta^{2})+(z,R\delta V)+(z,I\eta^{2})\] \[+(\alpha/h)\langle[wn],A[Rn]\rangle+(\alpha/h)\langle[wn],B[In]\rangle+(\alpha/h)\langle[wn],A[Rn]\rangle\] \[-\langle[wn],A\{\nabla R\}\rangle-\langle[wn],B\{\nabla R\}\rangle\] \[-\langle\{\nabla w\},A[Rn]\rangle-\langle\{\nabla w\},B[In]\rangle\] \[+(\alpha/h)\langle[zn],A[In]\rangle-(\alpha/h)\langle[zn],B[Rn]\rangle-\langle[zn],A\{\nabla I\}\rangle\] \[+\langle[zn],B\{\nabla R\}\rangle-\langle\{\nabla z\},A[In]\rangle+\langle\{\nabla z\},B[Rn]\rangle\] \[+\langle[w],[\widehat{b}R]\rangle+\langle w,\widehat{b}R\rangle+\langle[z],[\widehat{b}I]\rangle+\langle z,\widehat{b}I\rangle,\]
where \([\cdot]\) stands for jumps, \(\{\cdot\}\) stands for averages, \((\cdot,\cdot)\) stands for volume integrals, \(\langle\cdot,\cdot\rangle\) stands for surface integrals, \(n\) stands for outward unit normals, \(b=(0,\eta)^{T}\) stands for the transport vector in our 2D position domain, and \(\widehat{b}=(b\cdot n+|b\cdot n|)/2\) is related to the upwind flux rule.
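The upwind factor \(\widehat{b}\) is straightforward to evaluate; the following small sketch (illustrative only) computes it for the transport vector \(b=(0,\eta)^{T}\) and a given face normal \(n\):

```python
import numpy as np

# Upwind factor used in the bilinear form: b_hat = (b.n + |b.n|)/2 keeps only
# the outflow part of the transport vector b = (0, eta) across a face.
def upwind_factor(eta, n):
    b_dot_n = np.dot(np.array([0.0, eta]), np.asarray(n, dtype=float))
    return 0.5 * (b_dot_n + abs(b_dot_n))

print(upwind_factor(eta=1.5, n=(0, 1)))   # outflow face: 1.5
print(upwind_factor(eta=1.5, n=(0, -1)))  # inflow face: 0.0
```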
We discretize the time evolution by applying an implicit theta method. In order to advance from \(u_{0}\) to \(u\) in the next time-step, the method below is then used,
\[(v,u-u_{0})/\Delta t=\theta a(v,u)+(1-\theta)a(v,u_{0}), \tag{10}\]
where the value \(\theta=1/2\) is chosen, since it is known that, among all possible values of \(\theta\in[0,1]\), it gives the highest order of convergence for implicit theta time-evolution methods [11].
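For a generic semi-discrete system \(M\,du/dt=Au\) (with \(M\) the mass matrix and \(A\) the matrix induced by the bilinear form), one step of the theta method in Eq. (10) amounts to a linear solve. The sketch below is illustrative only and uses toy matrices rather than the assembled DG operators:

```python
import numpy as np

# One step of (v, u - u0)/dt = theta*a(v, u) + (1 - theta)*a(v, u0)
# for a generic semi-discrete system M du/dt = A u.
def theta_step(M, A, u0, dt, theta=0.5):
    lhs = M - theta * dt * A
    rhs = (M + (1.0 - theta) * dt * A) @ u0
    return np.linalg.solve(lhs, rhs)

# Toy usage: scalar decay du/dt = -u, exact solution exp(-t).
M = np.eye(1)
A = -np.eye(1)
u, dt = np.array([1.0]), 0.1
for _ in range(10):
    u = theta_step(M, A, u, dt)
print(u[0], np.exp(-1.0))  # theta = 1/2 (Crank-Nicolson) is second-order accurate
```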
## 4 Computational Cost of Master Equations vs WFP via NIPG-DG methods
Our particular case of the WFP equation has the form
\[w_{t}+k\cdot\nabla_{x}w-\]
\[\frac{i}{(2\pi)^{d}}\int_{\mathcal{R}^{2d}}[V(x+\frac{\eta}{2})-V(x-\frac{\eta }{2})]w(x,k^{\prime},t)e^{i\eta\cdot(k-k^{\prime})}dk^{\prime}d\eta\]
\[=\nabla_{k}\cdot(kw)+\Delta_{x}w+\Delta_{k}w.\]
If the size of the global basis for \(w(x,k,t)=\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{p=0}^{P}\chi_{ij}(x,k)c_{p}^{ij}(t )\phi_{p}^{ij}(x,k)\) is \(N=(P+1)nm\), where \(P+1\) is the dimension of our local polynomial basis, \(n\) the number of intervals in \(x\), \(m\) the number of intervals in \(k\), the dimension of the test space \(\mbox{span}\{v_{k}(x,k)\}_{k=1}^{N}=\mbox{span}\{\{\{v_{p}^{ij}(x,k)\}_{p=0}^{ P}\}_{i=1}^{n}\}_{j=1}^{m}\) will be the same, and an NIPG-DG formulation for the Wigner-Fokker-Planck equation would look like ( \(1\leq q\leq N\) )
\[\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{p=0}^{P}(v_{q}^{ij}|\phi_{p}^{ij})\frac{dc_{p}^{ij}}{dt}\chi_{ij}-\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{p=0}^{P}(\nabla_{x}v_{q}^{ij}|k\phi_{p}^{ij})c_{p}^{ij}\chi_{ij}-\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{p=0}^{P}(\nabla_{k}v_{q}^{ij}\cdot k|\phi_{p}^{ij})c_{p}^{ij}\chi_{ij}\] \[+\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{p=0}^{P}\left\langle[v_{q}^{ij}]|[k\phi_{p}^{ij}]\right\rangle c_{p}^{ij}\chi_{ij}+\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{p=0}^{P}\left\langle v_{q}^{ij}|k\phi_{p}^{ij}\right\rangle c_{p}^{ij}\chi_{ij}\] \[+(v_{q}^{ij}|\int_{{\cal R}^{2d}}\delta V(x,\eta)\phi(x,k^{\prime})\frac{e^{i\eta\cdot(k-k^{\prime})}}{i(2\pi)^{d}}dk^{\prime}d\eta)c_{p}^{ij}\] \[=-\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{p=0}^{P}(\nabla_{x}v_{q}^{ij}|\nabla_{x}\phi_{p}^{ij}c_{p}^{ij})\chi_{ij}-\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{p=0}^{P}(\nabla_{k}v_{q}^{ij}|\nabla_{k}\phi_{p}^{ij})c_{p}^{ij}\chi_{ij}\] \[+\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{p=0}^{P}(\alpha/h)\langle[v_{q}^{ij}n]|[c_{p}^{ij}\phi_{p}^{ij}n]\rangle-\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{p=0}^{P}\langle[v_{q}^{ij}n]|\{\nabla\phi_{p}^{ij}c_{p}^{ij}\}\rangle-\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{p=0}^{P}\langle\{\nabla v_{q}^{ij}\}|[\phi_{p}^{ij}c_{p}^{ij}n]\rangle\]
so we have, at the semi-discrete level, \(11N\) operations related to matrix-vector multiplications, but we must also consider the cost of computing the matrix elements. Going into the details of the cost of each term, and assuming we use a piece-wise polynomial basis as traditionally used in DG (nonzero only inside a given element), computing all terms except the pseudo-differential operator requires \(O(10nm[P+1]^{2})\) integrations (due to the piece-wise nature of the spaces). Specifically, without considering the pseudo-differential term, we need \(5nm[P+1]^{2}\) volume integrations and \(5nm[P+1]^{2}\) surface integrals. The pseudo-differential operator involves an integral over the phase space where the potential and the real and imaginary parts of the complex exponential are involved,
\[\sum_{r=1}^{n}\sum_{s=1}^{m}\sum_{p=0}^{P}(v_{q}^{ij}(x,k)|\int_{\eta_{r-}}^{ \eta_{r+}}\int_{k_{s-}}^{k_{s+}}\delta V\phi_{p}^{rs}(x,k^{\prime})\frac{e^{i \eta\cdot(k-k^{\prime})}}{i(2\pi)^{d}}dk^{\prime}d\eta)c_{p}^{rs}\chi_{rs},\]
due to the non-local nature of the double integrals, so there are possible extra overlaps in comparison to the other terms. Because the basis consists of piece-wise functions, namely polynomials multiplied by characteristic functions, an overlap in \(x\) is required to obtain nonzero terms, so only \(i=r\) contributes; the overlap in \(k\), however, always occurs, since the integration runs over all momentum regions. Our integral reduces to
\[\sum_{r=1}^{n}\sum_{s=1}^{m}\sum_{p=0}^{P}(v_{q}^{ij}\chi^{ij}|\int_{\eta_{r-}} ^{\eta_{r+}}\int_{k_{s-}}^{k_{s+}}\delta V\phi_{p}^{rs}(x,k^{\prime})\frac{e^{ i\eta\cdot(k-k^{\prime})}}{i(2\pi)^{d}}dk^{\prime}d\eta)c_{p}^{rs}\chi_{rs}\]
\[=\sum_{s=1}^{m}\sum_{p=0}^{P}(v_{q}^{ij}\chi^{j}(k)|\int_{\eta_{i-}}^{\eta_{i+}}\int_{k_{s-}}^{k_{s+}}\delta V\phi_{p}^{is}(x,k^{\prime})\frac{e^{i\eta\cdot(k-k^{\prime})}}{i(2\pi)^{d}}dk^{\prime}d\eta)c_{p}^{is}\chi_{is},\]
so the computational cost of this term is in principle
\[nm^{2}[P+1]^{2}=N^{2}/n,\]
which is \(m\) times the usual cost of the other terms. This proves that the pseudo-differential term is the computationally costliest in the WFP numerics.
On the other hand, the computational cost of our transformed master equation in convenient coordinates can be analyzed by considering
\[u=(R,I)=\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{p=0}^{P}\chi_{ij}(a_{p}^{ij}(t)\phi_{ p}^{ij},d_{p}^{ij}(t)\phi_{p}^{ij})\]
and
\[v_{q}^{ij}=(w_{q}^{ij},z_{q}^{ij})\]
with equivalent trial/test spaces (respectively)
\[\mbox{span}\{\phi_{p}^{ij}\}_{i=1,j=1,p=0}^{i\leq n,j\leq m,p\leq P},\]
\[\mbox{span}\{v_{p}^{ij}\}_{i=1,j=1,p=0}^{i\leq n,j\leq m,p\leq P}\]
for this system formulation. We have then that the NIPG-DG weak form can be expressed as
\[\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{p=0}^{P}(v_{q}^{ij}|\phi_{p}^{ij})\frac{da_ {p}^{ij}}{dt}\chi_{ij}+\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{p=0}^{P}(v_{q}^{ij}| \phi_{p}^{ij})(d_{p}^{ij})^{\prime}\chi_{ij}=\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_ {p=0}^{P}(\nabla w_{q}^{ij}|A\nabla\phi_{p}^{ij}a_{p}^{ij}-ba_{p}^{ij}\phi_{p }^{ij})\]
\[+\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{p=0}^{P}(\nabla z_{q}^{ij}|A\nabla\phi_{p}^ {ij}d_{p}^{ij}-bd_{p}^{ij}\phi_{p}^{ij})+\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{p=0 }^{P}(\nabla w_{q}^{ij}|B\nabla\phi_{p}^{ij}d_{p}^{ij})-\sum_{i=1}^{n}\sum_{j= 1}^{m}\sum_{p=0}^{P}(z_{q}^{ij}|B\nabla\phi_{p}^{ij}a_{p}^{ij})\]
\[+\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{p=0}^{P}(w_{q}^{ij}|d_{p}^{ij}\phi_{p}^{ij }\delta V)+\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{p=0}^{P}(w_{q}^{ij}|a_{p}^{ij} \phi_{p}^{ij}\eta^{2})+\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{p=0}^{P}(z_{q}^{ij}|a _{p}^{ij}\phi_{p}^{ij}\delta V)+(z_{q}^{ij}|d_{p}^{ij}\phi_{p}^{ij}\eta^{2})\]
\[+\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{p=0}^{P}(\alpha/h)\langle[w_{q}^{ij}n]|A[a_{ p}^{ij}\phi_{p}^{ij}n]\rangle+\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{p=0}^{P}( \alpha/h)\langle[w_{q}^{ij}n]|B[d_{p}^{ij}\phi_{p}^{ij}n]\rangle\]
\[+\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{p=0}^{P}(\alpha/h)\langle[w_{q}^{ij}n]|A[a_{p}^{ ij}\phi_{p}^{ij}n]\rangle-\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{p=0}^{P}\langle[w_{q}^{ ij}n]|A\{\nabla\phi_{p}^{ij}a_{p}^{ij}\}\rangle-\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{p=0}^{P} \langle[w_{q}^{ij}n]|B\{\nabla\phi_{p}^{ij}a_{p}^{ij}\}\rangle\]
\[-\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{p=0}^{P}\langle\{\nabla w_{q}^{ij}\}|A[a_{p}^{ij}\phi_{p}^{ij}n]\rangle-\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{p=0}^{P}\langle\{\nabla w_{q}^{ij}\}|B[d_{p}^{ij}\phi_{p}^{ij}n]\rangle+\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{p=0}^{P}(\alpha/h)\langle[z_{q}^{ij}n]|A[d_{p}^{ij}\phi_{p}^{ij}n]\rangle\]
\[-\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{p=0}^{P}(\alpha/h)\langle[z_{q}^{ij}n]|B[a _{p}^{ij}\phi_{p}^{ij}n]\rangle-\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{p=0}^{P} \langle[z_{q}^{ij}n]|A\{\nabla\phi_{p}^{ij}d_{p}^{ij}\}\rangle+\sum_{i=1}^{n} \sum_{j=1}^{m}\sum_{p=0}^{P}\langle[z_{q}^{ij}n]|B\{\nabla\phi_{p}^{ij}a_{p}^{ ij}\}\rangle\]
\[-\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{p=0}^{P}\langle\{\nabla z_{q}^{ij}\}|A[ \phi_{p}^{ij}d_{p}^{ij}n]\rangle+\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{p=0}^{P} \langle\{\nabla z_{q}^{ij}\}|B[a_{p}^{ij}\phi_{p}^{ij}n]\rangle+\sum_{i=1}^{n} \sum_{j=1}^{m}\sum_{p=0}^{P}\langle[w_{q}^{ij}]|[\widehat{b}a_{p}^{ij}\phi_{p }^{ij}]\rangle\]
\[+\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{p=0}^{P}\langle w_{q}^{ij}[\widehat{b}a_{p} ^{ij}\phi_{p}^{ij}\rangle+\sum_{i=1}^{n}\sum_{j=1}^{m}\sum_{p=0}^{P}\langle[z _{q}^{ij}]|[\widehat{b}d_{p}^{ij}\phi_{p}^{ij}]\rangle+\sum_{i=1}^{n}\sum_{j=1 }^{m}\sum_{p=0}^{P}\langle z_{q}^{ij}|[\widehat{b}d_{p}^{ij}\phi_{p}^{ij}\rangle.\]
So we have \(29N\) operations related to matrix-vector multiplications, but to compute the matrices we only need \(12nm[P+1]^{2}\) volume integrations and \(17nm[P+1]^{2}\) surface integrals, because no term involves a convolution or any other extra integration. The difference in integration cost versus the WFP computation depends on the sign of \((12nm-5nm)+(17nm-5nm)-nm^{2}=(7+12-m)nm\). Therefore, unless the meshing in \(k\) is coarse enough that \(m\leq 19\), we have \(m>19\), and regarding the computation of the matrix elements (which involves the largest number of operations) our transformed master equation is cheaper to solve than the WFP equation.
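The integration-count comparison above can be tabulated with a small helper (illustrative only; the grid sizes below are placeholders):

```python
# WFP: ~10*n*m*(P+1)^2 local integrations plus ~n*m^2*(P+1)^2 for the
# pseudo-differential term; transformed master equation: (12+17)*n*m*(P+1)^2.
def integration_counts(n, m, P):
    local = (P + 1) ** 2 * n * m
    wfp = 10 * local + (P + 1) ** 2 * n * m * m   # the convolution adds the extra m factor
    master = (12 + 17) * local
    return wfp, master

for m in (8, 19, 64, 128):
    wfp, master = integration_counts(n=64, m=m, P=1)
    print(m, wfp, master, "master cheaper" if master < wfp else "WFP cheaper or equal")
# The break-even point m = 19 matches the (19 - m)*n*m criterion derived above.
```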
## 5 Numerical Results
In this section we present a benchmark for our numerical solver (developed using FEniCS [1], [2], [13], [14], [15], [16], [17], [18], [19], [23], [24], [27]) against a problem for which the analytical form of the steady state solution is known: the case of a harmonic potential, \(V(x)=x^{2}/2\). For this case it is known that the steady state solution to the WFP problem is [8, 25]
\[\mu(x,k)=\frac{\exp\left(-(\frac{|x|^{2}}{5}+\frac{x\cdot k}{5}+\frac{3|k|^{2} }{10})\right)}{2\pi\sqrt{5}},\,(x,k)\in\mathbb{R}^{2d}. \tag{11}\]
The respective density matrix (in convenient position coordinates) for this steady state solution has the following real and imaginary components (the full calculations appear in the Appendix),
\[u_{\mu}(x,\eta)=\rho_{\mu}(x+\eta/2,x-\eta/2)=R_{\mu}+iI_{\mu}, \tag{12}\] \[R_{\mu}(x,\eta,t)=\frac{e^{-(x/\sqrt{6})^{2}-(\sqrt{5}\eta/2)^{2} }}{\sqrt{6\pi}}\cos(\frac{x}{\sqrt{6}}\cdot\eta)\] (13) \[I_{\mu}(x,\eta,t)=\frac{e^{-(x/\sqrt{6})^{2}-(\sqrt{5}\eta/2)^{2 }}}{\sqrt{6\pi}}\sin(\frac{x}{\sqrt{6}}\cdot\eta). \tag{14}\]
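These closed-form components are straightforward to evaluate on a grid, for example to serve as the reference solution in the error tables below (the grid parameters here are illustrative):

```python
import numpy as np

# Analytic steady state (13)-(14) in convenient position coordinates,
# evaluated on a tensor grid (domain and resolution are placeholders).
x = np.linspace(-6, 6, 129)
eta = np.linspace(-6, 6, 129)
X, ETA = np.meshgrid(x, eta, indexing="ij")

amp = np.exp(-(X / np.sqrt(6)) ** 2 - (np.sqrt(5) * ETA / 2) ** 2) / np.sqrt(6 * np.pi)
R_mu = amp * np.cos(X * ETA / np.sqrt(6))
I_mu = amp * np.sin(X * ETA / np.sqrt(6))
```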
The initial condition for our benchmark problem will be taken as the ground-state of the harmonic oscillator, whose Wigner function is
\[W_{0}(x,p)=\frac{2}{h}\exp(-a^{2}p^{2}/\hbar^{2}-x^{2}/a^{2}), \tag{15}\]
and whose density matrix representation in convenient position coordinates is
\[\widehat{w}_{0}(x,\eta,t)=\frac{2\sqrt{\pi}}{ha}\exp(-\frac{x^{2}+(\eta/2)^{2 }}{a^{2}}), \tag{16}\]
which only has a real component (its imaginary part is zero). Under the environment noise, it will be deformed into the aforementioned steady state solution.
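For completeness, the initial condition (16) can be evaluated in the same way (the values \(a=h=1\) and the grid below are placeholders for the chosen units):

```python
import numpy as np

# Initial condition (16) in convenient position coordinates (real part only).
a, h = 1.0, 1.0
x = np.linspace(-6, 6, 129)
eta = np.linspace(-6, 6, 129)
X, ETA = np.meshgrid(x, eta, indexing="ij")
w0 = (2.0 * np.sqrt(np.pi) / (h * a)) * np.exp(-(X**2 + (ETA / 2.0) ** 2) / a**2)
```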
Plots of the real and imaginary components of our density matrix at the initial time are presented below, corresponding to the groundstate of the harmonic oscillator in convenient position coordinates.
The results of our numerical simulations for the time evolution of our transformed master equation by the NIPG-DG method are shown below. The time evolution is handled via a theta method with \(\theta=1/2\), as previously indicated. In the numerical solution of this convection-diffusion system, Dirichlet boundary conditions were imposed using the known analytical solution for the case of a harmonic potential. If the domain size is increased, these boundary conditions converge to homogeneous boundary conditions due to the Gaussian decay.
Figure 1: Plot of the real component of the density matrix (in convenient position coordinates) of the harmonic oscillator groundstate (the imaginary component is zero and therefore omitted).
The figures below present the projection of our initial condition (the transformed density matrix of the harmonic ground state) into our DG Finite Element (FE) space \(V_{h}^{1}\) of piece-wise continuous linear polynomials, for the real and imaginary components of the transformed density matrix in a position basis.
The numerical steady-state solution for the real and imaginary components of the density matrix is presented as well, which is achieved after a long time (say, a physical time of 50 in the units of the computational simulation) under the influence of a harmonic potential.
We first present the results for the numerical solution at \(t=2\) and then at \(t=50\), where in the latter case it is close to the steady state, given our convergence analysis studies to be presented below.
Figure 3: Plot of the imaginary component of the density matrix (in convenient position coordinates) of the steady state for our transformed master equation under a harmonic potential. Notice the low order of magnitude \(O(10^{-15})\) versus the real component \(O(1)\) one.
Figure 2: Plot of the real component of the density matrix (in convenient position coordinates) of the steady state for our transformed master equation under a harmonic potential.
### Convergence and Error Analysis for NIPG solutions at \(t=2.0\)
The following table contains the \(L_{2}\) error between the analytical steady state solution \(u_{\infty}=R_{\infty}+iI_{\infty}\) and the numerical solution \(u_{h}|_{t_{f}}=R_{h}|_{t_{f}}+iI_{h}|_{t_{f}}\) achieved after a time of \(t_{f}=2.0\) (in normalized units), for both the real and imaginary components of the density matrix in convenient position coordinates. This error is indicated for the different number of intervals in which each dimension is subdivided, with the same number of subdivisions \(N_{x}=N_{\eta}\) in \(x\) as in \(\eta\).
Figure 4: Initial condition for the real (left) and imaginary (right) parts of the density matrix in convenient coordinates, corresponding to a harmonic ground state (projected into the \(V_{h}^{1}\) DG FE space). Remark: Color scale differs between right picture and left one.
Figure 5: Numerical solution of the real (left) and imaginary (right) parts of the density matrix in convenient coordinates under a harmonic potential after an evolution time of \(t=2.0\), solved by NIPG-DG. Remark: Color scale differs between right picture and left one.
The error roughly halves when the meshing in both \(x\) and \(\eta\) is refined by a factor of 2, which indicates an order of convergence of the method of the type \(\varepsilon=O(h^{\kappa})\) with \(\kappa=1\) (the numerical value obtained by the standard fit in the error analysis is \(\kappa_{\rm NSfit}=0.9756\)), consistent with the piece-wise linear polynomials (degree \(\kappa=1\)) used in our simulations.
For comparison, a table is presented as well where the \(L_{2}\) error between the analytical form of the initial condition \(u_{0}=R_{0}+iI_{0}\) and its projection \(u_{h}|_{t_{0}}=R_{h}|_{t_{0}}+iI_{h}|_{t_{0}}\) into the DG FE space of piece-wise linear polynomials \(V_{h}^{1}\) (where \(t_{0}=0\)) is indicated for the different number of intervals in which each dimension is subdivided, where again \(N_{x}=N=N_{\eta}\). In this case, one can observe that the projection error behaves as in \(\varepsilon=O(h^{2\kappa})\), \(\kappa=1\), using piece-wise linear polynomials. The actual fitted numerical value in the error analysis is \(\kappa_{\rm ICfit}=1.0154\).
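The fitted convergence rates quoted above come from a least-squares fit of \(\log(\varepsilon)\) against \(\log(h)\); for instance, using the projection errors of Table 2 (with \(h\sim 1/N\)):

```python
import numpy as np

# Least-squares fit of log(error) vs log(h) for the projection errors of Table 2.
N = np.array([32, 64, 128])
err = np.array([0.0167, 0.0042, 0.0010])
h = 1.0 / N
slope, _ = np.polyfit(np.log(h), np.log(err), 1)
print(slope)        # ~2.03: the projection error decays like O(h^2)
print(slope / 2.0)  # ~1.0154, i.e. kappa in the O(h^(2*kappa)) parameterization
```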
A more detailed analysis of the above-mentioned errors is presented below, but now for the case when the number of intervals in \(x\) and \(\eta\) might differ, for both the projection \(L_{2}\) error of the initial condition and the convergence error for the numerical solution versus the steady state after the \(t_{f}=2.0\) time.
Figure 6: Numerical solution of the real (left) and imaginary (right) parts of the density matrix in convenient coordinates under a harmonic potential after an evolution time of \(t=10.0\), solved by NIPG-DG. Remark: Color scale differs between right picture and left one.
Figure 7: Numerical steady state solution of the real (left) and imaginary (right) parts of the density matrix in convenient coordinates under a harmonic potential after an evolution time of \(t=50.0\), solved by NIPG-DG. Remark: Color scale differs between right picture and left one.
The behavior of the \(L_{2}\) error for the numerical solution after an evolution time of \(t_{f}=2.0\) can be explained by noting that our phenomenon is of a convective-diffusive type, where the convective part may dominate over the diffusive one. Since the transport is mostly vertical, the refinement in \(\eta\) is the important one for the error behavior due to mesh discretization, as opposed to the \(x\)-refinement, which does not have as much of an effect on the aforementioned error.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \(\|R_{0}-R_{h}|_{t_{0}}\|_{2}\) & \(N_{\eta}=32\) & \(N_{\eta}=64\) & \(N_{\eta}=128\) \\ \hline \(N_{x}=32\) & 0.0167 & 0.0062 & 0.0039 \\ \hline \(N_{x}=64\) & 0.0150 & 0.0042 & 0.0016 \\ \hline \(N_{x}=128\) & 0.0147 & 0.0038 & 0.0010 \\ \hline \end{tabular}
\end{table}
Table 3: Comparison of the \(L_{2}\) projection error in the real component (for the imaginary part it is zero) of the density matrix in the initial condition between its analytical form and its numerical representation in a \(V_{h}^{1}\) DG FE space for NIPG-DG.
\begin{table}
\begin{tabular}{|c|c|c|} \hline \(N_{x}=N=N_{\eta}\) & \(||R_{0}-R_{h}|_{t_{0}}||_{2}=||u_{0}-u_{h}|_{t_{0}}||_{2}\) & \(||I_{0}-I_{h}|_{t_{0}}||_{2}\) \\ \hline
32 & 0.0167 & 0.0 \\ \hline
64 & 0.0042 & 0.0 \\ \hline
128 & 0.0010 & 0.0 \\ \hline \end{tabular}
\end{table}
Table 2: Comparison of the \(L_{2}\) projection error in the real and imaginary components of the density matrix in the initial condition between its analytical form and its numerical representation in a \(V_{h}^{1}\) DG FE space for NIPG-DG.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \(\|I_{\infty}-I_{h}|_{t_{f}}\|_{2}\) & \(N_{\eta}=32\) & \(N_{\eta}=64\) & \(N_{\eta}=128\) \\ \hline \(N_{x}=32\) & 1.1630 & 0.7872 & 0.3210 \\ \hline \(N_{x}=64\) & 1.1662 & 0.7913 & 0.3227 \\ \hline \(N_{x}=128\) & 1.1678 & 0.7932 & 0.3235 \\ \hline \end{tabular}
\end{table}
Table 5: Comparison of the \(L_{2}\) error in the imaginary component of the density matrix between our numerical solution and the known analytical steady state for a harmonic oscillator benchmark problem, under a NIPG solver using a \(V_{h}^{1}\) DG FE space, after an evolution time of \(t_{f}=2.0\).
### Convergence and Error Analysis for NIPG solutions at \(t=50.0\)
The table presented below contains the \(L_{2}\) error between the analytical steady state solution \(u_{\infty}=R_{\infty}+iI_{\infty}\) and the numerical solution \(u_{h}|_{t_{f}}=R_{h}|_{t_{f}}+iI_{h}|_{t_{f}}\) achieved after a time of \(t_{f}=50.0\) (where the numerical solution is close to the analytical steady state after that evolution time), for both the real and imaginary components of the density matrix in convenient position coordinates. This error is indicated for the different number of intervals in which each dimension is subdivided, with the same number of subdivisions \(N_{x}=N_{\eta}\) in \(x\) as in \(\eta\).
The numerical value for the error convergence rate obtained by the standard fit for \(||u_{\infty}-u_{h}|_{t_{f}}||_{2}\) is of the type \(\varepsilon=O(h^{\kappa}),\,\kappa_{\rm NSfit}=0.9517\). Recall that piece-wise linear polynomials (degree \(\kappa=1\)) have been used for our simulations.
For comparison, a table is presented as well where the \(L_{2}\) error between the analytical form of the initial condition \(u_{0}=R_{0}+iI_{0}\) and its projection \(u_{h}|_{t_{0}}=R_{h}|_{t_{0}}+iI_{h}|_{t_{0}}\) into the DG FE space of piece-wise linear polynomials \(V_{h}^{1}\) (where \(t_{0}=0\)) is indicated for the different number of intervals in which each dimension is subdivided, where again \(N_{x}=N=N_{\eta}\). In this case, one can observe that the projection error behaves as in \(\varepsilon=O(h^{2\kappa})\), \(\kappa=1\), using piece-wise linear polynomials (the actual fitted numerical value in the error analysis is \(\kappa_{\rm ICfit}=0.9952\)).
A more detailed analysis of the above-mentioned errors is presented below, but now for the case when the number of intervals in \(x\) and \(\eta\) might differ, for both the projection \(L_{2}\) error of the initial condition and the convergence error for the numerical solution of the steady state after our evolution time of \(t_{f}=50\).
The behavior of the \(L_{2}\) error for the numerical steady state solution can again be explained by noting that our phenomenon is of a convective-diffusive type, but where the convective part dominates over the diffusive one. Since the transport is mostly vertical, the refinement in \(\eta\) is the important one for the error behavior due to mesh discretization, as opposed to the
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \(N_{x}=N=N_{\eta}\) & \(||R_{\infty}-R_{h}|_{t_{f}}||_{2}\) & \(||I_{\infty}-I_{h}|_{t_{f}}||_{2}\) & \(||u_{\infty}-u_{h}|_{t_{f}}||_{2}\) \\ \hline
2 & 7.2349 & 0.0283 & 7.2350 \\ \hline
4 & 5.3768 & 0.5212 & 5.4020 \\ \hline
8 & 3.9771 & 0.8732 & 4.0718 \\ \hline
16 & 2.2221 & 0.5987 & 2.3014 \\ \hline
32 & 0.9311 & 0.2107 & 0.9546 \\ \hline
64 & 0.4604 & 0.0261 & 0.4612 \\ \hline
128 & 0.3352 & 0.0246 & 0.3361 \\ \hline \end{tabular}
\end{table}
Table 6: Comparison of the \(L_{2}\) error in the real and imaginary components of the density matrix in the steady state between our numerical solution and the known analytical one for a harmonic oscillator benchmark problem, under a NIPG solver using a \(V_{h}^{1}\) DG FE space, after an evolution time of \(t_{f}=50.0\).
\(x\)-refinement, which again does not seem to have as much of an effect on the aforementioned error.
## 6 Conclusions
Work has been presented regarding the setup of DG numerical schemes applied to a transformed Master Equation, obtained as the Fourier transform of the WFP model for open quantum systems. The Fourier transformation was applied over the WFP equation in order to reduce the computational cost associated with
\begin{table}
\begin{tabular}{|c|c|c|} \hline \(N_{x}=N=N_{\eta}\) & \(||R_{0}-R_{h}|_{t_{0}}||_{2}=||u_{0}-u_{h}|_{t_{0}}||_{2}\) & \(||I_{0}-I_{h}|_{t_{0}}||_{2}\) \\ \hline
2 & 1.4274 & 0.0 \\ \hline
4 & 0.8582 & 0.0 \\ \hline
8 & 0.2599 & 0.0 \\ \hline
16 & 0.0664 & 0.0 \\ \hline
32 & 0.0167 & 0.0 \\ \hline
64 & 0.0042 & 0.0 \\ \hline
128 & 0.0010 & 0.0 \\ \hline \end{tabular}
\end{table}
Table 7: Comparison of the \(L_{2}\) projection error in the real and imaginary components of the density matrix in the initial condition between its analytical form and its numerical representation in a \(V_{h}^{1}\) DG FE space for NIPG-DG.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \(||R_{\infty}-R_{h}|_{t_{\infty}}||_{2}\) & \(N_{\eta}=8\) & \(N_{\eta}=16\) & \(N_{\eta}=32\) & \(N_{\eta}=64\) & \(N_{\eta}=128\) \\ \hline \(N_{x}=8\) & 3.9771 & 2.2802 & 0.9567 & 0.5039 & 0.3632 \\ \hline \(N_{x}=16\) & 3.8911 & 2.2221 & 0.8727 & 0.4183 & 0.3004 \\ \hline \(N_{x}=32\) & 3.9393 & 2.2722 & 0.9311 & 0.4764 & 0.3447 \\ \hline \(N_{x}=64\) & 3.9233 & 2.2603 & 0.9151 & 0.4604 & 0.3359 \\ \hline \(N_{x}=128\) & 3.9222 & 2.2591 & 0.9131 & 0.4583 & 0.3352 \\ \hline \end{tabular}
\end{table}
Table 9: Comparison of the \(L_{2}\) error in the real component of the density matrix in the steady state between our numerical solution and the known analytical one for a harmonic oscillator benchmark problem, under a NIPG solver using a \(V_{h}^{1}\) DG FE space, after an evolution time of \(t_{f}=50.0\).
the pseudo-differential integral operator appearing in WFP. The model has been expressed as a system of equations by decomposing it into its real and imaginary parts (when expressing the density matrix in terms of the position basis). Given the \(\eta\)-transport and \(x\)-related gradient in the diffusion for this problem, the system has been set up so that an NIPG-DG method can be implemented for the desired numerical solution. Numerical simulations have been presented for the computational study of a benchmark problem such as the case of a harmonic potential, where a comparison between the numerical and analytical steady-state solutions can be performed for a long enough simulation time. Further general potentials could be studied in future work for the analysis of perturbations in an uncertainty quantification setting (or to also consider self-consistent interaction effects between the agents under consideration), as well as the case of \(d>1\) for studying the interaction of a system with a noisy environment, as in the Noisy Intermediate Scale Quantum (NISQ) devices regime in higher dimensions \(d\).
## Acknowledgments
Start-up funds support from UTSA is gratefully acknowledged by the author.
|
2307.11305 | Quantum Software Analytics: Opportunities and Challenges | Quantum computing systems depend on the principles of quantum mechanics to
perform multiple challenging tasks more efficiently than their classical
counterparts. In classical software engineering, the software life cycle is
used to document and structure the processes of design, implementation, and
maintenance of software applications. It helps stakeholders understand how to
build an application. In this paper, we summarize a set of software analytics
topics and techniques in the development life cycle that can be leveraged and
integrated into quantum software application development. The results of this
work can assist researchers and practitioners in better understanding the
quantum-specific emerging development activities, challenges, and opportunities
in the next generation of quantum software. | Thong Hoang, Hoa Khanh Dam, Tingting Bi, Qinghua Lu, Zhenchang Xing, Liming Zhu, Lam Duc Nguyen, Shiping Chen | 2023-07-21T02:24:31Z | http://arxiv.org/abs/2307.11305v1 | # Quantum Software Analytics:
###### Abstract
Quantum computing systems depend on the principles of quantum mechanics to perform multiple challenging tasks more efficiently than their classical counterparts. In classical software engineering, the software life cycle is used to document and structure the processes of design, implementation, and maintenance of software applications. It helps stakeholders understand how to build an application. In this paper, we summarize a set of software analytics topics and techniques in the development life cycle that can be leveraged and integrated into quantum software application development. The results of this work can assist researchers and practitioners in better understanding the quantum-specific emerging development activities, challenges, and opportunities in the next generation of quantum software.
Quantum computing, quantum software engineering, quantum machine learning, software analytics
## I Introduction
_Quantum computing_ (QC) has emerged as the future for solving many problems more efficiently. For example, QC is used to simulate complex biochemical systems [1], reduce the training time of machine learning models [2], and create encryption methods for preventing cybersecurity threats [3]. Unlike classical computing, where the information is encoded as bits and each bit is assigned either 0 or 1, QC encodes the information as a list of quantum bits (qubits). Each qubit is a linear combination of two basis states, \(|0\rangle\) and \(|1\rangle\). In recent years, the development of quantum computing systems has attracted significant interest from both research and industry communities [4, 5, 6]. Cloud-based quantum computing platforms have emerged to enable developers to create quantum software applications. For example, Google has created a Quantum Virtual Machine1 to emulate the results of quantum computers. IBM has offered a cloud quantum platform, namely IBM Quantum,2 to help developers run their programs on quantum systems.
Footnote 1: [https://quantumai.google/quantum-virtual-machine](https://quantumai.google/quantum-virtual-machine)
Footnote 2: [https://quantum-computing.ibm.com/](https://quantum-computing.ibm.com/)
There has been growth in the number of quantum-driven software systems in recent years. Hence, there is an urgent need to develop _quantum software engineering_ (QSE) techniques to support quantum software applications in various domains throughout the quantum software life cycle [7]. This cycle includes five different stages: requirements, design, implementation, testing, and maintenance. At each stage, software engineers need to employ a suitable quantum software technique to ensure the completeness of quantum software applications. For example, developers are required to model an architecture and understand the modularity of quantum software systems at the quantum software design stage. However, there are numerous quantum software techniques at each stage, posing challenges to correctly using these techniques. QSE provides guidelines to help developers select appropriate quantum techniques for fully developing quantum software applications.
Software analytics is recognized as a critical part of developing classical software systems [8, 9, 10]. It aims to monitor, predict, and improve the efficiency and effectiveness of software applications during their implementation, testing, and maintenance stages. For example, Tasktop,3 a software analytics tool, seeks to improve software quality by providing developers with a real-time view of how their software application is operating. As another example, Embold,4 a software analytics platform, helps developers analyze their source code and improve its stability and maintainability.
Footnote 3: [https://www.tasktop.com/](https://www.tasktop.com/)
Footnote 4: [https://embold.io/](https://embold.io/)
Similar to classical computing, quantum computing also needs software analytics to understand software artifacts, such as source code, bug reports, commits, etc., to assist developers in making better decisions in implementing quantum software applications. As quantum computing employs the principles of quantum mechanics to process data, we need to improve traditional software analytics to better understand the quantum computing components, such as qubits, quantum logic gates and quantum algorithms. In this case, we can improve quantum software quality, accelerate productivity, and reduce quantum software maintenance costs.
In this paper, we present the opportunities and challenges of software analytics in building quantum software applications. We believe that software analytics is vital to reducing quantum software development costs and improving quality and speed to market. We identify a number of areas that will be critical to the success of software analytics in developing quantum software applications. Those areas represent a new set of problems for the software analytics community to explore. We also present a brief roadmap of how those new problems could be addressed.
## II Background
Quantum computing employs a quantum bit (qubit) to encode the information. Different from a classical bit, which has values of 0 and 1, each qubit \(|e\rangle\) is represented by a linear combination of two basis states, such as \(|0\rangle\) and \(|1\rangle\), in the quantum state space as follows:
\[|e\rangle=\alpha|0\rangle+\beta|1\rangle \tag{1}\]
where \(\alpha\) and \(\beta\) are complex numbers satisfying \(|\alpha|^{2}+|\beta|^{2}=1\). \(|0\rangle\) and \(|1\rangle\), the computational basis states of the qubit, are described as follows:
\[|0\rangle=\begin{bmatrix}1\\ 0\end{bmatrix}\hskip 28.452756pt|1\rangle=\begin{bmatrix}0\\ 1\end{bmatrix}\]
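As a minimal illustration of Eq. (1) (a standalone NumPy sketch, not tied to any particular quantum SDK), a qubit can be represented as a normalized two-component complex vector:

```python
import numpy as np

# A qubit |e> = alpha|0> + beta|1> as a normalized 2-vector over the
# computational basis, with measurement probabilities |alpha|^2 and |beta|^2.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)   # any pair with |alpha|^2 + |beta|^2 = 1
e = alpha * ket0 + beta * ket1

assert np.isclose(np.vdot(e, e).real, 1.0)       # normalization check
print(abs(alpha) ** 2, abs(beta) ** 2)           # probabilities of measuring |0>, |1>
```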
Figure 1 shows an overview of the architecture of a quantum system [11]. The architecture includes two main components: _quantum computer layers_ and _classical computer layers_. The details of quantum computer layers are as follows:
* _Physical building blocks_ have two vital parts: superconducting loops and couplers. While superconducting loops recognize the physical qubits, couplers connect different qubits in quantum systems. These blocks also contain other parts for qubit addressing and control operations.
* _Quantum logic gates_, the building blocks of quantum circuits, are used to process data in quantum systems.
* The _quantum-classical computer interface_ provides the interface between classical computers and a quantum processing unit (QPU).
The classical computer layers are described in the following:
* The _quantum programming environment_ includes the quantum assembly language for instructing a QPU, the programming APIs used to write a high-level quantum programming language, and the simulator support employed to run and test quantum programs.
* A _network_ system connects the quantum programming environment and the quantum software applications.
* _Quantum software applications_, written by developers, follow business requirements to serve customers.
## III Research Problems
To build quantum software applications, we first need to estimate the cost of developing these applications. To simplify the quantum cost estimation problem, we neglect the cost of understanding customers' needs and designing the quantum system architecture. If stakeholders agree with the quantum applications' cost, developers will start writing a quantum program for the applications. During this process, developers need to deal with quantum software bugs that produce unexpected results. Specifically, the main open research problems in quantum software development are described as follows:
**1. Quantum software cost estimation:** Software cost estimation has been extensively investigated in the classical computing community [12, 13, 14, 15, 16, 17]. In quantum applications, stakeholders or software teams are also required to accurately predict the cost of these applications to ensure the success of their project.
Thus, there is a need for estimating the effort of developing a quantum software application. This is especially important to decide the cost and benefit of developing a software application using quantum computing. Effort estimation for quantum applications represents a novel problem in software analytics due to their distinct characteristics. A quantum system is a hybrid system, including quantum computer layers and classical computer layers (see Figure 1). The _physical building blocks_ and the _quantum programming environment_ are the vital components of the quantum and classical layers, respectively. There are two main challenges in evaluating the effort of quantum software applications.
* New models and techniques are needed for estimating the effort of constructing the _quantum physical building blocks_ in the quantum layers. As these building blocks include physical mechanisms such as _superconducting loops_ and _couplers_ (see Figure 1), developers require background knowledge in physics to correctly construct these building blocks. Research is needed to define a framework to estimate the knowledge of developers in comprehending the physical requirements of developing quantum software applications.
* We also need to evaluate how familiar developers are with the quantum programming environment (see Figure 1). During the development of quantum applications, developers are required to use suitable tools for simulating quantum computation (simulator support), optimizing quantum circuits (quantum circuit composer), describing quantum computation in a circuit model (quantum assembly language), and writing a quantum programming language (programming APIs). New research should investigate how developers comprehend these quantum tools to accurately estimate the effort for developing quantum applications.
Fig. 1: An overview of the architecture of a quantum computing system [11].
**2. Quantum code migration:** Code migration is essential in the modern world of technology, where stakeholders often develop their products on multiple operating platforms using different programming languages [18, 19, 20]. As quantum computing potentially outperforms classical computing in various domains, such as biochemistry, machine learning, or cybersecurity, many quantum programming languages, such as Qiskit [21], ProjectQ [22] and pyQuil [23], have been developed. Therefore, there is a need to translate source code from classical programming languages to quantum programming languages to reduce the cost of implementing quantum software systems.
Code migration in quantum computing systems is a challenging task. The main reason is that it is difficult to understand quantum programming behavior. Unlike classical computing, where we can employ program analysis techniques to analyze the behavior of classical programs, quantum computing uses qubits to encode information, and it is unclear how qubits are connected during the execution of a quantum program, which makes the behavior of quantum programs hard to comprehend. JavadiAbhari et al. [24] present an entanglement analysis that helps developers identify possible pairs of qubits to understand the behavior of quantum programs. However, it is unclear whether the analysis can be used on complex quantum programs.
**3. Quantum code generation:** Code generation is a vital problem in classical computing. Its goal is to generate explicit code from multimodel data sources, such as modeling languages [25], formal specification languages [26], and natural language descriptions [27]. Code generation is also a critical research problem in quantum computing to facilitate the process of developing quantum software applications. The main challenges of quantum code generation come from the data sources of quantum systems, such as quantum modeling languages and quantum specification languages. Unlike classical computing, where its modeling and specification languages have been deeply investigated, the research of quantum modeling languages and quantum specification languages has just started.
Perez-Delgado and Perez-Gonzalez [28] extended the Unified Modeling Language (UML) to model quantum software systems. Their approach covers two types of UML diagrams: the quantum class diagram and the quantum sequence diagram. While the quantum class diagram indicates whether a software module makes use of quantum information, the quantum sequence diagram shows the connection between these software modules in a quantum program. However, Perez-Delgado and Perez-Gonzalez have ignored diagrams for the vital components of quantum systems, such as superconducting loop qubits, quantum logic gates, or quantum circuit composers (see Figure 1). These components need to be further studied to construct a modeling language for the quantum system.
Carriere [29] defined a formal specification language for quantum algorithms, but the language only represents some elementary quantum logic gates, such as the Identity gate, C-Not gate, or Hadamard gate. In addition, the language has ignored the _physical building blocks_ (see Figure 1) of the quantum system. Even though the language can be used to specify a simple quantum system, its usefulness for complex quantum systems remains unknown.
Researchers need to investigate quantum modeling and specification languages to accurately solve the quantum code generation problem. Moreover, we need to develop a quantum verification program to ensure the generated code is consistent with the quantum system.
**4. Quantum defect prediction:** Defect prediction is essential to support developers in releasing stable software applications [30, 31, 32]. Defect prediction also plays an important role in reducing costs and improving the quality of quantum software systems. As quantum systems require a hybrid system, including quantum computer layers and classical computing layers (see Figure 1), many types of defects, such as incorrect quantum initial values, incorrect deallocation of qubits, and incorrect compositions of operations, have been found during the process of implementing quantum applications. There are two main challenges in detecting defects in quantum systems:
* Research in quantum software debugging and quantum software testing has received minor attention and still remains a vital problem in quantum systems [7]. As the systems often have complex components, such as _physical building blocks_ and _quantum logic gates_ (see Figure 1), it can be challenging to find defects in their source code. Moreover, there is no prior work focusing on defining concrete defect patterns in quantum programming languages.
* Developers require some knowledge of quantum computing systems to understand defects in their source code. However, acquiring this knowledge takes a lot of time, effort, and experience during the process of developing quantum software applications. As quantum software applications remain largely undeveloped, defects described by developers may not be correct in practice.
## IV Initial Solutions
In this section, we present the solutions and an evaluation of the main research problems as follows:
**1. Quantum software cost estimation:** To evaluate the cost of quantum software systems, we should produce an effort estimation. Specifically, given a quantum software system \(\mathcal{Q}\), the effort to implement the system is described as:
\[\mathcal{E}_{\mathcal{Q}}=\theta(f_{1},\dots,f_{n}) \tag{2}\]
where \(\theta\) is the effort prediction function. \(f_{1},\dots,f_{n}\) is a list of features used to estimate the effort of implementing the quantum system. Specifically, the features are grouped into four different categories, such as product attributes, quantum
system attributes, personnel attributes, and project attributes. The product attributes describe an overview of our product. The quantum system attributes, such as interoperability, security, or usability, focus on implementing the quantum system. The personnel attributes measure how familiar developers are with quantum systems. The project attributes present tools used in developing quantum systems.
The cost estimation of quantum systems is then calculated by employing various methods, such as COCOMO [33], Putnam [34], or function point-based analysis [35]. For example, we can apply the Putnam method to define the cost estimation of a quantum system as follows:
\[\mathcal{C}_{\mathcal{Q}}=\mathcal{F}_{e}\times\mathcal{E}_{\mathcal{Q}}^{1/3 }\times t_{d}^{4/3} \tag{3}\]
where \(t_{d}\) and \(\mathcal{F}_{e}\) represent the delivery time of the quantum system and the competencies of quantum development, respectively. Both \(t_{d}\) and \(\mathcal{F}_{e}\) are estimated from past quantum system projects.
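As a simple illustration, Eq. (3) can be implemented directly; the input values below are placeholders rather than calibrated project data:

```python
# Putnam-style estimate from Eq. (3): C_Q = F_e * E_Q^(1/3) * t_d^(4/3).
def quantum_cost_estimate(effort, delivery_time, f_e):
    return f_e * effort ** (1 / 3) * delivery_time ** (4 / 3)

# Hypothetical inputs: effort estimate, delivery time, and development competency factor.
print(quantum_cost_estimate(effort=120.0, delivery_time=9.0, f_e=0.8))
```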
**2. Quantum code migration:** As quantum computing potentially outperforms classical computing in terms of efficiency, many quantum programming languages have been developed for implementing quantum systems. Moreover, classical software systems have grown significantly nowadays, leading to a need to translate source code from classical programming languages to quantum programming languages.
Researchers employ statistical machine translation techniques to solve the code migration problem in classical systems [18, 19, 20]. We believe that these techniques are applicable in quantum code migration. Specifically, a classical code (a source code) is treated as a sequence of code tokens and is migrated into a fragment of a quantum code (a target code). In other words, we aim to map the classical code to the quantum code by analyzing the bilingual dual corpus, and then we extract the alignment between the tokens of the classical and quantum codes. We also need to manually define the translation rules for the mappings for the APIs used in the classical and quantum codes to improve the performance of our code migration models. For example, sklearn.svm.SVR5 and qiskit_machine_learning.algorithms.QSVR6 are two APIs for calling a support vector regression model in Python (a classical programming language) and Qiskit (a quantum programming language), respectively. To estimate the performance of quantum code migration, we can employ the BLEU score as our evaluation metric [36].
Footnote 5: [https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVR.html](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVR.html)
Footnote 6: [https://github.com/Qiskit/qiskit-machine-learning](https://github.com/Qiskit/qiskit-machine-learning)
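As a hedged sketch of the manually defined translation rules mentioned above (the mapping table and helper below are illustrative, not a complete migration system; real token alignments would be learned from the parallel corpus):

```python
# Toy classical -> quantum API mapping rules; only the pair cited in the text
# is included, further pairs would be added by hand or mined from data.
API_RULES = {
    "sklearn.svm.SVR": "qiskit_machine_learning.algorithms.QSVR",
}

def migrate_api_calls(classical_source: str) -> str:
    """Rewrite known classical API names into their quantum counterparts."""
    migrated = classical_source
    for classical_api, quantum_api in API_RULES.items():
        migrated = migrated.replace(classical_api, quantum_api)
    return migrated

print(migrate_api_calls("model = sklearn.svm.SVR(kernel='rbf')"))
```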
**3. Quantum code generation:** Similar to code generation in classical computing, we can generate quantum code from various data sources, such as quantum modeling languages, quantum specification languages, or natural language descriptions. However, quantum modeling languages and quantum specification languages require further study before they can be employed in developing quantum software systems in practice.
In classical computing, researchers often employ deep learning (DL) frameworks to generate code from natural language descriptions [27]. These frameworks may be appropriate for generating quantum code to reduce the cost of developing quantum software applications. However, there are two main challenges to employing the DL techniques. First, this problem requires a large number of pairs of text descriptions and target quantum codes. For example, GitHub Copilot,7 an AI tool generating programming language code from comments, trains a deep learning model from 54 million public Python GitHub repositories. As quantum code generation is a new research topic, it will take time for developers to build up such pairs of text descriptions and quantum codes. Second, unlike classical computing, where code structures are represented in various forms, such as abstract syntax trees, control flow graphs, or program dependency graphs, quantum code structures are still unexplored. These two challenges may lead to poor performance in implementing quantum code generation models. More research work needs to be done in the future to address the problem of quantum code generation.
Footnote 7: [https://en.wikipedia.org/wiki/GitHub_Copilot](https://en.wikipedia.org/wiki/GitHub_Copilot)
**4. Quantum defect prediction:** Detecting defects in quantum systems is a critical research problem in developing any quantum software application. Like in classical computing, we can construct quantum defect prediction models based on high-quality quantum code metrics. The quantum code metrics should be related to the quantum system, such as:
* How many quantum logic gates are in the quantum system? What are they?
* How many quantum algorithms are employed in the quantum system? What are they?
* What is the size of the quantum system?
Deep learning methods [37, 38, 39] can be employed to automatically extract high-quality code metrics for detecting defects in quantum systems. Another approach is to identify defect patterns that may happen in quantum programs. Zhao et al. [40] show that there are some defect patterns in the quantum programming language Qiskit. We believe that pattern mining techniques [41], such as clustering or association rule learning, are appropriate to automatically identify such patterns to improve developers' productivity and reduce quantum software maintenance costs. Researchers can leverage a number of widely-used evaluation metrics, such as precision, recall, or F-measure, to capture the performance of their quantum defect prediction models.
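As an illustration of the evaluation step mentioned above, precision, recall, and F-measure can be computed from predicted versus actual defect labels (the labels below are toy data, not results from a real defect prediction model):

```python
# Precision, recall and F-measure for binary defect labels (1 = defective module).
def precision_recall_f1(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(precision_recall_f1(y_true, y_pred))  # (0.75, 0.75, 0.75)
```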
## V Conclusion
Quantum computing is powerful in terms of qubit counts, algorithms, and decoherence times. Stakeholders' interest in applying quantum computing has surged in recent years. Leveraging technology to solve scientific problems requires a deeper understanding of the essential characteristics of quantum-specific applications, particularly those relevant to software development. As such, more and more software applications can be facilitated by quantum computing, and
the need for high-quality quantum applications will increase dramatically in the future. We believe that software engineering methodologies need to be leveraged in quantum systems to help researchers and practitioners more easily construct quantum software applications.
|
2306.06550 | Local Deformation for Interactive Shape Editing | We introduce a novel regularization for localizing an elastic-energy-driven
deformation to only those regions being manipulated by the user. Our local
deformation features a natural region of influence, which is automatically
adaptive to the geometry of the shape, the size of the deformation and the
elastic energy in use. We further propose a three-block ADMM-based optimization
to efficiently minimize the energy and achieve interactive frame rates. Our
approach avoids the artifacts of other alternative methods, is simple and easy
to implement, does not require tedious control primitive setup and generalizes
across different dimensions and elastic energies. We demonstrates the
effectiveness and efficiency of our localized deformation tool through a
variety of local editing scenarios, including 1D, 2D, 3D elasticity and cloth
deformation. | Honglin Chen, Changxi Zheng, Kevin Wampler | 2023-06-11T00:35:14Z | http://arxiv.org/abs/2306.06550v1 | # Local Deformation for Interactive Shape Editing
###### Abstract.
We introduce a novel regularization for localizing an elastic-energy-driven deformation to only those regions being manipulated by the user. Our local deformation features a natural region of influence, which is automatically adaptive to the geometry of the shape, the size of the deformation and the elastic energy in use. We further propose a three-block ADMM-based optimization to efficiently minimize the energy and achieve interactive frame rates. Our approach avoids the artifacts of other alternative methods, is simple and easy to implement, does not require tedious control primitive setup and generalizes across different dimensions and elastic energies. We demonstrate the effectiveness and efficiency of our localized deformation tool through a variety of local editing scenarios, including 1D, 2D, 3D elasticity and cloth deformation.
Local control, shape deformation, elasticity, sparsity, ADMM. +
We seek to combine the advantages of elastic energy minimization with the locality of sculpting-style tools. In doing so, we also want the locality of a deformation to be _automatic_, _natural_ and _efficient_. The locality of an edit should be automatic in that the region of influence (ROI) of the deformation should scale automatically depending on the size of the desired deformation. In addition, although we do not require a rig, in cases where a rig or other constraints have been placed on the shape, the ROI needs to automatically adapt to them. We further want the notion of locality to be _natural_ in that it adjusts to both the geometry of the shape and the elastic energy driving its deformation, where changes to either will lead to a fitting change in the ROI. Finally, we require that the method be fast enough to run in real time.
We achieve this with the following contributions:
* We introduce a novel deformation regularizer, called a smoothly clamped \(\ell_{1}\) (SC-L1) loss which augments an elastic energy with a notion of locality. SC-L1 regularization is simple to implement, and avoids artifacts of previous methods.
* We enable real-time localized deformation with an ADMM-based optimization algorithm for SC-L1-regularized deformation which is significantly faster than prior work using a group lasso regularizer.
We illustrate the utility of SC-L1 regularization on a wide range of examples, including multiple different elastic energies, 1D curves, 2D and 3D meshes, and cloth. This provides a localized deformation tool which avoids artifacts of other regularizers, is easy to implement, generalizes across different dimensions and material models, and performs fast enough to run in real time.
## 2. Related Work
Shape deformation algorithms in computer graphics have typically fallen into one of two categories, which we will call the _direct_ approach and the _optimization_ approach. In the direct approach, the shape's deformation is an explicit function of the user's input, often either by modifying some high-level parameterization or by applying a pre-specified deformation field. In the optimization approach, the shape deformation is an indirect product of the user's input combined with an elastic energy, and the deformation itself is only known after an optimization process has converged to minimize this energy. These two approaches differ in how (and often whether) they enforce the locality of a deformation.
### Locality in High-Level Parameterizations
A time-honored and common approach for localized deformation is the direct manipulation of high-level parameters. From the onset, a notion of locality is baked into these parameters, which can directly encode the shape itself, often using splines (Hoschek et al., 1993) or a "rig" to control a shape's deformation, as with cage-based generalized barycentric coordinates (Joshi et al., 2007; Lipman et al., 2008), linear-blend skinning (Jacobson et al., 2011; Magnenat-Thalmann et al., 1989; Wang et al., 2015), lattice deformers (Coquillart, 1990; Sederberg and Parry, 1986), wire curves (Singh and Fiume, 1998), or learned skinning weights (Genova et al., 2020), to list a few.
Approaches of this nature, although popular in computer graphics, have a few disadvantages. Firstly, since the locality is baked into the parameterization, it cannot easily adapt based on changes in the deformation. Secondly, without an optimization step, even in cases where a capable elastic energy is within reach there is no way to incorporate it into the deformation.
### Locality in Deformation Fields
Instead of providing localized deformation via pre-chosen parameters, it is also possible to define a localized deformation field, then apply it to a shape. These methods often follow a "sculpting" metaphor, and include simple move, scale, pinch, and twist edits as well as more sophisticated operations (Cani and Angelidis, 2006). Recently, De Goes and James (2017) introduced _regularized Kelvinlets_, which provides real-time localized volumetric control based on the regularized closed-form solutions of linear elasticity. These closed-form solutions were later extended to handle dynamic secondary motions (De Goes and James, 2018), sharp deformation (de Goes and James, 2019) and anisotropic elasticity (Chen and Desbrun, 2022).
These approaches allow for local deformation with real-time feedback. However, as they are designed for digital sculpting, these methods usually require the user to explicitly pick the falloff of the brushes. Furthermore, these methods are usually based on Euclidean distance, unaware of the shape's geometry. In contrast, our method is shape-aware and enables automatic dynamic region of influence with interactive feedback.
### Localized Optimization via an ROI
The most natural formulation of optimization-based deformation editing is by solving globally for the entire shape's deformation at once (Shengel et al., 2017; Smith et al., 2019; Zhu et al., 2018). Nevertheless, there are methods attempting to enforce locality in the shape optimization process. Previous methods of this sort have often computed the ROI of a manipulation as a preprocessing step, then restricting the optimization to only move parts of the shape within this ROI. The ROI can be taken as an input (Alexa, 2006) or based on a small amount of user markup (Kho and Garland, 2005; Luo et al., 2007; Zimmermann et al., 2007). Other methods combine deformation energies with handle-based systems, including skeleton rigs (Hahn et al., 2012; Jacobson et al., 2012; Kavan and Sorkine, 2012) and cages (Ben-Chen et al., 2009).
Unfortunately, in many contexts, the ROI is hard or impossible to know in advance. This is particularly the case when constraints are involved, or where it is not known in advance if a deformation will be small (best fitting a small ROI) or large (best fitting a large ROI). In addition, the correct ROI may also depend on the elastic energy driving the deformation, thus difficult to account for when the ROI calculation is decoupled in a separate step.
Figure 2. Directly applying different norms to an elastic energy leads to global deformation, and thus the user needs to carefully set up additional fixed handles (green points) to keep the shapes from freely moving around in the space.
### Localized Optimization via Sparsity Norms
Sparsity-inducing norms, such as the smooth \(\ell_{0}\) norm, have been widely applied to many domains and problems including medical image reconstruction (Xiang et al., 2022), sparse component analysis (Mohimani et al., 2007) and UV mapping (Poranne et al., 2017). To allow an adaptive ROI while preserving the benefits of optimization-based deformation, a few works have adopted a sparsity-inducing norm, typically a \(\|x\|_{1}\) or \(\|x\|_{2}\) norm, in their energy, which is then minimized by Alternating Direction Method of Multipliers (ADMM) (Boyd et al., 2011; Peng et al., 2018; Zhang et al., 2019) or Augmented Lagrangian Method (ALM) (Bertsekas, 1996). We refer the readers to (Xu et al., 2015) for a survey on sparsity in geometry modelling and processing.
Several methods of this variety rely on sparsity-inducing norm formulated as a sum of \(\|x\|_{2}\) norms, referred to as \(\ell_{2,1}\) or \(\ell_{1}/\ell_{2}\) norms, or a _group lasso_ penalty. They have been applied in a preprocessing phase to compute sparse deformation modes for interactive local control (Brandt and Hildebrandt, 2017; Deng et al., 2013; Neumann et al., 2013). However, the deformation is limited by the linear deformation modes and thus struggles with large deformation.
Another class of methods adds a sparsity-induced regularization to an elastic energy optimization to achieve local deformation. Gao et al. (2012) applied different \(\ell_{p}\) sparsity norms to the as-rigid-as-possible energy (Sorkine and Alexa, 2007) to create various deformation styles. Recently, Chen et al. (2017) used \(\ell_{2,1}\) regularization on vertex positions to locally control the deformation. However, their direct use of the \(\ell_{2,1}\) norm will create artifacts when the control point is not on the boundary of the shape (see Fig. 3(b) and Fig. 13(b)). Moreover, their method requires one ADMM solve in each global iteration, which renders the optimization less efficient and slow in runtime. Also, their framework is limited to 2D deformation with ARAP energy only, while our framework generalizes across dimensions and a variety of energy models.
Our algorithm, inspired by sparsity-seeking regularizers such as that used by (Chen et al., 2017; Fan and Li, 2001), addresses these shortcomings. In particular, we propose a simple and novel sparsity-inducing norm that eliminates artifacts arising from the \(\ell_{2,1}\) norm, and our efficient optimization scheme leads to interactive performance.
## 3. Overview
The main idea of our method is to use a novel regularization term to produce local deformation with a dynamic region of influence (ROI). Our method takes a triangle/tetrahedral mesh (or a 1D polyline) and a set of selected vertices as control handles as input. The output of our method is a deformed shape where the deformation is both _local_ and _natural_ and the ROI is automatically adaptive to the deformation. Here the "locality" implies that a handle only dominates its nearby areas without affecting the regions far away.
### SC-L1 Regularization
We suggest a sparsity-inducing regularization term to produce natural local deformation. This regularizer is applied per-vertex to \(\mathbf{V}_{i}-\widetilde{\mathbf{V}}_{i}\) to bias each vertex deformed position \(\mathbf{V}_{i}\) to exactly match its initial rest position \(\widetilde{\mathbf{V}}_{i}\) except in isolated regions of the shape. The most obvious choice for this regularization would be to enforce sparsity either with an \(\ell_{1}\)-norm, or with a _group lasso_ / \(\ell_{2,1}\)_regularization_ defined as \(\sum_{i}\|\mathbf{V}_{i}\|_{2}\) as in (Chen et al., 2017).
Figure 3. Our method (a) produces _local_ deformation which is smooth, shape-aware, and has an automatically adaptive region of influence without the need for additional handles. Here we highlight all the handles that have been moved in yellow. For the other alternatives, the \(\ell_{1}\)-norm-based method (b) contains artifacts due to the use of the \(\ell_{1}\) norm (see the green highlighted regions); the regularized Kelvinlets technique (c) is based on Euclidean distance and unaware of the geometry (see the blue highlighted regions); and the biharmonic coordinates approach (d) requires a careful placement of additional handles (green) to explicitly control the region of influence. Note that the use of the \(\ell_{1}\)-norm (b) leads to either small global motions and minor artifacts (b2, smaller \(\ell_{1}\)-norm) or no global motions but significant artifacts (b3, larger \(\ell_{1}\)-norm). Thus (Chen et al., 2017) mitigates it with a smoothing regularizer (b1), but this changes the elastic energy and it no longer deforms like a localized ARAP.
However, direct use of an \(\ell_{2,1}\) regularization term leads to artifacts, due to the fact that the \(\ell_{2,1}\)-norm competes with the elastic energy by dragging _all_ the vertices towards their original positions. This results in undesired distortion near the deformation handles (see Fig. 3(b) and Fig. 13(b)), making the deformed region look unnatural. Previous \(\ell_{2,1}\)-based methods (Chen et al., 2017) focus specifically on ARAP-like deformation, and attempt to alleviate this artifact by adding a Laplacian smoothness term and a weighting term based on biharmonic distance. Unfortunately, as seen in Fig. 3 at the ends of the octocat's tentacles, artifacts can arise in regions quite close to the deformation handles, and there is not necessarily any setting of these parameters which alleviates these artifacts without oversmoothing the entire deformation.
Inspired by "folded concave" losses in statistical regression (Fan and Li, 2001; Zhang, 2010) and the use of the \(\ell_{2,1}\) loss in deformation (Chen et al., 2017), we propose using a smoothly clipped group \(\ell_{1}\)-norm as our locality-inducing regularization. We call it a SC-L1 loss for "smoothly clamped \(\ell_{1}\) loss" (see the inset).
We adopt a simple implementation for our SC-L1 loss:
\[\|\mathbf{x}\|_{\text{SC-L1}}=\begin{cases}\|\mathbf{x}\|_{2}-\frac{1}{2s}\| \mathbf{x}\|_{2}^{2}&\|\mathbf{x}\|_{2}<s\\ \frac{1}{2}s&\|\mathbf{x}\|_{2}\geq s\end{cases}, \tag{1}\]
where \(s\) is the threshold distance beyond which the regularizer is disabled (this is equivalent to a group-variant of the MCP loss in (Zhang, 2010), but we use the term "SC-L1" to emphasize that the minimax concave property is not critical for localized deformation). This function is continuously differentiable and piecewise smooth, and admits a proximal shrinkage operator free of local minima. Near the origin the SC-L1 loss function acts like the group \(\ell_{1}\)-norm, which drives \(\mathbf{x}\) towards 0 in a sparse-deformation-seeking manner. When \(\|\mathbf{x}\|_{2}\geq s\), the SC-L1 loss function value is a constant and has no penalty on \(\|\mathbf{x}\|_{2}\). For a detailed comparison between our SC-L1 loss function and other alternatives, please see Sec. 5 of the supplementary material.
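For reference, Eq. (1) is straightforward to transcribe in code; the following NumPy sketch evaluates it for a single per-vertex displacement (the function name and interface are ours, not the paper's).

```python
import numpy as np

def sc_l1(x, s):
    """Smoothly clamped L1 (SC-L1) loss of Eq. (1) for one per-vertex displacement x."""
    n = np.linalg.norm(x)
    if n < s:
        # Acts like the group L1 norm near the origin, with a quadratic correction term.
        return n - n ** 2 / (2.0 * s)
    # Clamped to a constant beyond the threshold s, so large edits incur no extra penalty.
    return 0.5 * s
```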
### Local Deformation Energy
We denote \(\mathbf{V}\) as a \(|\mathbf{V}|\times d\) matrix of vertex positions at the deformed state, and \(\mathbf{\bar{V}}\) as a \(|\mathbf{V}|\times d\) matrix containing rest state vertex positions.
The total energy for our local deformation is as follows:
\[\begin{split}\underset{\mathbf{V}}{\text{minimize}}\quad&\underbrace{E(\mathbf{V})}_{\text{Elasticity}}+\sum_{i\in V}\underbrace{wa_{i}\|\mathbf{V}_{i}-\widetilde{\mathbf{V}}_{i}\|_{\text{SC-L1}}}_{\text{Locality}}\\ \text{s.t.}\quad&\mathbf{V}_{s}=\mathbf{p}_{s}\quad\text{(position constraints)}\\ &\mathbf{A}_{k}\mathbf{V}_{t}=\mathbf{b}_{k},\;\forall t\in\mathcal{S}_{k}\quad\text{(affine constraints)}\end{split}\tag{2}\]
The first term is an elasticity energy of choice, and can be selected independently of the locality regularization. The second term is the novel "SC-L1 loss" term on the vertex position changes, which measures the locality of the deformation. \(a_{i}\) is the barycentric vertex area of the \(i\)-th vertex, which ensures the consistency of the result across different mesh resolutions for the same constant \(w\). To enable more user control, position constraints and optional affine constraints can be added on selected vertices to achieve different deformation effects. \(s\) denotes the indices of the vertices with the position constraint, and we call these vertices "handles". \(\mathcal{S}_{k}\) is the \(k\)-th set of vertex indices where an affine constraint is added. For simplicity, we omit the position constraints and affine constraints in the discussion below, as they can be easily integrated into the system by removing the corresponding degrees of freedom and using the Lagrange multiplier method (see Eq. (1) in (Wang et al., 2015)).
Inspired by the local-global strategy in (Brown and Narain, 2021), our local deformation energy (Eq. 2) can be rewritten as:
\[\begin{split}\underset{\mathbf{V},\{\mathbf{X}_{j}\}}{\text{minimize}}\quad&\sum_{j}E(\mathbf{X}_{j})+\sum_{i\in V}wa_{i}\|\mathbf{V}_{i}-\widetilde{\mathbf{V}}_{i}\|_{\text{SC-L1}}\\ \text{s.t.}\quad&\mathbf{X}_{j}=\text{sym}(\mathbf{D}_{j}\mathbf{V}),\;\forall j,\end{split}\tag{3}\]
where \(\mathbf{D}_{j}\) is the selection matrix for edges of the \(j\)-th vertex or element. \(\text{sym}(\mathbf{F})\) denotes the symmetric factor \(\mathbf{S}\) computed using the polar decomposition \(\mathbf{F}=\mathbf{RS}\), where \(\mathbf{F}\) is the deformation gradient. Thus \(\mathbf{X}_{j}\) is the symmetric factor of deformation gradient of the \(j\)-th vertex or element. (Note that in general \(\text{sym}(\mathbf{F})\neq\frac{1}{2}(\mathbf{F}+\mathbf{F}^{\top})\).) The goal of using \(\text{sym}()\) here is to ensure the local coordinates \(\mathbf{X}_{j}\) are invariant to rotations as well as translations. For details, please see (Brown and Narain, 2021).
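Since \(\text{sym}(\mathbf{F})\) denotes the symmetric polar factor rather than \(\frac{1}{2}(\mathbf{F}+\mathbf{F}^{\top})\), a small sketch may help; it computes \(\mathbf{S}\) from an SVD of \(\mathbf{F}\) (the helper name is ours, chosen for illustration).

```python
import numpy as np

def sym_factor(F):
    """Symmetric factor S of the polar decomposition F = R S.
    With F = U diag(sigma) V^T, we have R = U V^T and S = V diag(sigma) V^T."""
    U, sigma, Vt = np.linalg.svd(F)
    return Vt.T @ np.diag(sigma) @ Vt
```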
## 4. Optimizing with ADMM
A natural way to minimize this energy is to use the alternating direction method of multipliers (Boyd et al., 2011) for the sparsity term, and to use a local-global update strategy for the elasticity term. However, previous \(\ell_{2,1}\)-based methods (Chen et al., 2017) apply these two strategies separately in their local and global steps, resulting in an inefficient optimization scheme. Since their approach only supports the 2D ARAP energy, we discuss it further in Sec. 4.1.
We propose a new way to efficiently minimize such energies in Eq. 3 by combining the sparsity-targeted ADMM with the elasticity-focused local-global strategy. We minimize our energy (Eq. 3) using a three-block alternating direction method of multipliers scheme (Boyd et al., 2011) following the local-global update strategy. Our first local step, finding the optimal symmetric factor \(\mathbf{X}_{i}\) of the deformation gradient, can be formulated as a minimization problem on its singular values. Our second local step, minimizing the SC-L1 loss term for each \(\mathbf{Z}_{i}\), can be solved using a shrinkage step. Our global step, updating vertex positions \(\mathbf{V}\), is achieved by solving a linear system. We provide an overview of our three-block ADMM scheme in Alg. 1.
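To make the control flow of Alg. 1 concrete, here is a minimal sketch of the three-block loop; the callables passed in stand for the per-energy local step, the SC-L1 proximal step, and the linear global step, and are assumptions for illustration rather than the paper's exact pseudocode.

```python
import numpy as np

def three_block_admm(V_rest, areas, w, s, rho,
                     local_step_X, sc_l1_shrinkage, solve_global, n_iters=200):
    """Sketch of the three-block ADMM overview above.

    local_step_X(V) -> per-element symmetric factors X (local step 1),
    sc_l1_shrinkage(x, w_ai, rho, s) -> proximal step for the SC-L1 term (local step 2),
    solve_global(X, Z, U, rho) -> updated vertex positions V (global step).
    """
    V = V_rest.copy()
    Z = np.zeros_like(V_rest)   # auxiliary per-vertex displacements
    U = np.zeros_like(V_rest)   # scaled dual variables for the Z constraint
    for _ in range(n_iters):
        X = local_step_X(V)                                          # local step 1
        for i in range(V.shape[0]):                                  # local step 2
            Z[i] = sc_l1_shrinkage(V[i] - V_rest[i] + U[i], w * areas[i], rho, s)
        V = solve_global(X, Z, U, rho)                               # global step
        U += V - V_rest - Z                                          # dual update
    return V
```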
### Example I: Local ARAP Energy
We begin by considering how to minimize an as-rigid-as-possible (ARAP) energy (Sorkine and Alexa, 2007) when combined with a SC-L1 loss regularizer. With an ARAP elastic energy, the total energy (Eq. 3) for our local deformation is as follows:
\[\underset{\mathbf{V},\{\mathbf{R}_{i}\}}{\text{minimize}}\quad\sum_{i\in V}\;\underbrace{\frac{1}{2}\|\mathbf{R}_{i}\mathbf{D}_{i}-\widetilde{\mathbf{D}}_{i}\|_{\mathbf{W}_{i}}^{2}}_{\text{ARAP}}+\underbrace{wa_{i}\|\mathbf{V}_{i}-\widetilde{\mathbf{V}}_{i}\|_{\text{SC-L1}}}_{\text{Locality}}, \tag{4}\]
where \(\mathbf{R}_{i}\) is a \(d\times d\) rotation matrix, \(W_{i}\) is a \(|\mathcal{N}(i)|\times|\mathcal{N}(i)|\) diagonal matrix of cotangent weights, \(\widetilde{\mathbf{D}}_{i}\) and \(\mathbf{D}_{i}\) are \(3\times|\mathcal{N}(i)|\) matrices of "spokes and rims" edge vectors of the \(i\)-th vertex at the rest and deformed states respectively. \(\|\mathbf{X}\|_{\mathbf{W}_{i}}^{2}\) denotes \(\operatorname{Tr}(\mathbf{X}^{\top}\mathbf{W}_{i}\mathbf{X})\). Here we use \(\mathbf{R}_{i}\) to denote \(\mathbf{X}_{i}\), since we drive the deformation gradient towards a rotation matrix in ARAP energy.
The previous method (Chen et al., 2017) optimizes the \(\ell_{2,1}\) version of Eq. 4 in a less efficient way. Their local step optimizes over per-vertex rotations \(\mathbf{R}_{i}\) and their global step minimizes over vertex positions \(\mathbf{V}\) using a two-block ADMM scheme. This leads to an expensive optimization with a full ADMM optimization in _each_ global step, making their method too slow for interactive usage. In contrast, applying our new three-block ADMM scheme to the local ARAP energy results in a much more efficient solver, which is one ADMM optimization itself (see the inset, where the blue regions denote an ADMM optimization). We further show the pseudocode of our three-block ADMM for the local ARAP energy in Suppl. Alg. 1.
More concretely, by setting \(\mathbf{Z}_{i}=\mathbf{V}_{i}-\widetilde{\mathbf{V}}_{i}\), we can further rewrite Eq. 4 as
\[\begin{split}\underset{\mathbf{V},\{\mathbf{R}_{i}\},\mathbf{Z}}{\text{minimize}}\quad&\sum_{i\in V}\;\frac{1}{2}\|\mathbf{R}_{i}\mathbf{D}_{i}-\widetilde{\mathbf{D}}_{i}\|_{\mathbf{W}_{i}}^{2}+wa_{i}\|\mathbf{Z}_{i}\|_{\text{SC-L1}}\\ \text{s.t.}\quad&\mathbf{Z}_{i}=\mathbf{V}_{i}-\widetilde{\mathbf{V}}_{i},\quad\forall i.\end{split}\tag{5}\]
The above minimization problem can be solved efficiently using the following ADMM update steps:
\[\begin{aligned}\mathbf{R}_{i}^{k+1}&\leftarrow\underset{\mathbf{R}_{i}\in\mathrm{SO}(d)}{\arg\min}\;\frac{1}{2}\|\mathbf{R}_{i}\mathbf{D}_{i}-\widetilde{\mathbf{D}}_{i}\|_{\mathbf{W}_{i}}^{2}&&\text{(6a)}\\ \mathbf{Z}_{i}^{k+1}&\leftarrow\underset{\mathbf{Z}_{i}}{\arg\min}\;wa_{i}\|\mathbf{Z}_{i}\|_{\text{SC-L1}}+\frac{\rho}{2}\|\mathbf{V}_{i}^{k+1}-\widetilde{\mathbf{V}}_{i}-\mathbf{Z}_{i}+\mathbf{U}_{i}^{k}\|_{2}^{2}&&\text{(6b)}\\ \mathbf{V}^{k+1}&\leftarrow\underset{\mathbf{V}}{\arg\min}\;\frac{1}{2}\operatorname{Tr}\!\left(\mathbf{V}^{\top}\mathbf{L}\mathbf{V}\right)-\operatorname{Tr}\!\left(\mathbf{B}^{\top}\mathbf{V}\right)+\frac{\rho}{2}\|\mathbf{V}-\widetilde{\mathbf{V}}-\mathbf{Z}^{k}+\mathbf{U}^{k}\|_{2}^{2}&&\text{(6c)}\\ \mathbf{U}_{i}^{k+1}&\leftarrow\mathbf{U}_{i}^{k}+\mathbf{V}_{i}^{k+1}-\widetilde{\mathbf{V}}_{i}-\mathbf{Z}_{i}^{k+1}&&\text{(6d)}\end{aligned}\]
Here \(\rho\) is a fixed penalty parameter. For a detailed derivation of the ADMM update, please see Sec. 2 of the supplementary material.
The various steps in this ADMM-based algorithm are computed as follows:
For **updating**\(\mathbf{R}_{i}\), local step 1 (Eq. 6a) is an instance of the Orthogonal Procrustes problem, which can be solved in the same way as the rotation fitting step in (Sorkine and Alexa, 2007). The optimal \(\mathbf{R}_{i}\) can be computed as \(\mathbf{R}_{i}^{k+1}\leftarrow\mathcal{V}_{i}\mathcal{U}_{i}^{\top}\) from the singular value decomposition of \(\mathbf{M}_{i}=\mathcal{U}_{i}\Sigma_{i}\mathcal{V}_{i}^{\top}\), where \(\mathbf{M}_{i}=\mathbf{D}_{i}\widetilde{\mathbf{D}}_{i}^{\top}\).
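A minimal sketch of this rotation fit follows; the reflection guard is a standard safeguard we add for robustness, not something stated above.

```python
import numpy as np

def fit_rotation(D, D_rest):
    """Local step 1 (Eq. 6a): best-fitting rotation via the Orthogonal Procrustes problem.
    D, D_rest are d x |N(i)| matrices of deformed and rest 'spokes and rims' edge vectors."""
    M = D @ D_rest.T                  # M_i = D_i * D~_i^T, as in the text above
    U, _, Vt = np.linalg.svd(M)
    R = Vt.T @ U.T                    # R_i = V_i U_i^T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R
```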
For **updating**\(\mathbf{Z}_{i}\), following the derivation of the proximal operator of SC-L1 loss in Sec. 1 of the supplementary material, our local step 2 (Eq. 6b) is solved using a SC-L1 loss-specific shrinkage step:
\[\mathbf{Z}_{i}^{k+1}\leftarrow\mathcal{S}_{wa_{i}}\!\left(\mathbf{V}_{i}-\widetilde{\mathbf{V}}_{i}+\mathbf{U}_{i}\right)\tag{7}\]
\[\mathcal{S}_{wa_{i}}(\mathbf{x})=\begin{cases}\left(\dfrac{\rho s-wa_{i}\,s/\|\mathbf{x}\|_{2}}{\rho s-wa_{i}}\right)_{\!+}\mathbf{x},&\text{if }\|\mathbf{x}\|_{2}\leq s\\ \mathbf{x},&\text{otherwise}\end{cases}\tag{8}\]
To avoid local minima in the shrinkage step, this assumes \(\rho\) is set to satisfy \(\rho>\frac{\max\left(wa_{i}\right)}{s}\) (see Sec. 1 of the supplementary material).
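A sketch of this shrinkage step, assuming \(\rho>\max(wa_{i})/s\) as required above; the small epsilon guarding against a zero norm is an implementation detail we add.

```python
import numpy as np

def sc_l1_shrinkage(x, w_ai, rho, s, eps=1e-12):
    """Proximal (shrinkage) operator of the SC-L1 term, Eq. (8)."""
    n = np.linalg.norm(x)
    if n <= s:
        scale = (rho * s - w_ai * s / max(n, eps)) / (rho * s - w_ai)
        return max(scale, 0.0) * x    # the (.)_+ clamp
    return x                           # no shrinkage beyond the threshold s
```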
For **updating**\(\mathbf{V}\), the global step (Eq. 6c) can be achieved by solving a linear system:
\[(\mathbf{L}+\rho\mathbf{I})\mathbf{V}=\mathbf{B}+\rho(\widetilde{\mathbf{V}}+\mathbf{Z}^{k}-\mathbf{U}^{k}), \tag{9}\]
where the Laplacian \(\mathbf{L}\) and \(\mathbf{B}\) are defined in the same way as the global step (Eq. 9) in (Sorkine and Alexa, 2007). For fixed \(\rho\) an efficient implementation is obtained by precomputing and storing the Cholesky factorization of \(\mathbf{L}+\rho\mathbf{I}\).
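In code, this precomputation amounts to factorizing \(\mathbf{L}+\rho\mathbf{I}\) once and reusing the factorization every iteration; the sketch below uses SciPy's sparse LU prefactorization as a stand-in for the Cholesky factorization mentioned above.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def prefactor_global(L, rho):
    """Factorize (L + rho I) once; reuse the returned solver in every global step (Eq. 9)."""
    n = L.shape[0]
    return spla.factorized((L + rho * sp.identity(n)).tocsc())

def global_step(solve, B, V_rest, Z, U, rho):
    """Solve (L + rho I) V = B + rho (V_rest + Z - U), one column per coordinate."""
    rhs = B + rho * (V_rest + Z - U)
    return np.column_stack([solve(rhs[:, k]) for k in range(rhs.shape[1])])
```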
### Example II: Local Neo-Hookean Energy
Our local deformation scheme can be further extended to physics-based elasticity energies, e.g., Neo-Hookean energy. Using the Neo-Hookean energy as our elasticity energy and following the framework of (Brown and Narain, 2021), the optimization problem in Eq. 3 can be written as follows:
\[\begin{split}\underset{\mathbf{V},\{\mathbf{X}_{j}\}}{\text{minimize}}\quad&\sum_{j\in T}\underbrace{E_{\text{nh}}(\mathbf{X}_{j})}_{\text{Neo-Hookean}}+\sum_{i\in V}\underbrace{wa_{i}\|\mathbf{V}_{i}-\widetilde{\mathbf{V}}_{i}\|_{\text{SC-L1}}}_{\text{Locality}}\\ \text{s.t.}\quad&\mathbf{X}_{j}=\text{sym}(\mathbf{D}_{j}\mathbf{V}),\;\forall j,\end{split}\]
where \(T\) denotes all the elements.
Similarly, by introducing \(\mathbf{Z}_{i}=\mathbf{V}_{i}-\widetilde{\mathbf{V}}_{i}\), we can minimize our local Neo-Hookean energy using ADMM. For a detailed derivation, please see Sec. 3 of the supplementary material.
The ADMM update (Alg. 1) for the above minimization problem is as follows:
\[\begin{aligned}\mathbf{X}_{j}^{k+1}&\leftarrow\underset{\mathbf{X}_{j}}{\arg\min}\;E_{\text{nh}}(\mathbf{X}_{j})+\frac{\gamma}{2}\|\text{sym}(\mathbf{D}_{j}\mathbf{V})-\mathbf{X}_{j}+\mathbf{W}_{j}\|_{2}^{2}&&\text{(10a)}\\ \mathbf{Z}_{i}^{k+1}&\leftarrow\underset{\mathbf{Z}_{i}}{\arg\min}\;wa_{i}\|\mathbf{Z}_{i}\|_{\text{SC-L1}}+\frac{\rho}{2}\|\mathbf{V}_{i}^{k+1}-\widetilde{\mathbf{V}}_{i}-\mathbf{Z}_{i}+\mathbf{U}_{i}^{k}\|_{2}^{2}&&\text{(10b)}\\ \mathbf{V}^{k+1}&\leftarrow\underset{\mathbf{V}}{\arg\min}\;\sum_{j\in T}\frac{\gamma}{2}\|\text{sym}(\mathbf{D}_{j}\mathbf{V})-\mathbf{X}_{j}+\mathbf{W}_{j}\|_{2}^{2}+\sum_{i\in V}\frac{\rho}{2}\|\mathbf{V}_{i}-\widetilde{\mathbf{V}}_{i}-\mathbf{Z}_{i}^{k}+\mathbf{U}_{i}^{k}\|_{2}^{2}&&\text{(10c)}\\ \mathbf{W}_{j}^{k+1}&\leftarrow\mathbf{W}_{j}^{k}+\text{sym}(\mathbf{D}_{j}\mathbf{V}^{k+1})-\mathbf{X}_{j}^{k+1}&&\text{(10d)}\\ \mathbf{U}_{i}^{k+1}&\leftarrow\mathbf{U}_{i}^{k}+\mathbf{V}_{i}^{k+1}-\widetilde{\mathbf{V}}_{i}-\mathbf{Z}_{i}^{k+1}&&\text{(10e)}\end{aligned}\]
Here \(\rho\) and \(\gamma\) are fixed penalty parameters.
The local step 2 (updating \(\mathbf{Z}_{i}\)) and the global step (updating \(\mathbf{V}\)) are handled in the same way as in the local ARAP case, so here we focus on local step 1 (updating \(\mathbf{X}_{j}\)).
Let us denote the proximal operator of \(E_{\text{nh}}\) and the singular value decomposition of \(\text{sym}(\mathbf{D}_{j}\mathbf{V})+\mathbf{W}_{j}\) as:
\[\text{prox}_{E_{\text{nh}}}(\mathbf{X}_{j})=E_{\text{nh}}(\mathbf{X}_{j})+\frac{\gamma}{2}\|\text{sym}(\mathbf{D}_{j}\mathbf{V})+\mathbf{W}_{j}-\mathbf{X}_{j}\|_{2}^{2}\tag{12}\]
\[\text{sym}(\mathbf{D}_{j}\mathbf{V})+\mathbf{W}_{j}=\mathcal{U}_{j}\Sigma_{j}\mathcal{V}_{j}^{\top}\tag{13}\]
where \(\gamma\) is the augmented Lagrangian parameter for \(\mathbf{X}_{j}\).
As shown by (Brown and Narain, 2021), we can compute the optimal \(\mathbf{X}_{j}\) as:
\[\Sigma_{j}^{k+1}\leftarrow\operatorname*{arg\,min}_{\Sigma_{j}}\;\text{prox}_{E_{\text{nh}}}(\mathcal{U}_{j}\Sigma_{j}\mathcal{V}_{j}^{\top})\tag{14}\]
\[\mathbf{X}_{j}^{k+1}\leftarrow\mathcal{U}_{j}\Sigma_{j}^{k+1}\mathcal{V}_{j}^{\top}\tag{15}\]
Specifically, one can compute the SVD of \(\text{sym}(\mathbf{D}_{j}\mathbf{V})+\mathbf{W}_{j}\) and perform the minimization of \(\text{prox}_{E_{\text{nh}}}\) only on its singular values, while keeping singular vectors unchanged. The above optimization of singular values \(\Sigma_{j}^{k+1}\) can be performed using an L-BFGS solver.
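A sketch of this singular-value minimization is shown below; the particular Neo-Hookean density used here is a common stable variant chosen for illustration and is not necessarily the exact one used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def psi_nh(sigma, mu, lam):
    """A common Neo-Hookean density written in terms of singular values (illustrative choice)."""
    J = np.prod(sigma)
    return 0.5 * mu * (np.sum(sigma ** 2) - sigma.size) - mu * np.log(J) + 0.5 * lam * np.log(J) ** 2

def local_step_X(S_plus_W, gamma, mu=1.0, lam=1.0):
    """Local step 1: SVD of sym(D_j V) + W_j, then minimize the proximal objective over the
    singular values only (Eqs. 12-15), keeping the singular vectors fixed."""
    Uj, sig0, Vjt = np.linalg.svd(S_plus_W)
    obj = lambda sig: psi_nh(sig, mu, lam) + 0.5 * gamma * np.sum((sig - sig0) ** 2)
    res = minimize(obj, sig0, method="L-BFGS-B", bounds=[(1e-8, None)] * sig0.size)
    return Uj @ np.diag(res.x) @ Vjt
```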
### Extension to Other Elastic Energies
Our algorithm easily generalizes across different dimensions and material models. Switching the material model only requires changing the minimization problem in local step 1 (the \(\operatorname{arg\,min}\) over \(\mathbf{X}\)), which can be optimized over the singular values of the symmetric factor \(\mathbf{X}_{i}\) of the deformation gradient.
#### 4.3.1. As-Conformal-As-Possible Energy
For editing tasks where users intend to locally scale the geometry while preserving the texture, it is desirable to preserve angles, i.e., conformality (see Fig. 4 and Fig. 10). We can adapt the ARAP energy to the as-conformal-as-possible (ACAP) energy (Bouaziz et al., 2012) by allowing local scaling:
\[E_{\text{ACAP}}(\mathbf{V})=\sum_{k\in T}\sum_{i,j\in\mathcal{N}(k)}\frac{w_{ij}}{2}\|s_{k}\mathbf{R}_{k}\widetilde{\mathbf{d}}_{ij}-\mathbf{d}_{ij}\|_{2}^{2} \tag{16}\]
where \(s_{k}\) is a scalar controlling the scaling of the local patch and can be computed analytically (see Sec.4 of the supplementary material).
#### 4.3.2. Cloth
Our method also generalizes to higher co-dimensional settings, such as deformable thin sheets and cloth in \(\mathbb{R}^{3}\). We model the cloth deformation using ARAP elasticity (Eq. 4), hard strain limiting, and quadratic bending resistance (Bergou et al., 2006).
#### 4.3.3. 1D Polyline
Our algorithm can be also extended to the local editing of 1D polyline in vector graphics. The deformation of a polyline can be modeled using the ARAP energy (Eq. 4) with uniform weights.
## 5. Results
We evaluate our method by comparing it against existing local deformation tools and showcasing its extension to various elastic energies. All the colormaps in our figures visualize the vertex displacement with respect to the rest shape. The accompanying video also includes several animation examples generated using our local deformation tool.
We implement a 2D version of our method in MATLAB with gptoolbox (Jacobson et al., 2018), and a 3D version in C++ with libigl (Jacobson et al., 2018) based on the WRAPD framework (Brown and Narain, 2021). We also implement the 2D version of our method in C++ for runtime evaluation and comparison. Benchmarks are performed using a MacBook Pro with an Apple M2 processor and 24GB of RAM for 3D and a Windows desktop with an i9-9900K 3.60 GHz CPU for 2D. Table 2 in the supplementary material shows the performance statistics and relevant parameters of all our examples.
_Quality._ We compare our method against other local editing tools, including **i)**\(\ell_{1}\)-based deformation (Chen et al., 2017), **ii)** regularized Kelvinlets (De Goes and James, 2017) and **iii)** biharmonic
Figure 4. Our method automatically choose a natural ROI based on the elastic energy in use. Here we use the same parameter settings for both the local ARAP and local ACAP energies. With the local ACAP energy, we have a smaller ROI than the case of local ARAP energy as the ACAP energy allows for local scaling.
Figure 5. Given the _same_ handle offset magnitude, a “natural” ROI size also depends on the way the handles are moved, which is more complex than simply growing the ROI proportional to the handle displacement. Here we moved the handles (yellow) with the _same_ offset magnitude 1.0 towards the bottom, left and right respectively, resulting different ROIs for the _same_ handle offset magnitude.
coordinates (Wang et al., 2015). Among them, the use of \(\ell_{2,1}\)-norm regularization (Chen et al., 2017) causes artifacts (see Fig. 3-b and Fig. 13-b). The regularized Kelvinlets technique (De Goes and James, 2017) deforms a shape based on Euclidean distances and is thus not shape-aware, creating artifacts when two disjoint parts are close in Euclidean space but far away geodesically (see the blue region in Fig. 3-c and the teeth area in Fig. 13-c). Methods based on biharmonic coordinates, such as (Wang et al., 2015), usually require careful placement of additional fixed control points to pre-determine the ROI. The latter two methods do not minimize any elastic energy in the deformation process, and thus their deformations are more susceptible to shape distortion (see Fig. 13). We additionally compare our method against the _sparse_ deformation method (Gao et al., 2012), which directly introduces a sparsity-induced norm in the ARAP energy. As shown in Fig. 2, the resulting deformation is _sparse_ but not _local_, thus requiring the setup of additional fixed constraints.
In contrast, our method produces the deformation which is local, natural, and shape-aware; it automatically adapts the ROI without the need for careful control primitive setup.
_Efficacy._ We illustrate the ROI adaptation of our method in different situations: It adapts to different energy models--for example, the local ACAP has a smaller ROI than the local ARAP energy as the former allows for local scaling (see Fig. 4). It also adapts to different extents of deformation. As Fig. 8 demonstrates, the ROI gradually increases as the deformation of the bar becomes larger.
One can configure the local deformation style by choosing various elastic energy models. For example, the local Neo-Hookean energy leads to deformation that preserves volume, while the local ARAP energy is volume agnostic (see Fig. 11). The deformation can be further tuned by introducing additional affine constraints--for instance, to enable the character to wave hands (Fig. 12) or the crocodile to open its mouth (Fig. 13)--in a natural way.
_Performance._ In terms of performance, our solver is able to efficiently minimize the energy at interactive rates, while the method of (Chen et al., 2017) is too slow to run in real time. Because their method only supports the 2D ARAP energy, in Table 1 of the supplementary material we evaluate the runtime of our method (using both the SC-L1 loss and \(\ell_{2,1}\) loss) and (Chen et al., 2017) on a 2D local ARAP energy across different mesh resolutions and deformations. Measured with the same convergence threshold, our method runs orders of magnitude faster than (Chen et al., 2017), achieving roughly \(1000\times\) speedup for small deformations and \(100\times\) speedup for large deformations.
_Extensibility._ Our method can easily generalize to other dimensions and material models, such as the ACAP deformation, cloth deformation, and 1D polyline deformation (see Fig. 10 and Fig. 4). The local ACAP energy enables local scaling and better preserves the texture around the deformed region. In Fig. 7, the user can interactively edit a polyline and naturally recover its rest shape, a feature desired by users. In Fig. 6, to deform a cloth in a physically plausible way, the deformation locality is particularly useful, as otherwise a local edit of the cloth may cause a global change leading to unexpected intersections with other objects. Lastly, to demonstrate our method in a more complex scenario, an editing session involving multiple objects and clothes is shown in Fig. 1.
## 6. Conclusion & Future Work
We describe a regularization based on an "SC-L1 loss" which provides an effective and simple-to-implement tool for localizing an elastic-energy-driven deformation to only those regions of a shape being manipulated by a user. The region of influence induced by our method naturally adapts to the geometry of the shape, the size of the deformation, and the elastic energy being used. Furthermore, SC-L1 regularization is generic enough to be applied to a wide range of shapes and elastic energies, including 1D, 2D, 3D and cloth finite element deformation, and is fast enough to be used in real time. Our proposed approach offers several benefits for shape manipulation: It avoids undesired movement in far-off regions of a shape when only one part is being moved by the user, it allows parts of a shape to be deformed with direct manipulation without a pre-rigging step, and it avoids the visual artifacts of previous work.
There remain several issues related to localized shape deformation not addressed by our method. Firstly, our regularization is applied independently per vertex, which makes it difficult to apply to splines, NURBS, or even meshes with highly irregular element sizes, which we mark as an important direction for future work. In addition, since we use an ADMM method in the optimization, our approach suffers from the common shortcomings of applying ADMM to non-convex energies, including lack of convergence guarantees and slow convergence when high precision is required. Exploration of other optimization algorithms alleviating these issues is another useful future direction. Finally, although it is out of scope for our work here, we note in particular the usefulness of incorporating localized elastic energy deformation into sculpting workflows for artists. This involves a number of facets: Choosing the correct elastic energy to achieve an artistic effect, providing an intuitive UI to adjust the scale of the ROI (for instance by adjusting \(w\) and \(s\), see Fig. 9 and the supplementary video), and ensuring that our tool integrates well with other sculpting tools. This is particularly useful when handling large "freeform" deformations, as the elastic energy will tend to fight against such deformations, making other tools more suitable. One simple idea for this is to simply reset the rest shape after each click-and-drag, since each deformation step is then independent of the others, and one could switch between our method and others at each step. We have found this mode of interaction to be useful even when only using our method, as it leads to a simple sculpting-style interface, and we include some examples in the supplementary video.
###### Acknowledgements.
This work is funded in part by National Science Foundation (1910839). We especially thank George E. Brown for sharing the WRAPD implementation and the help with setting up experiments. We thank Jiayi Eris Zhang and Danny Kaufman for sharing the undeformed scene geometry; Lillie Kittredge and Huy Ha for proofreading; Rundi Wu for the help with rendering; all the artists for sharing the 2D and 3D models and anonymous reviewers for their helpful comments and suggestions. |
2308.10846 | Real World Time Series Benchmark Datasets with Distribution Shifts:
Global Crude Oil Price and Volatility | The scarcity of task-labeled time-series benchmarks in the financial domain
hinders progress in continual learning. Addressing this deficit would foster
innovation in this area. Therefore, we present COB, Crude Oil Benchmark
datasets. COB includes 30 years of asset prices that exhibit significant
distribution shifts and optimally generates corresponding task (i.e., regime)
labels based on these distribution shifts for the three most important crude
oils in the world. Our contributions include creating real-world benchmark
datasets by transforming asset price data into volatility proxies, fitting
models using expectation-maximization (EM), generating contextual task labels
that align with real-world events, and providing these labels as well as the
general algorithm to the public. We show that the inclusion of these task
labels universally improves performance on four continual learning algorithms,
some state-of-the-art, over multiple forecasting horizons. We hope these
benchmarks accelerate research in handling distribution shifts in real-world
data, especially due to the global importance of the assets considered. We've
made the (1) raw price data, (2) task labels generated by our approach, (3) and
code for our algorithm available at https://oilpricebenchmarks.github.io. | Pranay Pasula | 2023-08-21T16:44:56Z | http://arxiv.org/abs/2308.10846v1 | # Real World Time Series Benchmark Datasets with Distribution Shifts:
###### Abstract
The scarcity of task-labeled time-series benchmarks in the financial domain hinders progress in continual learning. Addressing this deficit would foster innovation in this area. Therefore, we present **COB**, **C**rude **O**il **B**enchmark datasets. COB includes 30 years of asset prices that exhibit significant distribution shifts and optimally generates corresponding task (i.e., regime) labels based on these distribution shifts for the three most important crude oils in the world. Our contributions include creating real-world benchmark datasets by transforming asset price data into volatility proxies, fitting models using expectation-maximization (EM), generating contextual task labels that align with real-world events, and providing these labels as well as the general algorithm to the public. We show that the inclusion of these task labels universally improves performance on four continual learning algorithms, some state-of-the-art, over multiple forecasting horizons. We hope these benchmarks accelerate research in handling distribution shifts in real-world data, especially due to the global importance of the assets considered. We've made the (1) raw price data, (2) task labels generated by our approach, (3) and code for our algorithm available at oilpricebenchmarks.github.io.
## 1 Introduction
Benchmarks serve as testbeds for artificial intelligence (AI) algorithms and have promoted the rapid acceleration of algorithm performance that has been seen over the past ten years. Notable examples from the beginning of this ten year period include ImageNet [4], COCO [12], CIFAR-10 [15], and the Arcade Learning Environment [1], the first of which is often credited with initiating the ongoing deep learning revolution.
These benchmarks all contain images in the form of raw pixel data, which is an important medium, but more recently, benchmarks have expanded to include different modalities of data that are aimed at evaluating progress in a variety of AI subfields and problem domains.
In this work, we focus on _temporal distribution shifts_ within the financial domain and introduce time-series benchmark datasets of the three primary global crude oil price markers, _West Texas Intermediate (WTI)_, _Brent Blend_, and _Dubai Crude_.
For each dataset we provide over 30 years of data--daily spot prices for WTI and Brent and monthly average prices for Dubai Crude--and see striking distribution shifts throughout this time period for each asset.
Problem settings with time-series or otherwise sequential data often pose serious issues to predictive AI models in several ways that the standard supervised learning setting using (independent and identically distributed) IID non-sequential data does not. Time-series often embodies obstacles, such as
* **Correlatedness**. While training predictive models, series data points that occur near the same time tend to take similar values and have similar targets. This biases the models to this local region of the input and target variable spaces.
* **Non-stationarity**. Detailed in Section 1.1.
* **Missing data**. Subsets of data are often missing from series data. For example, inadequate record keeping may be why the WTI and Brent datasets we introduce are missing 3 and 2 percent of daily spot price data, respectively, but we resample the datasets in a way that results in new series without missing data.
* **Spurious data**. Outliers, erroneous, or infeasible data skew model learning in poor directions.
### Non-Stationarity
Non-stationarity, or distribution shifts, threaten predictive models that have been trained on data and then deployed to make predictions online, or in real-time on different data. As the data distribution shifts further from the data distribution the model was trained on, the worse the model performs. Furthermore, this non-stationarity can manifest in fundamentally different ways.
Let \(x\) be the independent variables, \(y\) be the target variables, and \(P\) be an appropriate probability measure, then we have
the following types of non-stationarity:
* _Covariate shift_: A shift in the distribution of independent variables \(P(X)\).
* _Prior probability shift_: A shift in the distribution of target variables \(P(Y)\).
* _Concept drift_: A shift in the distribution of the relationship between the independent and target variables \(P(Y\mid X)\).
### Out-of-Distribution Detection
It's crucial for online algorithms to quickly identify distribution shifts because predictive models perform worse as the data they are predicting on deviates further from the data that they were trained on. _But how does an algorithm decide that a distribution shift has occurred?_ There's a trade-off between speed and confidence in deciding whether a recent sequence of data includes a distribution shift.
### Continual (Lifelong) Learning
A rapidly growing field of artificial intelligence that often handles data online is _continual learning_, which is sometimes used interchangeably with _lifelong learning_ or _incremental learning_. Continual learning algorithms aim to develop models that accrue useful knowledge over time to accomplish some set of tasks, or equivalently, perform well on data distributions corresponding to some set of underlying tasks while avoiding _negative interference_ or _catastrophic forgetting_.
We refer to the typical continual learning problem setting as one in which the tasks are known, regime shifts happen instantaneously during a discrete timestep, and both the regime-generated data and the regime labels are given to the model at any time \(t\).
### Our Contribution
Our main contribution is the creation of three new real-world benchmark datasets on prices of WTI, Brent, and Dubai crude oil, which are of critical importance worldwide (Backus and Crucini, 2000; Kilian and Park, 2009).
We have
1. **Transformed** the data into a proxy for volatility,
2. **Fitted** the data into tasks using expectation-maximization (EM), a well-established algorithm, to optimize for measures of information criteria,
3. **Provided** contextual labels of major real-world events that align with price and regime shifts,
4. **Demonstrated** that our algorithmically task-labeled dataset generated tasks that align with recessions and other major real-world events.
We hope that this work will accelerate the development of continual learning, out-of-distribution detection, and other algorithms in handling issues posed by real-world data that contains distribution shifts. Furthermore, we believe that the societal importance of the assets represented by these datasets makes these benchmarks especially appealing.
## 2 Datasets
A description of WTI, Brent, and Dubai Crude oils can be found in Section 4.
The raw data for WTI and Brent contains prices at a daily frequency while that for Dubai Crude contains average prices at a monthly frequency. Therefore the resampling process described in Section 3.3 applies only to WTI and Brent.
## 3 Dataset Creation Process
This section elucidates the steps that we have taken to construct the real-world benchmark datasets that we are providing.
### Converting Daily Spot Price to Volatility
**WTI and Brent**
We use the daily spot prices of WTI (U.S. EIA, 2022) and Brent (U.S. EIA, 2022) to derive weekly percent changes, a proxy of volatility, which is important in evaluating _value at risk_ (VaR), a statistic that quantifies the extent of potential financial losses within a position over a certain period of time.
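A minimal sketch of this transformation using pandas is shown below; the file name and column names are placeholders, since the exact layout of the EIA exports may differ.

```python
import pandas as pd

# Hypothetical file and column names; the raw EIA exports may be laid out differently.
prices = pd.read_csv("wti_daily_spot.csv", parse_dates=["date"], index_col="date")

# Resample daily spot prices to a weekly series (last available price of each week)
# and convert to percent changes, the volatility proxy used for WTI and Brent.
weekly_pct_change = (
    prices["spot_price"]
    .resample("W")
    .last()
    .pct_change()
    .dropna()
    .mul(100.0)   # express as percent
)
```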
**Dubai Crude**
We use the monthly average prices of Dubai Crude (IMF, 2022) to derive monthly percent changes, another proxy of volatility, but one that is attenuated by the averaging over the monthly prices, as seen by comparing Figure 3 against Figure 1 and Figure 2.
**Heterogeneous Dataset Difficulty**
The difference in pricing frequency between the first two, WTI and Brent, and the last, Dubai, allows us to create benchmark datasets with different difficulties. Since Dubai Crude prices are monthly averages, much of the pricing spread is smoothed out relative to what it would be if daily spot prices were used. Unfortunately, the most reliable and granular Dubai Crude prices with a history comparable in length to the WTI and Brent prices we use are the monthly average prices we source from (IMF, 2022).
However, we see this as an opportunity to present benchmark datasets with varying degrees of difficulty on various tasks, most obviously to make predictions on, as target variables with lower spread are easier for predictive models to handle.
### Selecting Model Class Used to Create Tasks
**Markov Regime Switching Models**
Markov switching models are used on sequential data that is expected to transition through a finite set of latent states, or tasks. These states can and often do differ significantly, as evidenced in Figure 1, Figure 2, and Figure 3.
[Kim _et al._, 1998] describes a Markov switching model that is particularly well suited to handling the heteroskedasticity and mean reversion properties of the raw data. The algorithm used in (Kim _et al._, 1998) has additional nice properties, such as the use of expectation-maximization (EM), a form of (local) _maximum a posteriori_ (MAP) estimation.
Therefore, we use (Kim _et al._, 1998) to construct the benchmark datasets and describe these steps in Section 3.3.
### Creating Tasks Through Fitting the Model Class
**Appropriate Resampling**
The raw WTI and Brent spot price data was captured daily, and the day-to-day differences oscillated between positive and negative values so frequently that the method we used to generate tasks [Kim _et al._, 1998] switched between tasks too often for the results to be realistic. Resampling the data to every two days before using [Kim _et al._, 1998] didn't resolve the issue, as evidenced by Figure 5 in Appendix A.
We found that resampling weekly was an excellent choice, allowing [Kim _et al._, 1998] to generate tasks that were reasonable and that yielded well-optimized measures of likelihood and information criteria.
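A sketch of this step using statsmodels' Markov-switching regression with regime-dependent variance is shown below; this is an analogue of the model in [Kim _et al._, 1998] rather than the exact specification used to build the benchmarks, and `weekly_pct_change` refers to the weekly series constructed in the earlier sketch.

```python
import statsmodels.api as sm

# Fit a 3-regime Markov switching model with a switching mean and variance to the
# weekly percent-change series; statsmodels estimates it by maximum likelihood
# using the Hamilton filter and Kim smoother.
model = sm.tsa.MarkovRegression(
    weekly_pct_change,
    k_regimes=3,
    trend="c",
    switching_variance=True,
)
result = model.fit()

# Smoothed regime probabilities, one column per regime, for every timestep.
smoothed_probs = result.smoothed_marginal_probabilities
print(result.summary())
```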
#### Evaluating Stationarity
Since we're putting forth the WTI, Brent, and Dubai crude oil datasets as benchmark datasets, we want them to be amenable to several types of evaluations. The most common evaluation on these series data is statistical inference, specifically forecasting future asset prices. [Nagabandi _et al._, 2018; He _et al._, 2019; Caccia _et al._, 2020; He and Sick, 2021; Pasula _et al._, 2023] have shown that the provision of task, or context, information to predictive models boosts their ability to accurately predict future values on a variety of problem domains with challenging properties.
Therefore, it's important to ensure that the time-series benchmark datasets we're providing are useful for evaluating tasks such as forecasting or statistical inference. A critical step towards this is verifying that these series are not characterized by unit-root processes, which can lead to issues in statistical inference on this data [Phillips and Perron, 1988].
To evaluate whether unit roots are present in any of the benchmark datasets, we perform an augmented Dickey-Fuller (ADF) test on each of the datasets. The ADF tests on the datasets reject the null hypotheses that unit roots are present in the data with all p-values \(<1\mathrm{e}{-23}\).
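A sketch of the unit-root check with statsmodels is shown below; the series variable names are placeholders for the three volatility-proxy series.

```python
from statsmodels.tsa.stattools import adfuller

# Null hypothesis of the ADF test: a unit root is present in the series.
series_by_name = {"WTI": wti_series, "Brent": brent_series, "Dubai": dubai_series}
for name, series in series_by_name.items():
    adf_stat, p_value, *_ = adfuller(series)
    print(f"{name}: ADF statistic = {adf_stat:.2f}, p-value = {p_value:.2e}")
```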
#### Choosing the Number of Tasks
The problem of choosing the number of tasks \(k\) is one of choosing the model itself: once \(k\) is specified, [Kim _et al._, 1998] uses EM to fit the data to a partition of \(k\) tasks. A rigorous way to guide model selection, and hence \(k\), is to use statistical measures of fit known as information criteria [Stoica and Selen, 2004]. The Akaike, Bayesian, and Hannan-Quinn information criteria (AIC, BIC, HQIC) are three of the most commonly used measures. Each penalizes model deviation in the same way, but they differ in the degree to which they penalize the number of model parameters and the loss of degrees of freedom.
We choose \(k\) so that \(\text{AIC}+\text{BIC}+\text{HQIC}\) is minimized under the constraint that each task is present for at least one timestep for every dataset. This results in \(k=3\). We provide more information on our decision process for this in Appendix B.
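The selection rule can be sketched as the following loop; it assumes the statsmodels results object exposes `aic`, `bic`, and `hqic` as other statsmodels estimators do, and it is not the authors' exact code.

```python
import numpy as np
import statsmodels.api as sm

def select_num_tasks(series, candidate_ks=(2, 3, 4, 5)):
    """Choose k minimizing AIC + BIC + HQIC, requiring every task to appear at least once."""
    best_k, best_score = None, np.inf
    for k in candidate_ks:
        result = sm.tsa.MarkovRegression(
            series, k_regimes=k, trend="c", switching_variance=True
        ).fit()
        hard_labels = np.asarray(result.smoothed_marginal_probabilities).argmax(axis=1)
        if len(np.unique(hard_labels)) < k:   # some regime never selected
            continue
        score = result.aic + result.bic + result.hqic
        if score < best_score:
            best_k, best_score = k, score
    return best_k
```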
#### Smoothing probabilities
At any time \(T\) with \(t<T\), [Kim _et al._, 1998] uses an algorithm introduced in [Kim, 1994] to smooth the conditional probabilities \(P(\mathcal{T}_{t}=i\mid Y_{T})\) for every \(t\) based on all observations available up through time \(T\).
#### Assigning Tasks
For every timestep \(t\), we assign task \(\mathcal{T}_{t}\) to the argmax of the smoothed probabilities at \(t\) over all tasks \(\{1,2,\ldots,k\}\).
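In code, this final labeling step is a single argmax over the smoothed probabilities, reusing the fitted `result` from the earlier sketch.

```python
import numpy as np

# Rows are timesteps, columns are tasks; take the most probable task per timestep.
task_labels = np.asarray(result.smoothed_marginal_probabilities).argmax(axis=1)
```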
## 4 Datasets
### West Texas Intermediate Crude Oil Daily Spot Price
A light (low density) and sweet (low sulfur content) crude oil that is ideal for gasoline refining. This oil is extracted from land wells in the United States and is transported to a location in the center of the US, Cushing, Oklahoma. The cost required to transport WTI from its land-locked location to overseas areas of the globe is a major drawback to this otherwise high-quality crude oil and contributes to the positive Brent-WTI spread. WTI is the main benchmark for oil that is used within the United States. We depict our benchmark dataset creation algorithm for WTI in Figure 1.
### Brent Crude Oil Daily Spot Price
The most popular contracted oil, making up approximately two-thirds of all global crude contracts, "Brent" is a blend of oil from four fields in the North Sea and, like its WTI counterpart, is light and sweet. Because it can be distilled into high-value products, such as gasoline and diesel fuel, and due to it being sea-sourced and thus having relatively low transportation costs globally, it has a far greater global reach and generally higher value than WTI. We depict our benchmark dataset creation algorithm for Brent in Figure 2.
### Dubai Crude Oil Monthly Average Price
Also known as _Fateh_, Dubai Crude is the least popular of the three primary global crude oil price benchmarks, but it is used as such because it is one of the few crude oils originating from the Persian Gulf that is available immediately. Dubai Crude has medium density and higher sulfur content than WTI and Brent but is priced comparably to both because of its central role in pricing crude oil exports from the Persian Gulf to Asian-Pacific markets. We depict our benchmark dataset creation algorithm for Dubai Crude in Figure 3.
## 5 Experimental Design and Results
We use the regime, or task, labels derived from our algorithm as context labels for four continual learning algorithms: MOLe Nagabandi _et al._[2018], MoB Pasula _et al._[2023], MAML \(k\)-shot, and MAML-continuous. We evaluate on these algorithms because (1) each uses meta-learned priors \(\theta^{*}\) for few-shot adaptation to distribution shifts and (2) they cover a comprehensive set of continual learning algorithm _classes_: a selector of expert models, a weighted mixture of expert and constructive non-expert models, a few-shot model that adapts from a history of \(k\) data points, and a few-shot model that adapts continuously from each new data point, respectively.
The first two, MOLe Nagabandi _et al._[2018] and MoB Pasula _et al._[2023], are modular algorithms, instantiating models from meta-learned priors when a new task is detected. In comparison to MOLe, a state-of-the-art approach for its time, MoB supersedes MOLe by both attaining better overall performance and requiring fewer additional models to do so on multiple diverse domains while controlling for model architecture.
The latter two, MAML \(k\)-shot and MAML-continuous, use a single model adapted from a meta-learned prior. MAML
Figure 1: Gray areas are periods that the NBER has classified as a recession. _(Top)_ WTI daily spot price. _(Middle top)_ Excess returns, a proxy of volatility and variance. _(Middle bottom)_ Fitted model regime probabilities over time. _(Bottom)_ Task labels for each point in time, computed via argmax probabilities.
Figure 2: _(Top)_ Brent crude oil daily spot price per barrel. _(Middle top)_ Percent change over the weekly end price, a proxy of volatility and variance. _(Middle bottom)_ Fitted model regime probabilities over time. _(Bottom)_ Regime labels for each point in time, computed via argmax probabilities.
\(k\)-shot adapts using the \(k\) latest data points, and MAML-continuous updates the model at each time step using the most recent observation. We chose \(k=13\) for WTI and Brent and \(k=3\) for Dubai Crude based on prior experience with similar time-series data and to align with the notion of _quarterly_ duration, which is of key importance in financial domains.
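For concreteness, a minimal sketch of the few-shot adaptation step shared by the MAML variants is given below; it is not the implementation used in the experiments, and `model_fn` is a hypothetical function that evaluates the forecaster with a given parameter list.

```python
import torch

def adapt_from_prior(meta_params, model_fn, x_recent, y_recent, lr=1e-2, steps=5):
    """Take a few gradient steps from a meta-learned prior on recent observations.

    MAML k-shot would pass the k latest (x, y) pairs; MAML-continuous would call
    this at every timestep with only the most recent observation.
    """
    params = [p.detach().clone().requires_grad_(True) for p in meta_params]
    optimizer = torch.optim.SGD(params, lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = torch.nn.functional.mse_loss(model_fn(params, x_recent), y_recent)
        loss.backward()
        optimizer.step()
    return params  # adapted copy; the meta-learned prior itself is unchanged
```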
We evaluate the usefulness of our contextual task labels by comparing the percent change in forecasting accuracy over financially and seasonally meaningful time-horizons with and without the inclusion of these task labels as model inputs. Since the original Dubai Crude time-series were average monthly prices, the horizons for this and for WTI and Brent differ. Results of the improvements after including the task labels are shown in Figure 4.
By including our EM-based algorithm's automatically generated contextual task labels, we see universal improvement over each benchmark, time-horizon, and model considered. Since the raw data for Dubai Crude have already been averaged over month-long periods, we expect the performance changes on this benchmark to be relatively small compared to those of WTI and Brent, which is indeed what we see in Figure 4.
## 6 Limitations
While our work on time-series benchmarks in the financial domain provides valuable contributions, there are certain limitations to be acknowledged. These limitations include:
_Domain Specificity_: The benchmark datasets we introduce focus solely on the financial domain, specifically crude oil prices. While this is an important domain, the applicability of our benchmarks to other domains may be limited.
_Dataset Granularity_: The datasets we provide have varying levels of granularity, with daily spot prices for WTI and Brent and monthly average prices for Dubai Crude. This difference in frequency may impact the generalizability of our benchmarks across different time-series analysis tasks.
_External Factors_: Our benchmarks consider major real-world events and contextual labels aligned with price and regime shifts. However, there may be additional external factors and events that could impact the performance of AI algorithms on the datasets, which are not explicitly captured in our work.
Addressing these limitations and expanding the scope of our benchmarks to encompass diverse domains and data granularities would be valuable directions for future research.
## 7 Discussion
In conclusion, we have introduced three novel time-series benchmark datasets that encompass the global crude oil market. These datasets were meticulously processed to fit into tasks using an expectation-maximization algorithm, providing valuable insight into temporal distribution shifts and their significant impact on predictive models. The benchmarks come equipped with labels aligning with significant real-world events, creating a rich context for further exploration and understanding.
Furthermore, we empirically show that including our algorithm's automatically generated task labels as model inputs
Figure 3: _(Top)_ Dubai crude oil monthly average price per barrel. _(Middle top)_ Percent change over the _monthly average price_, a proxy of volatility and variance, but not as granular as the weekly end price that the WTI and Brent datasets use. _(Middle bottom)_ Fitted model regime probabilities over time. _(Bottom)_ Regime labels for each point in time, computed via argmax probabilities.
universally improves performance over all benchmarks, time-horizons, and models considered.
Our goal is to fuel advancements in areas such as continual learning and out-of-distribution detection, while addressing challenges posed by sequential data like non-stationarity and correlatedness. We believe that these benchmarks, due to the societal importance of the assets they represent, can make a substantial contribution to AI research, particularly in handling real-world data that contains distribution shifts. As AI continues to permeate every sector, the importance of such representative, real-world benchmarks will continue to grow.
## Acknowledgments
We would like to extend our sincerest gratitude to the IJCAI 2023 AI4TS Organizers and Program Committee. Their insightful reviews, constructive feedback, and recognition of the importance of this work have been instrumental in shaping the final manuscript. Their dedication to fostering innovation and excellence in AI is truly commendable, and we're honored that they have awarded this work in recognition of promoting these ideals.
## Ethical Statement
There are no ethical issues.
|
2308.01793 | Spectrum-to-position mapping via programmable spatial dispersion
implemented in an optical quantum memory | Spectro-temporal processing is essential in reaching ultimate per-photon
information capacity in optical communication and metrology. In contrast to the
spatial domain, complex multimode processing in the time-frequency domain is
however challenging. Here we propose a protocol for spectrum-to-position
conversion using spatial spin wave modulation technique in gradient echo
quantum memory. This way we link the two domains and allow the processing to be
performed purely on the spatial modes using conventional optics. We present the
characterization of our interface as well as the frequency estimation
uncertainty discussion including the comparison with Cram\'er-Rao bound. The
experimental results are backed up by numerical numerical simulations. The
measurements were performed on a single-photon level demonstrating low added
noise and proving applicability in a photon-starved regime. Our results hold
prospects for ultra-precise spectroscopy and present an opportunity to enhance
many protocols in quantum and classical communication, sensing, and computing. | Marcin Jastrzębski, Stanisław Kurzyna, Bartosz Niewelt, Mateusz Mazelanik, Wojciech Wasilewski, Michał Parniak | 2023-08-03T14:41:44Z | http://arxiv.org/abs/2308.01793v2 | Spectrum-to-position mapping via programmable spatial dispersion implemented in an optical quantum memory
###### Abstract
Spectro-temporal processing is essential in reaching ultimate per-photon information capacity in optical communication and metrology. In contrast to the spatial domain, complex multimode processing in the time-frequency domain is however challenging. Here we propose a protocol for spectrum-to-position conversion using spatial spin wave modulation technique in gradient echo quantum memory. This way we link the two domains and allow the processing to be performed purely on the spatial modes using conventional optics. We present the characterization of our interface as well as the frequency estimation uncertainty discussion including the comparison with Cramer-Rao bound. The experimental results are backed up by numerical simulations. The measurements were performed on a single-photon level demonstrating low added noise and proving applicability in a photon-starved regime. Our results hold prospects for ultra-precise spectroscopy and present an opportunity to enhance many protocols in quantum and classical communication, sensing, and computing.
+
Footnote †: Equal contributions
## I Introduction
Encoding information in many degrees of freedom of light such as polarization [1; 2], angular momentum [3; 4] or temporal [5; 6] and spatial modes [7; 8] is crucial in quantum and classical optics [9], especially in optical communication [10; 11; 12] and metrology [13]. Spectral bins [14; 15] or other kinds of temporal modes [16; 17] may be used to encode qubits or high-dimensional states, and are an important tool for quantum information processing [18; 19; 20; 21]. In optical communication, clever transformation of many temporal or spectral modes at the receiver site allows reaching the ultimate limits in channel capacity [22; 23; 24]. In metrology, such spectro-temporal processing enables optimal detection, extracting all the information from detected photons, manifested as saturating the Quantum Cramer-Rao bound. Implementing the desired spectro-temporal operations on many modes is however challenging, as in general it requires a multi-stage setup of stacked electro-optical modulators and dispersive elements [25; 26; 27]. On the other hand, in the spatial domain many of the transformations can be realized by simple optical elements such as lenses and beamsplitters interleaved with free space. Hence, linking the two domains seems advantageous and may extend the set of currently available spectro-temporal manipulations. One way to create an interface between the spectrum of light and position is to use dispersive elements, such as diffraction gratings [28; 29]. However, these do not provide a proper spectrum-to-position mapping, as the information about the frequency of the signal is conserved, and they are thus not convenient for quantum and classical information processing. In particular, the spectral components of the signal separated into spatial modes will not be able to interfere. The diffraction grating thus cannot be used, for instance, to convert frequency-bin qubits into dual-rail spatial-mode qubits. Moreover, diffraction-grating spectrometers are mainly limited by their modest resolution, which for very precise detection requires large gratings.
In recent years, it was shown that dispersion in a medium can be controlled via the electromagnetic field [30], especially in resonant atomic media [31; 32; 33]. The large dispersion that can be introduced in atoms may make it possible to outperform the resolution of diffraction-grating spectrometers. A novel method with a so-called adaptive prism [34; 35] provided ultra-high dispersion allowing for resolving spectral components of light with high precision.
Here we present a new approach to tackle this problem by utilizing an optical gradient echo quantum memory [36] based on cold rubidium atoms along with the spatial spin-wave modulation technique [37; 38]. A quantum memory may also be employed as a useful and feasible interface connecting the angle of incidence with the read-out light propagation direction [39]. With recent advances in the field of single-photon-level spatial imaging [40; 41], we were able to create an interface between the spectral components of light and its spatial degree of freedom, allowing for spectrum-to-position conversion and enabling ultra-high resolution spectrometry as well as spectro-spatial quantum information encoding.
## II Idea
The presented method is based on spectrum to position mapping in gradient echo quantum memory. The spectrum to direction interface is implemented in three steps as sketched in Fig. 1. First, the frequencies of the optical signal are mapped onto spatially separate portions of the atomic cloud. Next, a prism-like phase modulation is applied to the atomic coherence to prime those portions to emit into distinct directions. Finally, the coherence is mapped back to light.
We employ gradient echo quantum memory (GEM) protocol [42] built around three atomic levels \(\left|g\right\rangle,\left|h\right\rangle\) and \(\left|e\right\rangle\) in a
\(\Lambda\) type system presented in Fig. 1(d). The interface between light and atoms in this setup allows us to map the optical signal \(\mathcal{E}_{\text{in}}(x,y,t)=A_{\text{in}}(t)\cdot u(x,y)\exp(-i\omega_{0}t)\) from the entrance plane \(z=-L/2\) onto atomic coherence \(\rho_{gh}\), where \(u(x,y)\) is beam spatial profile and \(A_{\text{in}}(t)\) is a temporal envelope of the amplitude. Due to magnetic field gradient causing Zeeman shifts between energy levels \(|g\rangle\) and \(|h\rangle\), different spectral components of light are stored in different parts of the atomic ensemble along the propagation axis \(z\). The mapping follows the resonance condition:
\[\omega=\beta\cdot z+\omega_{0} \tag{1}\]
where \(\beta\) is the value of the magnetic gradient, \(z\) is the position along the \(z\)-axis and \(\omega_{0}\) is the optical carrier frequency.
The atomic coherence \(\rho_{gh}^{(i)}(x,y,z)\) stored in the quantum memory can be approximated as:
\[\rho_{gh}^{(i)}(x,y,z)\approx\alpha n(x,y,z)\tilde{A}_{\text{in}}(\beta z)\exp (i\beta zT) \tag{2}\]
where \(n(x,y,z)\) is atomic cloud density spatial profile, \(\alpha\) is a constant corresponding to coupling beam amplitude, \(\tilde{A}_{\text{in}}(\omega)\) is Fourier transform of the input signal temporal envelope \(A_{\text{in}}(t)\) and \(T\) is the storage duration. The signal is mapped onto the entire cloud since beam spatial profile \(u(x,y)\) is broader than cloud spatial profile \(n(x,y,z)\).
To redirect various frequencies into different directions we imprint a phase modulation \(\phi(x,z)=\kappa xz/L\) onto the stored atomic coherence \(\rho_{gh}^{(i)}(x,y,z)\). Thus the atomic coherence is transformed \(\rho_{gh}^{(m)}(x,y,z)=\rho_{gh}^{(i)}(x,y,z)\exp(i\kappa xz/L)\). Crucially, this modulation represents a shift of the \(k_{x}\) component of the wavevector by \(\kappa z/L\). Since each position \(z\) represents a certain frequency component as dictated by Eq. 1, the shift can be written as
\[k_{x}(\omega)=(\omega-\omega_{0})\frac{\kappa}{\beta L} \tag{3}\]
Since the final far-field picture of the readout will derive from momentum distributions, let us Fourier transform the coherence along the \(x\) and \(y\) axes. Assuming the atomic cloud spatial profile has the same cross section \(n_{\perp}(x,y)\) at every \(z\), i.e. \(n(x,y,z)=n_{\perp}(x,y)n_{z}(z)\), we obtain:
\[\tilde{\rho}_{gh}^{(m)}(k_{x},k_{y},z)\approx\tilde{n}_{\perp}(k_{x},k_{y})*\tilde{A}_{\text{in}}\left[\omega\left(k_{x}\right)\right]\cdot n_{z}(z)\exp(i\beta zT) \tag{4}\]
where \(*\) denotes convolution and \(\omega(k_{x})=\omega_{0}+k_{x}L\beta/\kappa\) is obtained by inverting Eq. 3.
After the spatial phase modulation, we flip the magnetic gradient \(\beta\rightarrow-\beta\) to gradually unwind the GEM longitudinal phase \(\exp(i\beta zT)\) of the atomic coherence. After this step \(\rho^{(f)}=\rho^{(m)}\exp(-i\beta zT)\).
Finally we illuminate the atoms with the coupling beam to perform the read-out. The electric field at the readout has a direction-dependent amplitude \(\tilde{A}_{\text{out}}(k_{x},k_{y})\) which is a sum of contributions from slices of the atomic cloud along the propagation of the beam: \(\tilde{A}_{\text{out}}(k_{x},k_{y})=\int\tilde{\rho}_{gh}^{(f)}(k_{x},k_{y},z)\,\mathrm{d}z\), where \(\tilde{\rho}\) and \(\tilde{A}_{\text{out}}\) denote the Fourier transforms along \(x\) and \(y\) of the respective fields.
For the coupling laser propagating along the \(z\) axis, the momentum conservation dictates that the transverse wavevector \(k_{x}(\omega)\) and \(k_{y}\approx 0\) will be directly transferred from atomic coherence to the emitted photons wavevector. It follows that the read-out signal's emission angle \(\theta(\omega)=k_{x}(\omega)/k_{0}\) is proportional to the frequency of the incoming light.
\[\theta(\omega)=\frac{\kappa}{L\beta k_{0}}(\omega-\omega_{0}) \tag{5}\]
The required phase modulation \(\phi(x,z)=\kappa xz/L\) is accomplished by illuminating the atoms with shaped, strong off-resonant light. The intensity pattern \(I(x,y)\) is produced modulo \(I_{2\pi}\), where \(I_{2\pi}\) is the intensity of the ac-Stark beam for which the phase of the atomic coherence is changed by \(2\pi\); only the acquired phase is relevant for the experiment, and higher intensity leads to decoherence.
By considering the relation from Eq. (5) we can see that a higher magnetic field gradient \(\beta\) allows for denser storage of the impulses in the cloud, broadening the bandwidth of the converter but consequently diminishing its resolution.
The bandwidth of the presented converter is fundamentally limited by the energy difference between the two hyperfine ground states \(|h\rangle\) and \(|g\rangle\), which is equal to \(2\pi\times 6.8\,\text{GHz}\).
Another limiting factor is the GEM storage efficiency, which is equal to \(\eta=1-\exp\left(-2\pi\,\text{OD}\,\Gamma/\text{B}\right)\)[43], where \(\Gamma\) is the decoherence rate caused by the coupling beam, B is the memory bandwidth and OD is the optical density of the atomic ensemble.
## III Experiment
The experiment is based on a GEM built on rubidium-87 atoms trapped in a magneto-optical trap (MOT).
Figure 1: Main steps of the experiment: (a) Different frequencies of the incoming signal (blue and yellow) are stored in separate parts of the cold atomic cloud (gray) due to the magnetic field gradient. Atomic polarization wavefronts are presented as white disks; the corresponding wave vectors are displayed below the cloud. (b) Spatially shaped off-resonant illumination induces a phase modulation (violet) and causes the wavefronts to tilt proportionally to their positions along the z axis. (c) During the retrieval, components of the stored signal are emitted into different directions. The magnetic gradient is turned off at this stage, causing all components to be emitted with the same frequency. (d) Relevant \({}^{87}\)Rb energy levels. (e) Experimental sequence. SSM, HP and ZP are respectively spatial spin-wave modulation, hyperfine pumping and Zeeman pumping.
The trapping and experiments are performed in a sequence lasting \(12\,\mathrm{ms}\), which is synchronized with the power line frequency. The experimental sequence is presented in Fig. 1(e). The atoms form an elongated, cigar-shaped cloud with an optical density reaching \(60\). The ensemble temperature is \(50\,\mathrm{\SIUnitSymbolMicro K}\). After the cooling and trapping procedure the atoms are optically pumped to the state \(\ket{g}\coloneqq 5^{2}S_{1/2}\,F=2,m_{F}=2\). We utilize the \(\Lambda\) system depicted in Fig. 1(d) to couple the light and atomic coherence. The signal laser with \(\sigma^{-}\) polarization is red detuned by \(2\pi\times 60\,\mathrm{MHz}\) from the \(\ket{g}\rightarrow\ket{e}\coloneqq 5^{2}P_{1/2}\,F=1,m_{F}=1\) transition. The coupling laser with \(\sigma^{+}\) polarization is tuned to the resonance for the \(\ket{e}\rightarrow\ket{h}\coloneqq 5^{2}S_{1/2}\,F=1,m_{F}=0\) transition, enabling the two-photon transition and inducing atomic coherence between the \(\ket{g}\) and \(\ket{h}\) states. Ac-Stark modulation is performed with a \(\pi\) polarized beam red detuned by \(\Delta_{acS}=2\pi\times 1\) GHz from the transition \(\ket{h}\rightarrow 5^{2}P_{3/2}\,F^{\prime}=2\). We set the waists of the coupling and signal beams in the cloud's near field to be \(217\,\mathrm{\SIUnitSymbolMicro m}\) and \(695\,\mathrm{\SIUnitSymbolMicro m}\), respectively.
We defined the transverse dimension of the atomic ensemble \(R\) as the distance along the \(x\)-axis over which the cloud density decreases by a factor of \((1/e)^{2}\). In the same way we defined the longitudinal dimension \(L\), but along the \(z\)-axis. To measure \(R\) and \(L\), we illuminated the cloud with a beam perpendicular to the \(z\) and \(x\) axes and measured the atomic absorption profile. We fitted a Gaussian function to the transverse profile and a super-Gaussian function to the longitudinal profile. The parameters \(L\) and \(R\) equal \(9\,\mathrm{mm}\) and \(208\,\mathrm{\SIUnitSymbolMicro m}\), respectively.
The transverse distribution of the atoms \(n_{\perp}(x,y)\) in the cloud determines the transverse spatial profile of the readout light and the far-field divergence. For a cloud with a Gaussian cross-section with waist \(R\) the emitted beam's angular spread equals \(w_{\theta}=\lambda/(\pi R)\). By the generalized Rayleigh criterion [44] the lowest difference of angles which can be resolved is \(\delta\theta\geq 1.33w_{\theta}\). It follows that the minimal resolvable difference in frequencies is bounded by \(\delta\omega\geq 1.33w_{\omega}=1.33w_{\theta}\frac{L\beta k_{0}}{\kappa}=1.33\frac{2L\beta}{\kappa R}\), where \(w_{\omega}\) is the spectral width corresponding to the least-spread emitted beam measured on the spectrometer. In this case the resolving power of the spectrometer would be \(R_{p}=\frac{\omega_{0}}{\delta\omega}\).
To imprint the prism-like modulation phase profile we utilize the spatial spin-wave modulation setup [9]. The ac-Stark beam is shaped by illuminating the spatial light modulator (SLM), then sent onto the cloud and monitored on an auxiliary CCD camera. The camera and the atomic ensemble are both placed at the image plane of the SLM. The ac-Stark beam intensity profile is generated by mapping camera pixels onto SLM pixels and optimizing the displayed image with an iterative algorithm in a feedback loop connecting the image detected on the camera and the target displayed on the SLM.
We set the magnetic gradient to \(\beta=2\pi\times 1.35\,\mathrm{MHz}\,\mathrm{cm}^{-1}\) and with measured atomic cloud length \(L\) we calculated memory bandwidth \(B=\beta L=2\pi\times 1.2\,\mathrm{MHz}\). Along with the coupling-induced decoherence decay rate \(\Gamma=9.1\,\mathrm{kHz}\) it leads to the light absorption efficiency \(\eta=36.5\%\). We store signal impulses with gaussian envelope and duration \(\sigma_{t}=5.64\,\mathrm{\SIUnitSymbolMicro s}\) in GEM sequence. This corresponds to gaussian spectral shape \(\exp{(-\omega^{2}/2\sigma_{\omega}^{2})}\) with \(\sigma_{\omega}=2\pi\times 30\,\mathrm{kHz}\). After the storage the spin-wave modulation is performed via ac-Stark beam applying prism-like modulation phase profile. Finally the readout is performed using a strong coupling beam and the emitted light is imaged onto the intensified sCMOS camera (I-sCMOS), placed in the far-field of the atomic ensemble. The camera is equipped with image intensifier allowing it to be sensitive to single-photons. The intensifier gate was open during the readout stage for \(1\,\mathrm{\SIUnitSymbolMicro s}\). In order to reduce the noise we used far and near field apertures. To remove the stray coupling beam light we used the atomic filter [45] placed between the MOT and I-sCMOS. The filter consisted of a glass cell containing warm rubidium-87 vapor optically pumped to the \(5S_{1/2},F=1\) state so the coupling beam was absorbed while the signal and the emitted light pulses were transmitted.
## IV Calibration
To calibrate the position on the I-sCMOS camera in terms of the deflection angle we utilized a reference transmission diffraction grating with a known wavevector \(k_{\perp}=2\pi\times 10\,\mathrm{mm}^{-1}\) placed in the near field of the signal beam, exactly behind the chamber. We measured the displacement on the camera corresponding to the deflection angle imposed by the diffraction grating, \(\theta_{k_{\perp}}=8\,\mathrm{mrad}\), to be \(29\) pixels, which leads to a ratio of \(0.27\,\mathrm{mrad}/\mathrm{px}\). This procedure allowed us to convert the position on the camera to the value
Figure 2: (a) The relevant cross sections of the image from the I-sCMOS camera stacked for different frequency detunings. (b) Results obtained from numerical simulations. Columns represent different values of \(\kappa\). (c) The cross-section of Fig. 2(b) through \(\omega/2\pi=450\,\mathrm{kHz}\). We observe the manifestation of higher-order deflection modes for higher frequency detunings in both experiment and simulation.
of the wavevector imprinted on the atoms by the ac-Stark beam.
In addition, we also measured the angular spread of the read-out emission \(w_{\theta}^{\text{exp}}=1.5\,\text{mrad}\). This is close to the limit of \(w_{\theta}=1.2\,\text{mrad}\) imposed by the cloud diameter \(2R\). From these values we can calculate the fundamentally limited minimal spread in frequencies registered on the spectrometer, \(w_{\omega}=2\pi\times 91.7\,\text{kHz}\), and the experimentally measured one, \(w_{\omega}^{\text{exp}}=2\pi\times 114.6\,\text{kHz}\). In our system the main limitation on the resolution is the maximal deflection angle. The grating density limits the possible angular range since the narrowest fringe's Rayleigh range must be equal to the waist of the transverse dimension of the atomic cloud \(R\). Thus the maximal wavevector of the grating is \(k_{\text{max}}=2\pi\sqrt{\pi/(\lambda R)}\).
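The quoted angular and spectral limits can be reproduced from Eq. (5) and the cloud geometry with a short numerical check; the 795 nm signal wavelength (rubidium D1 line) is our assumption, consistent with the \(5S_{1/2}\rightarrow 5P_{1/2}\) transition used in the experiment, and \(\kappa=2\pi\times 20\,\mathrm{mm}^{-1}\) is the value quoted later in the text.

```python
import numpy as np

lam   = 795e-9                      # signal wavelength (assumed, Rb D1 line), m
k0    = 2 * np.pi / lam             # optical wavevector, 1/m
kappa = 2 * np.pi * 20e3            # prism-like modulation wavevector, 1/m
L     = 9e-3                        # cloud length, m
R     = 208e-6                      # cloud transverse waist, m
beta  = 2 * np.pi * 1.35e6 / 1e-2   # magnetic gradient, (rad/s)/m

slope = kappa / (L * beta * k0)                 # Eq. (5), rad per (rad/s)
print(slope * 2 * np.pi * 1e6 * 1e3)            # ~13 mrad per MHz of detuning

w_theta = lam / (np.pi * R)                     # angular spread set by the cloud
w_omega = w_theta / slope                       # corresponding spectral spread, rad/s
print(w_theta * 1e3)                            # ~1.2 mrad
print(w_omega / (2 * np.pi) / 1e3)              # ~92 kHz (quoted: 91.7 kHz)
print(1.33 * w_omega / (2 * np.pi) / 1e3)       # Rayleigh limit, ~120 kHz
```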
In order to assess the experimental parameters of the applied phase, it is crucial to determine the number of SLM pixels per atomic cloud millimeter (ppcm). To establish this coefficient we display a grating with a given wavevector on the SLM and record the image on a camera located at the same distance as the atomic cloud. Knowing the size of a camera pixel we establish the ppcm to be \(104\,\text{mm}^{-1}\).
By applying a constant diffraction grating phase profile with wavevector \(k\), we benchmarked the resolution which is achievable by the SLM optical setup. We determined the maximal achievable \(k\), by requiring the amplitude of the 1st deflection mode to be greater than the 0th. Our measurements show that this value is \(k_{\text{max}}^{\text{exp}}=2\pi\times 12\,\text{mm}^{-1}\). This means that the maximal achievable deflection angle is \(\theta_{\text{max}}^{\text{exp}}=\pm 9.54\,\text{mrad}\). For our parameters \(\kappa=2\pi\times 20\,\text{mm}^{-1}\) and \(\omega_{0}=2\pi\times 377\,\text{THz}\) the ideal resolution and resolving power would be respectively \(\delta\omega=2\pi\times 120\,\text{kHz}\) and \(R_{p}=3.2\times 10^{9}\) and thus \(\sigma_{\omega}\ll\delta\omega\), which allows us to examine fundamental limitation of our setup.
Another important element of our experiment was the magnetic gradient and its calibration. In order to calibrate the value of the magnetic gradient we conduct a measurement displaying the special pattern shown in Fig. 3(b). This pattern is obtained by flipping the sign of the prism-like phase pattern every 110 pixels. By estimating the bandwidth of each small section and knowing the ppcm coefficient we can calculate the value of the magnetic gradient by dividing the frequency range attributable to a certain number of pixels by its length.
Let us now describe the measurement procedure. We scan the signal laser frequency over \(1.6\,\text{MHz}\) in \(40\,\text{kHz}\) steps, collecting 40 independent spectra. For each incoming signal frequency \(\omega\) we collect photon positions along the \(x\) axis from 2000 iterations of the experiment. The collected histograms of counts are depicted in Fig. 2(b).
During the final measurement the average photon number in each read-out iteration was \(\bar{n}_{\text{read}}\simeq 2.5\) per frame, the average number of background photons was \(\bar{n}_{\text{noise}}\leq 0.1\) per frame (mostly produced by the coupling beam leak), and the average number of dark-count photons of the image intensifier is estimated at \(0.0007\) per frame.
## V Simulation
The performance of the converter can be calculated from the actual phase mask profiles projected by the SLM and the atomic density measured by absorption imaging. We acquire the image projected by the SLM onto the atoms from an auxiliary camera, as seen in Fig. 4(a), and rescale it to calculate the actual phase profile \(\phi(x,z)\). While measuring the radius of the atomic cloud we collected the shadow image of the cloud, from which we infer the density of the atoms \(n(x,z)\). Combining these we can calculate an approximate spin-wave coherence as \(\rho_{gh}(x,z)\approx n(x,z)\exp(i\phi(x,z))\). Here each position along \(z\) corresponds to a frequency as described in section II. We expect the angular distribution of the readout light to be given by the Fourier transform of \(\rho_{gh}(x,z)\) along \(x\). To obtain the intensity of emission predicted this way, displayed in Fig. 2(b), we perform the Fourier transform of \(n(x,z)\exp(i\phi(x,z))\) along the \(x\)-axis and plot its squared absolute value. For an ideal phase pattern \(\kappa xz/L\) we should get a single Gaussian ridge along the line described by Eq. (5).
However, the phase profile is imperfect. It loses sharpness, and the fringes are more blurry, as illustrated in Fig. 4. Imperfections manifest themselves as parasitic orders of diffraction. They are visible in the form of smaller lines with different inclinations. These patterns are observed in the experiment as well.
In order to verify the converter operation and assess its agreement with the numerical simulations we need to establish how large a deflection angle we can achieve per \(1\,\text{MHz}\) of frequency detuning. To the histograms corresponding to different frequencies we fit a Gaussian function to extract the peak position \(\theta\). After that, we fit a linear function to the data \((\omega/2\pi,\theta)\) for \(k_{\text{max}}^{\text{exp}}\), achieving the slope \((12.40\pm 0.09)\,\text{mrad}\,\text{MHz}^{-1}\) for the experiment in Fig. 5. This result is consistent with the numerical simulations, which deviate from the experimental results by \(4\%\) and agree within \(1.5\sigma\) given the uncertainties. The relative uncertainty of the slope obtained from the simulations equals
Figure 3: (a) Deflection angle as a function of frequency detuning in case of calibration of the magnetic field gradient. (b) Magnetic gradient calibration pattern displayed on SLM. The pattern is obtained by periodically flipping sign of the ordinary prism-like pattern.
2.6% and is caused mostly by the precision of estimating the magnetic gradient per pixel of the spatial light modulator.
## VI Resolving power
The non-trivial imperfections of the converter suggest that to best assess its resolving capabilities a versatile informational approach is needed. The lower bound for frequency estimation uncertainty is given by Cramer-Rao bound [46] (CR bound). The minimal variance of the parameter \(\omega\), for \(N\) photons, is given by the inequality:
\[\Delta^{2}\omega\geq\frac{1}{NF_{\omega}} \tag{6}\]
Where \(F_{\omega}\) denotes the Fisher information [47] defined as:
\[F_{\omega}=\int_{-\infty}^{+\infty}\frac{\left[\frac{dp_{\omega}(\theta)}{d \omega}\right]^{2}}{p_{\omega}(\theta)}\,d\theta \tag{7}\]
Where \(p_{\omega}(\theta)\) is the probability of detecting a photon deflected by an angle \(\theta\) for a given frequency \(\omega\). In our case, we aim to estimate the central frequency of a Gaussian spectrum. In the ideal case, the uncertainty for a Gaussian with width \(w_{\omega}/2\) will be \(\Delta^{2}\omega=(w_{\omega}/2)^{2}/N\). In practice, we observe additional diffraction orders and other imperfections which lead to deviations from the theoretical limit.
During data analysis, for the obtained spectrogram we can calculate the Fisher information per registered photon and compare it to the achieved variance of the positions registered by the camera, depicted in Fig. 6. We generate the statistics via bootstrapping [48] by randomly selecting 500 frames out of 2000 and repeating the calculations 100 times.
From the measurements we take the histograms of counts (just like the one in Fig. 2(c)) as an experimental approximation of the probability distributions \(p_{\omega}(\theta)\). The average number of registered photons was around 5000. Next we calculate the Fisher information from the definition given above, approximating the derivative by the finite difference of the neighboring distributions.
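A sketch of this numerical estimate is shown below; it assumes the measured histograms have been normalized into probability densities over the deflection angle, which is not spelled out explicitly in the text.

```python
import numpy as np

def fisher_information(p, d_omega, d_theta):
    """Fisher information per photon for each interior frequency of the scan.

    `p` has shape (n_frequencies, n_angle_bins); each row approximates the
    probability density p_omega(theta), normalized so that row.sum() * d_theta == 1.
    The derivative in Eq. (7) is replaced by a central finite difference of
    neighboring rows separated by the scan step d_omega.
    """
    p = np.asarray(p, dtype=float)
    dp_domega = (p[2:] - p[:-2]) / (2.0 * d_omega)
    p_mid = p[1:-1]
    integrand = np.where(p_mid > 0, dp_domega**2 / p_mid, 0.0)
    return integrand.sum(axis=1) * d_theta

# Cramer-Rao bound for N detected photons: Delta_omega >= 1 / sqrt(N * F_omega).
```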
The converter efficiency varies with frequency \(\eta(\omega)\) due to inhomogeneities in the atomic density along the cloud. We rescale the Fisher information for each frequency by the factor \(\eta(\omega)/\eta_{\text{max}}\), thus calculating it per photon transmitted by the memory for the optimal frequency at which we reach \(\eta_{\text{max}}\). The plotted quantity in Fig. 6 is the inverse of the square root of the calculated Fisher information.
In order to compute fit errors, for a given frequency \(\omega\), we collect 2000 frames from the I-sCMOS camera. These frames are randomly distributed among 100 samples, each containing 500 frames. On average each sample contains 1250 photon counts. Every sample is then averaged and a gaussian function with variable position and fixed width and height is fitted to the first-order peak. The fixed parameters are extracted from the average of all 2000 frames. In the end, we have 100 positions \(\theta\) corresponding to a single frequency. This allows us to estimate the variance of the position \(\Delta^{2}\theta\) and estimate the true value to be an average of these positions. We repeat this
Figure 4: (a) The pattern illuminating the atoms. (b) The ideal pattern. (c) Vertical slice through the pattern used in experiment (through pixel 1450). (d) Vertical slice through the ideal pattern (through pixel 1450).
Figure 5: Fitted positions of the Gaussian distributions (points) and lines fitted to them. Data sets are shifted by 2 mrad to clarify the plot. The grey colored points are the manifestations of parasitic orders of diffraction and are not taken into account in the line fitting procedure.
Figure 6: The plot of \(1/\sqrt{NF_{\omega}}\) (red line) and the uncertainty (square root of the variance) of the fitted frequencies (circles) extracted from the experimental data. The blue and purple lines correspond to the Cramer-Rao bound calculated for the Gaussian pulse with widths \(w_{\omega}/2\) and \(w_{\omega}^{\text{exp}}/2\), respectively, and the number of photons equal to the average. We approach the fundamental limit in the peak optical density regions.
procedure for all available frequencies.
The relationship between angles of deflection \(\theta\) and frequency \(\omega\) is depicted in Fig. 7. Errorbars are square roots of the variance \(\Delta^{2}\omega\) calculated from the previously explained bootstrapping method.
In order to translate MHz into pixels we perform a linear fit to the data of Gaussian centers vs. frequency detuning. The fit to the data acquired from accumulating 2000 frames, together with the data obtained from bootstrapping with error bars equal to standard deviations, is shown in Fig. 7. To obtain the uncertainties in proper units we take the square root of the variance, i.e. the standard deviation, and scale it by the coefficient of MHz per pixel and by \(10^{3}\) to obtain kHz. We can then compare the CR bound with the uncertainty of the fitted center of the Gaussian distribution.
Now let us experimentally test the resolution of the converter by sending a pulse with a double-Gaussian spectrum. In Fig. 8 we compare the write-in signal (red line) with the readout from the camera (histogram) for different frequency separations \(\varepsilon\). The resolution can be calculated as the width of the Gaussian functions fitted to the read-out or, alternatively, as the lowest resolvable separation of two Gaussian pulses. These two approaches are consistent; the estimated spectrometer resolution is \(\delta\omega^{\text{exp}}=2\pi\times 150\,\text{kHz}\). The resolution is close to the limit calculated theoretically, which is \(\delta\omega=2\pi\times 120\,\text{kHz}\). The measured resolving power of the converter is \(R_{P}^{\text{exp}}=2.5\times 10^{9}\). The discrepancy may be caused by varying refraction of the beam caused by air currents and temperature fluctuations in the optical setup (similar to astronomical "seeing") and misalignment of the I-sCMOS camera in the far field.
## VII Conclusions
In summary, we have demonstrated a spectrum-to-position conversion interface in a gradient echo quantum memory based on an ultracold atomic ensemble along with the spatial spin-wave modulation technique. The experimental performance of the converter was compared with the numerical simulations obtained from the phase mask profiles for different values of \(\kappa\).
We have shown that for our setup the Rayleigh limit is \(\delta\omega^{\text{exp}}=2\pi\times 150\,\text{kHz}\), corresponding to the resolving power \(R_{P}^{\text{exp}}=2.5\times 10^{9}\), which is a few orders of magnitude larger than that of diffraction-grating spectrometers [49]. Calculating the Fisher information allowed us to define the Cramer-Rao bound for the converter, which limits the minimal possible uncertainty of the estimation of the signal frequency, effectively quantifying the precision of the converter. The uncertainty obtained from the analysis of the experimental data approaches the CR bound in the region where the higher-order deflection modes make a negligible contribution.
The presented spectrum-to-position conversion with its low noise level makes the protocol suitable to act as a super-precise spectrometer in the single-photon-level regime for signals near the rubidium emission frequency. It can be applied in quantum information processing and quantum computing utilizing the spatial degree of freedom of light.
The improvement of the magnetic field gradient would allow for extending the GEM bandwidth. By combining it with a faster change in the number of slits on the phase profile, the presented setup would be able to resolve a proportionally larger number of spectral modes, achieving the resolution of \(\sim 10\,\text{kHz}\). The results of this article introduce many prospects for applications of ultra-precise spectrum-to-position conversion in optical communication and optical signal processing as the presented conversion protocol does not conserve information about the frequency of the stored signal.
Data availability: Data for figures 2-8 has been deposited at [50] (Harvard Dataverse).
###### Acknowledgements.
The "Quantum Optical Technologies" (MAB/2018/4) project is carried out within the International Research Agendas programme of the Foundation for Polish Science co-financed by the European Union under the European Regional Development Fund. This research was funded in whole or in part by National Science Centre, Poland grant no. 2021/43/D/ST2/03114. We thank K. Banaszek for generous support.
Figure 8: (a) Two impulses separated by \(\varepsilon=150\,\text{kHz}\) (b) \(\varepsilon=300\,\text{kHz}\) (c) \(\varepsilon=450\,\text{kHz}\). Red curves on the plots represent write-in signal stored in the memory and grey bars represent number of photons of read-out signal measured on the I-sCMOS camera.
Figure 7: Fit data acquired from accumulating 2000 frames and data obtained from averaging over 100 iterations with errorbars equal to standard deviations. Points are intentionally shifted by 5 kHz in frequency detuning for better clarity of the plot. |
2306.09663 | Resonant Cancellation Effect in Ramsey Experiments | We investigate the response of a Ramsey-type experiment on an additional
oscillating magnetic field. This superimposed field is oriented in the same
direction as the static main magnetic field and causes a modulation of the
original Larmor spin precession frequency. The observable magnitude of this
modulation reduces at higher frequencies of the oscillating field. It
disappears completely if the interaction time of the particles matches the
oscillation period, which we call resonant cancellation. We present an
analytical approach that describes the effect and compare it to a measurement
using a monochromatic cold neutron beam. | Ivo Schulthess, Ivan Calic, Estelle Chanel, Anastasio Fratangelo, Philipp Heil, Christine Klauser, Gjon Markaj, Marc Persoz, Ciro Pistillo, Jacob Thorne, Florian M. Piegsa | 2023-06-16T07:39:30Z | http://arxiv.org/abs/2306.09663v1 | # Resonant Cancellation Effect in Ramsey Experiments
###### Abstract
We investigate the response of a Ramsey-type experiment on an additional oscillating magnetic field. This superimposed field is oriented in the same direction as the static main magnetic field and causes a modulation of the original Larmor spin precession frequency. The observable magnitude of this modulation reduces at higher frequencies of the oscillating field. It disappears completely if the interaction time of the particles matches the oscillation period, which we call resonant cancellation. We present an analytical approach that describes the effect and compare it to a measurement using a monochromatic cold neutron beam.
## I Introduction
Rabi's nuclear magnetic resonance method [1; 2] and Ramsey's techniques of separated oscillatory fields [3; 4; 5] have been used very effectively in various scientific experiments. These methods utilize both constant and time-varying magnetic fields to manipulate the spin of probe particles. Specifically, Ramsey's technique allows for precise determination of the Larmor precession frequency of a spin in a magnetic field \(B_{0}\). This is achieved by initially flipping the spin-polarized particles into the plane orthogonal to \(B_{0}\) using an oscillating field \(B_{1}\), allowing them to precess for a defined period of time, and then flipping them again using a second oscillating \(B_{1}\) field. By scanning the frequency of the oscillating fields close to the resonance and keeping the phase between the two signals locked, an interference pattern of the spin polarization in the frequency domain is obtained. Another option is to scan the relative phase between the two oscillating spin-flip signals while keeping their frequencies on resonance. Ramsey's technique is highly versatile and can be applied for precise measurements of changes in magnetic and pseudo-magnetic fields. For instance, it has been utilized in a variety of applications such as atomic clocks [6; 7], the measurement of the Newtonian gravitational constant [8], the measurement of the neutron magnetic moment [9], the search for the neutron electric dipole moment [10; 11; 12], probing for dark matter [13; 14], and the search for new particles and interactions [15].
The main magnetic field \(B_{0}\) that causes the Larmor precession in a Ramsey-type experiment is usually constant over time. In this manuscript, we describe the effect of a superimposed oscillating (pseudo-)magnetic field on the Ramsey signal, called resonant cancellation. Such a signal is potentially present in an axionlike dark-matter field that couples to the electric dipole moment of particles. Two experiments searching for an oscillating electric dipole moment of neutrons [16; 14] and electrons [17] showed how this systematic effect attenuates the signal and reduces their sensitivity. Moreover, ultralight scalar or pseudo-scalar dark matter could lead to oscillating nuclear charge radii that could be detected with Ramsey's technique in optical atomic clocks [18]. Such measurements would also potentially suffer from the effect.
In the following, we provide a theoretical derivation of the resonant cancellation effect for a monochromatic particle beam with a velocity \(v\). In a Ramsey-type experiment, the spins precess in the main magnetic field \(B_{0}\) due to the Larmor precession
\[\varphi=\omega T=\gamma B_{0}T\, \tag{1}\]
where \(\gamma\) is the gyromagnetic ratio and \(T\) is the interaction time. The phase shift that the spins acquire when interacting with an oscillating magnetic field \(B_{a}(t)=B_{a}\cos\left(2\pi f_{a}t\right)\) parallel to \(B_{0}\) is
\[\Delta\varphi=\gamma\int B_{a}\cos\left(2\pi f_{a}t\right)\mathrm{d}t\, \tag{2}\]
where \(B_{a}\) is the amplitude and \(f_{a}\) the frequency of the oscillating field. This becomes maximal when the integral is evaluated symmetrically around zero which leads to a maximum value of the phase
\[\Delta\varphi_{\mathrm{max}} =\left|\gamma B_{a}\int_{-T/2}^{T/2}\cos\left(2\pi f_{a}t\right) \mathrm{d}t\right| \tag{3}\] \[=\left|\frac{\gamma B_{a}}{\pi f_{a}}\sin\left(\pi f_{a}\frac{L} {v}\right)\right|\, \tag{4}\]
where the interaction length \(L=vT\). Equation (4) has the shape of a sinc-function with roots at \(f_{a}=k\cdot v/L\), \(k\in\mathbb{N}\). Hence, when the interaction time with the oscillating field matches one or multiple periods of the oscillating magnetic field, the integral becomes zero, yielding
no phase shift. Therefore, a Ramsey-type experiment is insensitive to magnetic field effects of specific frequencies. In the case of a white beam, e.g., with a Maxwell-Boltzmann-like velocity distribution, this effect is suppressed, and there is no complete cancellation. It still results in a reduced phase shift and, therefore, a decreased sensitivity to oscillating magnetic field effects at higher frequencies.
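The frequency dependence of Eq. (4) can be evaluated directly; in the sketch below the interaction length and neutron velocity are the values reported later for this experiment, while the field amplitude is an arbitrary illustrative number.

```python
import numpy as np

gamma = 1.832e8          # magnitude of the neutron gyromagnetic ratio, rad s^-1 T^-1
L, v  = 0.519, 798.0     # effective interaction length (m) and neutron velocity (m/s)
B_a   = 1e-6             # oscillating field amplitude, T (illustrative value only)

f_a = np.linspace(1.0, 3500.0, 3500)      # oscillation frequency, Hz
dphi_max = np.abs(gamma * B_a / (np.pi * f_a) * np.sin(np.pi * f_a * L / v))

# The accessible phase shift vanishes at f_a = k * v / L, k = 1, 2, ...
print(v / L)   # first cancellation frequency, roughly 1.5 kHz for these parameters
```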
## II Setup
To investigate the resonant cancellation effect we conducted measurements during a dedicated beamtime in May 2021 at the Narziss instrument at the Paul Scherrer Institute in Switzerland. The Narziss beamline is usually used for neutron reflectivity measurements. It provides neutrons with a de Broglie wavelength of 4.96 A and can be regarded as monochromatic for the purpose of this measurement. A schematic of the experiment is presented in Fig. 1. The neutrons pass a beam monitor after exiting from the neutron guide. This monitor signal can be used to normalize all measurements to the same number of incoming neutrons and accounts for potential fluctuations in the neutron flux of the source. There are four apertures with an opening of \(30\times 2\) mm\({}^{2}\) that define the beam cross-section and divergence along the beam path. Their position is indicated in Fig. 1. The neutrons are polarized with a polarizing supermirror that transmits only one spin state. The other spin state is reflected and stopped in an aperture. To maintain the polarization, there is a small magnetic guiding field between the polarizer and the first spin-flip coil. The main magnetic field \(B_{0}\approx 3\) mT is created by an electromagnet surrounding the back of a C-shaped iron yoke. The yoke guides the field lines and creates a field between its flanks in the vertical direction. The oscillating magnetic field \(B_{a}(t)\) is applied over the same region as the \(B_{0}\) field. It is created by a Keysight waveform generator [19] that is connected via a Kepco bipolar power amplifier [20] to an auxiliary coil. This coil can also be used to apply DC offset fields to test the functionality of the setup. Two rectangular solenoid-type spin-flip coils induce resonant \(\pi/2\)-flips. They have a length of 6 cm and a center-to-center separation of 50 cm. Their axis is aligned with the neutron beam and their cross-section is much bigger than the size of the beam. Their signals are produced by a waveform generator and amplified by two Mini-Circuits LYZ-22+ power amplifiers [21]. They are connected to the spin-flip coils via resonance circuits that match 50 \(\Omega\) and are tuned to the resonance frequency of about 90 kHz. The two sides are covered with 2 mm thick aluminum plates to minimize their fringe fields. The spins are analyzed with a second polarizing supermirror. The transmitted neutrons are counted using a \({}^{3}\)He detector. Each neutron creates an electronic pulse. The timing of the pulse is processed and recorded by a custom-made data-acquisition system using an Arduino.
## III Measurements
The spin-flip signals were optimized for Ramsey-type measurements. We individually measured a Rabi resonance frequency of \((91.94\pm 0.01)\) kHz and \((91.46\pm 0.01)\) kHz for the first and the second spin-flip coil, respectively. For further measurements we applied a signal at a frequency of 91.7 kHz. The amplitudes of both spin-flip signals were optimized for the highest signal visibility in a Ramsey measurement, corresponding to a \(\pi/2\)-flip.
To test the functionality and characterize the setup we performed various Ramsey frequency and phase scans. Two such measurements are presented in Fig. 2. For each data point, the neutron counts were integrated over roughly 10 seconds, corresponding to \(2\times 10^{5}\) monitor counts. The Ramsey frequency scan presented in Fig. 2a shows an overall envelope which arises from the Rabi resonance. The fringes are the interference pattern produced by the two spatially separated spin-flip coils. All fringe maxima are close to the maximum neutron counts of about \(2.8\times 10^{4}\) neutrons, confirming the small wavelength spread. We measured the shift of the resonance frequency by scanning only the central fringe while applying various DC offset currents through the auxiliary coil. A linear fit through all resonance frequencies results in a value for the shift of \((-1.096\pm 0.002)\) kHz/A which corresponds to roughly 38 \(\mu\)T/A. The Ramsey phase scan in Fig. 2b shows a sinusoidal behavior whose phase shifts if an additional offset current is applied, as in the case of the central resonance fringe. We performed several more measurements to get the phase shift as a function of the applied current. A linear least-squares fit through all phases results in a value of \((-257.0\pm 0.4)^{\circ}\)/A.
Figure 1: Schematic of the experimental apparatus. Neutrons enter the setup from the left. They first pass a polarizer that transmits only one spin state. The neutrons then enter the magnetic field region where a constant magnetic field \(B_{0}\) and an oscillating magnetic field \(B_{a}(t)\) are applied in the vertical direction. Two 6 cm-long spin-flip coils with a center-to-center separation of 50 cm allow the spins to be flipped. After the second spin-flip coil the neutron spin states are analyzed and counted with a \({}^{3}\)He detector. There are four apertures to define the beam cross-section and divergence. They also block the reflected beams from the polarizer and analyzer.
The measurements presented in Fig. 2 show that the apparatus works as expected and that an offset current through the auxiliary coil indeed changes the precession frequency of the neutrons. The results of the Ramsey frequency and phase scans can be used to calculate the effective interaction length with the use of Eq. (1) and \(v\approx 798\) m/s. The resulting value of \((51.9\pm 0.1)\) cm is slightly longer than the separation of the two spin-flip coils. The reason is that the spins already start to precess within the spin-flip coil when partially flipped.
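Assuming the standard Ramsey relation, in which the accumulated phase shift is \(\Delta\phi=2\pi\,\Delta\nu\,L_{\mathrm{eff}}/v\), the two fitted slopes quoted above already determine the effective interaction length. The short Python sketch below (our own addition, not part of the original analysis) performs this cross-check.

```python
# Cross-check of the effective interaction length from the two Ramsey slopes
# quoted in the text, assuming delta_phi = 2*pi * delta_nu * L_eff / v.
freq_shift_per_A = 1.096e3        # |resonance-frequency shift|, Hz per ampere
phase_shift_per_A = 257.0         # |Ramsey phase shift|, degrees per ampere
v = 798.0                         # neutron velocity in m/s (4.96 angstrom)

cycles_per_A = phase_shift_per_A / 360.0        # phase shift expressed in cycles
L_eff = cycles_per_A * v / freq_shift_per_A     # effective interaction length in m
print(f"effective interaction length: {100 * L_eff:.1f} cm")   # about 51.9 cm
```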
To investigate the resonant cancellation effect, we measured the neutron counts continuously and saved their timing information. Because of limitations from the data-acquisition system, we folded (phase-mapped) the data into two periods of the oscillating magnetic field \(B_{a}(t)\). This folding was triggered by the waveform generator which created the oscillating signal. We chose a time bin size of 20 \(\mu\)s. This size allowed us to have more than ten bins for the highest frequency of 3500 Hz. The relative phase between the spin-flip signals was set to \(105^{\circ}\) which corresponds to the point of steepest slope in the reference measurement of the Ramsey phase scan shown in Fig. 2b. At this point, the measurement is most sensitive to magnetic field changes, and the relation between the neutron counts and the phase is approximately linear.
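As an illustration of this folding and of the fixed-frequency sinusoidal fits used for Fig. 3, the sketch below processes synthetic arrival times carrying a weak 252 Hz rate modulation; the synthetic data and all variable names are ours and only stand in for the recorded detector timestamps.

```python
# Phase-folding of neutron arrival times into two periods of B_a(t) and a
# sinusoidal fit with the frequency held fixed; the arrival times below are
# synthetic stand-ins for the recorded detector timestamps.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
f_a = 252.0                        # frequency of the oscillating field, Hz
bin_width = 20e-6                  # 20 us time bins
window = 2.0 / f_a                 # fold into two periods of the signal

# synthetic timestamps over ~100 s with a 10% rate modulation at f_a
t_raw = 100.0 * rng.random(300_000)
keep = rng.random(t_raw.size) < 0.5 * (1.0 + 0.1 * np.sin(2 * np.pi * f_a * t_raw))
t_arrival = t_raw[keep]

# fold into two periods and histogram with full-width bins only
edges = np.arange(0.0, window, bin_width)
counts, _ = np.histogram(np.mod(t_arrival, window), bins=edges)
t_bins = 0.5 * (edges[:-1] + edges[1:])

def model(t, amplitude, phase, offset):
    # fixed-frequency sinusoid; only amplitude, phase and offset are free
    return amplitude * np.sin(2.0 * np.pi * f_a * t + phase) + offset

popt, _ = curve_fit(model, t_bins, counts,
                    p0=[0.1 * counts.mean(), 0.0, counts.mean()],
                    sigma=np.sqrt(np.maximum(counts, 1.0)))
amplitude, phase, offset = popt
print(f"amplitude/offset = {abs(amplitude) / offset:.3f}")   # ~0.1 for this toy data
```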
We measured the oscillating neutron amplitude for 100 frequencies between 60 Hz and 3500 Hz.
Figure 3: Neutron signals for oscillating magnetic fields at 252 Hz, 1020 Hz, 1477 Hz, and 2227 Hz. They show the neutron counts as a function of time. The counts are folded into two periods of the signal. The data (black) are fitted with a sinusoidal function with a fixed frequency (orange). The fit results are presented in Tab. 1. The measurement time was roughly 100 seconds for each setting.
| frequency | amplitude | offset | amplitude/offset | \(\chi^{2}\)/NDF |
| --- | --- | --- | --- | --- |
| 252 Hz | \(37\pm 1\) | \(368\pm 1\) | \(0.100\pm 0.004\) | 329/391 |
| 1020 Hz | \(59\pm 6\) | \(1463\pm 4\) | \(0.040\pm 0.004\) | 120/92 |
| 1477 Hz | \(5\pm 8\) | \(2093\pm 6\) | \(0.003\pm 0.004\) | 52/62 |
| 2227 Hz | \(67\pm 12\) | \(3108\pm 9\) | \(0.022\pm 0.004\) | 47/39 |

Table 1: Results of the sinusoidal fit of the data presented in Fig. 3. The amplitude is normalized by the offset to make the measurements comparable. The phase of the sinusoidal fit is not relevant for the analysis of the resonant cancellation effect. The \(\chi^{2}\) of each fit and the corresponding number of degrees of freedom are also given.
Figure 2: (a) Ramsey frequency scan over the full resonance. The data shows the neutron counts as a function of the spin-flip signal frequency. The solid line serves only as a guide for the eyes. (b) Ramsey phase scans where the neutron counts are shown as a function of the relative phase between the two spin-flip signals. The frequency was fixed on resonance at 91.7 kHz. Besides a reference measurement (black), we applied various DC offset currents via the auxiliary coil. The measurements with a current of \(-100\) mA (blue), \(+100\) mA (orange), and \(+200\) mA (green) are shown. The solid lines correspond to least-squares fits of a sinusoidal function.
Four examples of neutron signals are shown in Fig. 3. The results of their sinusoidal fits are presented in Tab. 1. All measurements have different periods but the same time bin sizes. Therefore, the number of time bins is different. Since the total number of neutrons is the same for all measurements, the amplitude and offset of the sinusoidal neutron signal scale with the frequency of the oscillating field. To account for this, we normalized the amplitude of the neutron signal by its offset. To improve the signal-to-noise ratio we repeated this sequence three times and averaged the data. The frequency dependence of the oscillating magnetic field amplitude \(B_{a}\) was investigated in an auxiliary measurement with a fluxgate sensor. It features a slight decrease for higher frequencies which is due to the experimental setup. We used the fluxgate data to account for this characteristic. Additionally, we normalized the amplitude to be one at DC. The result of this measurement and analysis is shown in Fig. 4. A fit of Eq. (4) yields the values of the roots at \((1529\pm 7)\) Hz and \((3057\pm 14)\) Hz. The reduced chi-square of the fit is \(\chi_{r}^{2}=1.04\) for 98 degrees of freedom. Figure 4 shows that the effect of the resonant cancellation behaves as expected for a monochromatic neutron beam. Given a neutron wavelength (velocity) of 4.96 Å (798 m/s), the fitted values of the roots lead to an interaction length of \((52.2\pm 0.2)\) cm. This value is in agreement with the aforementioned effective interaction length of \((51.9\pm 0.1)\) cm which was determined from the Ramsey data.
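Eq. (4) is not shown here; for a monochromatic beam the resonant-cancellation factor is commonly of sinc form, \(|\sin(\pi f L/v)/(\pi f L/v)|\), vanishing whenever the interaction time \(L/v\) equals an integer number of oscillation periods. Under that assumption, the sketch below (ours) inverts the fitted first root to recover the interaction length and tabulates the expected normalized amplitude at a few frequencies.

```python
# Resonant cancellation for a monochromatic beam, assuming a sinc-shaped
# attenuation with zeros at f = n * v / L (the exact Eq. (4) is not shown here).
import numpy as np

v = 798.0                # neutron velocity, m/s (4.96 angstrom)
f_root1 = 1529.0         # first fitted root of the normalized amplitude, Hz

L = v / f_root1          # interaction length from the first cancellation
print(f"interaction length: {100 * L:.1f} cm")   # about 52.2 cm

def normalized_amplitude(f):
    # numpy.sinc(x) = sin(pi x) / (pi x), so zeros sit at f = n * v / L
    return np.abs(np.sinc(f * L / v))

for f in (252.0, 1020.0, 1477.0, 2227.0, 3057.0):
    print(f"{f:6.0f} Hz -> expected normalized amplitude {normalized_amplitude(f):.2f}")
```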
## IV Conclusion
In conclusion, we investigated the resonant cancellation effect. This effect is present in Ramsey-type experiments where an oscillating magnetic field is superimposed on the main magnetic field that causes the Larmor precession. We showed that the neutron oscillation amplitude follows the expected theoretical behavior in the case of a monochromatic neutron beam. This results in a reduced sensitivity of the experiment to the superimposed field at higher frequencies. In particular, the sensitivity becomes zero if the interaction time of the neutrons matches the period of the oscillating field. This approach can be used to estimate the attenuation effect for experiments searching for time-dependent signals. For instance, this systematic effect is important in axionlike dark-matter searches using electric dipole moments of particles.
###### Acknowledgements.
We gratefully acknowledge the excellent technical support by R. Hanni, J. Christen, and L. Meier from the University of Bern. The experiment has been performed at the Swiss Spallation Neutron Source SINQ at the Paul Scherrer Institute in Villigen, Switzerland. This work was supported via the European Research Council under the ERC Grant Agreement no. 715031 (BEAM-EDM) and via the Swiss National Science Foundation under grants no. PP00P2-163663 and 200021-181996.
## Conflict of interest
The authors have no conflicts to disclose.
## Author contributions
**I. Schulthess:** conceptualization (equal); data curation (lead); formal analysis (equal); investigation (equal); methodology (lead); software (lead); validation (equal); visualization (lead); writing - original draft (lead); writing - review & editing (equal). **I. Calic:** formal analysis (equal); investigation (equal); software (supporting); validation (equal); writing - review & editing (supporting). **A. Fratangelo:** investigation (equal); writing - review & editing (supporting). **P. Heil:** writing - review & editing (supporting). **Ch. Klauser:** resources (supporting); writing - review & editing (supporting). **G. Markaj:** writing - review & editing (supporting). **M. Persoz:** writing - review & editing (supporting). **C. Pistillo:** writing - review & editing (supporting). **J. Thorne:** writing - review & editing (supporting). **F. M. Piegsa:** conceptualization (equal); funding acquisition (lead); investigation (equal); project administration (lead); resources (lead); supervision (lead); validation (equal); writing - review & editing (equal).
Figure 4: Measurement of the resonant cancellation effect at the Narziss beamline. The normalized amplitude as a function of the frequency of the oscillating signal is shown. The data (black) are fitted with Eq. (4) (red). The lower subfigure shows the residuals.
## Data Availability
The data and analysis that support the findings of this study are openly available in a GitHub repository [22].
|
2307.06679 | Stochastic thermodynamics of a quantum dot coupled to a finite-size
reservoir | In nano-scale systems coupled to finite-size reservoirs, the reservoir
temperature may fluctuate due to heat exchange between the system and the
reservoirs. To date, a stochastic thermodynamic analysis of heat, work and
entropy production in such systems is however missing. Here we fill this gap by
analyzing a single-level quantum dot tunnel coupled to a finite-size electronic
reservoir. The system dynamics is described by a Markovian master equation,
depending on the fluctuating temperature of the reservoir. Based on a
fluctuation theorem, we identify the appropriate entropy production that
results in a thermodynamically consistent statistical description. We
illustrate our results by analyzing the work production for a finite-size
reservoir Szilard engine. | Saulo V. Moreira, Peter Samuelsson, Patrick P. Potts | 2023-07-13T10:56:04Z | http://arxiv.org/abs/2307.06679v3 | # Stochastic thermodynamics of a quantum dot coupled to a finite-size reservoir
###### Abstract
In nano-scale systems coupled to finite-size reservoirs, the reservoir temperature may fluctuate due to heat exchange between the system and the reservoirs. To date, a stochastic thermodynamic analysis of heat, work and entropy production in such systems is however missing. Here we fill this gap by analyzing a single-level quantum dot tunnel coupled to a finite-size electronic reservoir. The system dynamics is described by a Markovian master equation, depending on the fluctuating temperature of the reservoir. Based on a fluctuation theorem, we identify the appropriate entropy production that results in a thermodynamically consistent statistical description. We illustrate our results by analyzing the work production for a finite-size reservoir Szilard engine.
_Introduction.--_ In nanometer scale systems in contact with an environment, fluctuations of physical quantities are ubiquitous. The ability to control and measure systems at such small scales has been a key driving force in the development of stochastic thermodynamics [1, 2, 3, 4, 5, 6, 7, 8], which provides a theoretical framework for thermodynamic phenomena based on concepts such as stochastic entropy [9], as well as detailed [10, 11, 12, 13, 14, 15, 16] and integral [17, 18, 19] fluctuation theorems. Stochastic thermodynamics has over the last two decades successfully been employed to describe a large number of experiments on small scale systems, such as implementations of Maxwell's demon [20, 21, 22, 23, 24] and Szilard's engine [25, 26], verifications of Landauer's principle [27, 28, 29], tests of fluctuation theorems [30, 31, 32, 33], and determination of system free energies [34, 35, 36].
In all these experiments, the environment can to a good approximation be described as a bath, or reservoir, in thermal equilibrium. Thus, the reservoir is effectively of infinite size, such that the exchange of heat with the system does not affect the reservoir. However, in many nanoscale experiments, the reservoirs are themselves of finite size, with system back action inducing energy fluctuations within the reservoir. Given a fast relaxation time-scale, such a reservoir may then be described by a fluctuating temperature. Such temperature fluctuations were recently investigated in small metallic islands [37].
Theoretically, the effect of finite-size reservoirs with time-dependent (but not fluctuating) temperatures on thermodynamic and transport properties has been investigated in a number of systems [38, 39, 40, 41]. Furthermore, average values of thermodynamic quantities have been investigated for finite-size reservoirs that exhibit energy fluctuations [42]. There is to date, however, no stochastic thermodynamics analysis of small scale systems coupled to finite-size reservoirs, fully accounting for the system-reservoir back action and the resulting, correlated fluctuations of their physical properties. While the formalism outlined in Ref. [43] could provide the basis of such an investigation, we focus here on scenarios where the reservoir may be described by a (fluctuating) temperature at all times.
In this letter, we present such a stochastic thermodynamics analysis, focusing on a basic, experimentally realizable setup - a single level quantum dot with a time-dependent level energy, tunnel-coupled to a finite-size electronic reservoir that can be described by a fluctuating temperature. The dynamics of the system and the reservoir temperature is described by a Markovian master equation. Based on a fluctuation theorem, relating the probabilities for forward and backward trajectories for the system and the reservoir temperature, we identify the appropriate stochastic entropy production. This allows for a thermodynamically consistent description given the knowledge of the fluctuating reservoir temperature. This is in contrast to previous approaches describing finite-size reservoirs, where effective temperatures are defined based on averages of reservoir observables [43, 44, 45]. To illustrate the approach, we consider a Szilard engine and show that the performed work is smaller than the work of an ideal engine, where the reservoir is of infinite size.
_Entropy production conundrum.--_ The challenges with a stochastic thermodynamics description of small systems coupled to finite-size reservoirs can be compellingly illustrated by considering the basic setup in Fig. 1a). A classical two-state system, with an energy difference \(\epsilon>0\) between the two states \(0\) and \(1\), exhibits stochastic state transfers due to the exchange of a discrete amount of heat \(q=\pm\epsilon\) with a finite-size reservoir. Assuming that the reservoir temperature \(T\) increases monotonically with increasing reservoir energy, \(T\) will fluctuate between \(T_{0}\) and \(T_{1}<T_{0}\), with subscripts denoting the system state. Naively employing the known result for infinite-size reservoirs, namely that the entropy is given by the heat transferred divided by the reservoir temperature, one would assume that a transfer of heat into (out of) the system leads to a production of entropy \(\Delta s_{\rm in}=-\epsilon/T_{1}\) (\(\Delta s_{\rm out}=\epsilon/T_{0}\)) in the reservoir. For two subsequent heat transfers this leads to a reservoir entropy production
\[\Delta s_{\rm in}+\Delta s_{\rm out}=\epsilon(1/T_{0}-1/T_{1})<0 \tag{1}\]
Since the system is back in the same state after two transfers, the system does not contribute to the entropy.
The result in Eq. (1) would thus imply that in equilibrium, stochastic heat fluctuations lead to a non-zero entropy production, which is physically non-sensical. From this reasoning, it is clear that entropy production in a reservoir with a temperature changing as a result of heat transfers between system and reservoir requires further understanding. To provide this, in the following we present a fully stochastic approach to the thermodynamics of an experimentally realistic implementation of the setup in Fig. 1 a).
_System and master equation_.--We consider a single level quantum dot with a time-dependent level energy \(\epsilon_{t}\), coupled to a finite-size electron reservoir via a tunnel barrier, characterized by a tunnel rate \(\Gamma\), see Fig. 1 b). An electron tunneling from the dot to the reservoir (from the reservoir to the dot) at time \(t\) adds (removes) energy \(\epsilon_{t}\) to (from) the reservoir. The stochastic nature of the tunneling process induces, in this way, fluctuations in time of the average reservoir energy \(E\). The electronic thermalization in the reservoir is considered to be so fast that, at all times, the electrons are effectively in a quasi-equilibrium state, described by a Fermi distribution
\[f(\epsilon,E)=\frac{1}{1+e^{\epsilon/k_{\mathrm{B}}T(E)}}. \tag{2}\]
The temperature \(T(E)\) is related to \(E\) via the heat capacity \(C(T)=C^{\prime}T\) as
\[T(E)=\sqrt{\frac{2E}{C^{\prime}}}, \tag{3}\]
where \(C^{\prime}=\pi^{2}k_{\mathrm{B}}^{2}\nu_{0}/3\), \(\nu_{0}\) being the density of states and \(k_{\mathrm{B}}\) is the Boltzmann constant. As \(E\) fluctuates in time, \(T(E)\) is a fluctuating temperature.
We describe the system's time evolution with the phenomenological, energy resolved master equation,
\[\begin{split}&\frac{d}{dt}\begin{bmatrix}p_{0}(E)\\ p_{1}(E-\epsilon_{t})\end{bmatrix}=\mathcal{W}\begin{bmatrix}p_{0}(E)\\ p_{1}(E-\epsilon_{t})\end{bmatrix},\\ &\mathcal{W}=\begin{bmatrix}-\Gamma_{\mathrm{in}}(\epsilon_{t},E)&\Gamma_{ \mathrm{out}}(\epsilon_{t},E-\epsilon_{t})\\ \Gamma_{\mathrm{in}}(\epsilon_{t},E)&-\Gamma_{\mathrm{out}}(\epsilon_{t},E- \epsilon_{t})\end{bmatrix}\end{split} \tag{4}\]
where \(p_{n}(E)\equiv p(n,E;t)\) is the probability that there are \(n=0\), \(1\) electrons on the dot and that the reservoir energy is \(E\) at time \(t\). The probabilities satisfy the normalization condition \(\int dE\)\([p_{0}(E)+p_{1}(E)]=1\), and the tunneling rates are given by [46, 47]
\[\Gamma_{\mathrm{in}}(\epsilon,E)=\Gamma f(\epsilon,E),\ \ \Gamma_{\mathrm{out}}( \epsilon,E)=\Gamma[1-f(\epsilon,E)], \tag{5}\]
see the supplemental information for a motivation of these rates, as well as a discussion on charging effects and chemical potential fluctuations.
For a level energy that is constant in time, \(\epsilon_{t}=\epsilon\), given a well-defined initial _total_ energy for the system and reservoir (denoted by \(\mathcal{E}\)), the reservoir energy can only take on the values \(\mathcal{E}\) and \(\mathcal{E}-\epsilon\), for zero and one electron on the dot, respectively. This implies that the temperature fluctuates between \(T(\mathcal{E})\) and \(T(\mathcal{E}-\epsilon)\), and the stationary solution to Eq. (4) is given by \(p_{n}(E)=\delta(E-\mathcal{E}+n\epsilon)p_{n}^{s}(\epsilon|\mathcal{E})\) with
\[p_{1}^{s}(\epsilon|\mathcal{E})=1-p_{0}^{s}(\epsilon|\mathcal{E})=\frac{f( \epsilon,\mathcal{E})}{1-f(\epsilon,\mathcal{E}-\epsilon)+f(\epsilon, \mathcal{E})}, \tag{6}\]
which reduces to \(f(\epsilon,\mathcal{E})\) only when \(T(\mathcal{E}-\epsilon)\simeq T(\mathcal{E})\).
_Fluctuation theorem and entropic temperature_.-- For a stochastic thermodynamic description, we consider \(n(t)\) and \(E(t)\) as the stochastic system state and reservoir energy, respectively. A trajectory \(\gamma=\{n(t),E(t)|0\leq t\leq\tau\}\) may then be defined during a protocol, where the level energy may depend on time \(\epsilon_{t}\), as illustrated in Fig. 1 d). We denote the starting point of \(\gamma\) as \((n_{0},E_{0})\equiv(n(0),E(0))\) and its endpoint as \((n_{\tau},E_{\tau})\equiv(n(\tau),E(\tau))\). Note that \(E(t)\) and \(n(t)\) undergo abrupt changes at times \(\tau_{j}\), whenever an electron tunnels.
A fluctuation theorem relates the probability \(P(\gamma)\) for the trajectory to occur to the probability \(\tilde{P}(\tilde{\gamma})\) for the time-reversed trajectory \(\tilde{\gamma}\) to occur under the time reversed protocol (where the level energy is changed as \(\epsilon_{\tau-t}\))
\[\frac{P(\gamma)}{\tilde{P}(\tilde{\gamma})}=\exp{\left[\frac{\sigma(\gamma)} {k_{\mathrm{B}}}\right]}, \tag{7}\]
where \(\sigma(\gamma)\) is the total, stochastic entropy production along \(\gamma\). We can write
\[\sigma(\gamma)=\Delta s(\gamma)+\Delta s_{\mathrm{r}}(\gamma), \tag{8}\]
where \(\Delta s\), the change in system entropy, is given by
\[\Delta s(\gamma)\equiv k_{\mathrm{B}}[\ln p(n_{0},E_{0};0)-\ln p(n_{\tau},E_{ \tau};\tau)], \tag{9}\]
Figure 1: a) Sketch of a two-level system with an energy gap \(\epsilon\), coupled to a finite-size reservoir, the temperature of which being \(T(t)\equiv T(E(t))\). The system and reservoir exchange discrete amounts of heat \(q=\pm\epsilon\) in a stochastic way. b) Representation of the coupling between the dot system and the finite-size fermionic reservoir, where \(\Gamma\) is the tunneling strength. c) Plot of the temperature \(T(t)\) and the entropic temperature \(T_{\mathrm{e}}(t)\) in orange for a linearly decreasing level energy \(\epsilon_{t}\). d) Reservoir energy as a function of time. When an electron tunnels at time \(\tau_{j}\), the reservoir energy changes by \(\epsilon_{\tau_{j}}\).
where \(p(n_{0},E_{0};0)\), \(p(n_{\tau},E_{\tau};\tau)\) are the probabilities for the initial and final system states and reservoir energies. The term \(\Delta s_{\mathrm{r}}\), describing the stochastic entropy production associated to the reservoir, can be written as a stochastic integral along the trajectory \(\gamma\) (see supplemental information)
\[\Delta s_{\mathrm{r}}(\gamma)\equiv-\int_{\gamma}\frac{dq(t)}{T_{\mathrm{e}}(t )}, \tag{10}\]
where \(dq(t)=\epsilon_{t}dn(t)\) and we introduced the _entropic temperature_ as
\[T_{\mathrm{e}}(t)\equiv T_{\mathrm{e}}(\epsilon_{t},\mathcal{E}(t))=\frac{ \epsilon_{t}}{k_{\mathrm{B}}}\left[\ln\frac{\Gamma_{\mathrm{out}}(\epsilon_{t},\mathcal{E}(t)-\epsilon_{t})}{\Gamma_{\mathrm{in}}(\epsilon_{t},\mathcal{E}( t))}\right]^{-1}, \tag{11}\]
where \(\mathcal{E}(t)=E(t)+\epsilon_{t}n(t)\) denotes the total energy. Note that the entropic temperature is a stochastic variable taking on different values along different trajectories, just like \(n(t)\) and \(E(t)\).
The entropic temperature in Eq. (10) determines how the reservoir stochastic entropy changes along a given trajectory. In Fig. 1 c), we illustrate its behaviour in comparison to the actual temperature, \(T(t)\equiv T(E(t))\) which is obtained from the stochastic energy \(E(t)\) via Eq. (3). We note that the entropic temperature is a continuous function with kinks when quanta of energy are exchanged via the tunneling process. This is due to the fact that the change in total energy is determined by the work performed on the system, which exhibits kinks because work is only performed when the dot is occupied (see below).
We note that Eqs. (10) and (8) look just like Clausius' second law [48], but with stochastic quantities and with temperature being replaced by the entropic temperature. The reason that the entropic and not the actual temperature enters the entropy production is because energy exchange happens in quanta. Indeed, we find that for \(\epsilon_{t}\ll\mathcal{E}(t)\)
\[T_{\mathrm{e}}(t)\approx\sqrt{\frac{2\mathcal{E}(t)}{C^{\prime}}}-\sqrt{\frac {2\mathcal{E}(t)}{C^{\prime}}}\frac{\epsilon_{t}}{4\mathcal{E}(t)}. \tag{12}\]
Thus, when the quantization of energy becomes negligible (and \(\mathcal{E}(t)\approx E(t)\)), the entropic temperature reduces to the actual reservoir temperature \(T_{\mathrm{e}}(t)\approx T(t)\). In turn, a sizeable \(\epsilon_{t}\) leads to the disparity between the entropic temperature and the actual temperature. Our equations may thus be understood as a generalization of Clausius' second law that takes into account energy quantization in the exchange of heat.
The entropic temperature becomes particularly simple for a constant level energy \(\epsilon_{t}=\epsilon\) (again, assuming a fixed total energy)
\[T_{\mathrm{e}}(\epsilon,\mathcal{E})=\frac{\epsilon}{k_{\mathrm{B}}}\left[\ln \frac{p_{0}^{\mathrm{s}}(\epsilon|\mathcal{E})}{p_{1}^{\mathrm{s}}(\epsilon| \mathcal{E})}\right]^{-1}, \tag{13}\]
where the entropic temperature is no longer a stochastic quantity. We note that the last equation expresses detailed balance in terms of the entropic temperature. Furthermore, the stationary solution in Eq. (6) can be written in terms of the Fermi-Dirac distribution as
\[p_{1}^{\mathrm{s}}(\epsilon|\mathcal{E})=\frac{1}{1+e^{\epsilon/k_{\mathrm{B}} T_{\mathrm{e}}(\epsilon,\mathcal{E})}}. \tag{14}\]
These observations further illustrate that it is the entropic temperature that determines the thermodynamics of the dot coupled to a finite-size reservoir.
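As a numerical illustration of Eqs. (3), (5), (6), (11) and (14), the sketch below evaluates the entropic temperature for a constant level energy and compares it with the actual reservoir temperatures; the parameter values (with \(k_{\rm B}=1\)) are our own choice and merely exemplify the formulas.

```python
# Entropic temperature for a constant level energy, following Eqs. (3), (5),
# (6), (11) and (14); k_B = 1 and the parameter values are illustrative only.
import numpy as np

kB = 1.0
Cprime = 4.0          # heat capacity C(T) = C' T (so C = 4 k_B at T = 1)
total_E = 2.0         # fixed total energy, chosen such that T(total_E) = 1
eps = 1.5             # dot level energy

def T_of_E(E):
    return np.sqrt(2.0 * E / Cprime)                      # Eq. (3)

def fermi(eps, E):
    return 1.0 / (1.0 + np.exp(eps / (kB * T_of_E(E))))   # Eq. (2)

# Eq. (11)/(13): the tunnel rate Gamma drops out of the ratio of the rates
T_e = (eps / kB) / np.log((1.0 - fermi(eps, total_E - eps)) / fermi(eps, total_E))

# stationary occupation: Eq. (6) and its Fermi-Dirac form, Eq. (14)
p1_eq6 = fermi(eps, total_E) / (1.0 - fermi(eps, total_E - eps) + fermi(eps, total_E))
p1_eq14 = 1.0 / (1.0 + np.exp(eps / (kB * T_e)))

print(f"T(E) = {T_of_E(total_E):.3f}, T(E - eps) = {T_of_E(total_E - eps):.3f}, T_e = {T_e:.3f}")
print(f"first-order estimate of Eq. (12), valid for eps << total_E: "
      f"{T_of_E(total_E) * (1.0 - eps / (4.0 * total_E)):.3f}")
print(f"p_1 from Eq. (6): {p1_eq6:.3f};  p_1 from Eq. (14): {p1_eq14:.3f}")
```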
_Stochastic thermodynamics.--_ The stochastic internal energy of the system along the trajectory \(\gamma\) can be defined as
\[u(t)\equiv n(t)\epsilon_{t}. \tag{15}\]
The average internal energy is obtained by averaging this expression over the distribution for trajectories \(P(\gamma)\),
\[U(t)=\langle n(t)\rangle\epsilon_{t}=p_{1}(t)\epsilon_{t}, \tag{16}\]
where \(p_{n}(t)=\int dEp(n,E;t)\). According to the first law of thermodynamics, the system's internal energy changes can be divided into work and heat. Using Eq. (15), we identify heat and work as
\[du(t)=\epsilon_{t}dn(t)+n(t)\dot{\epsilon}_{t}dt=dq(t)+dw(t), \tag{17}\]
where the dot denotes a derivative with respect to \(t\). In this way, the stochastic heat and work along the trajectory are given by
\[q\equiv\int_{\gamma}\epsilon_{t}dn(t),\;\;\;w\equiv\int_{0}^{\tau}dt\ n(t) \dot{\epsilon}_{t}. \tag{18}\]
Similarly to Eq. (16), we can write the first law in terms of the average heat, \(Q\equiv\langle q\rangle\), and average work, \(W\equiv\langle w\rangle\),
\[\Delta U\equiv U(\tau)-U(0)=W+Q, \tag{19}\]
where
\[Q=\int_{0}^{\tau}dt\dot{p}_{1}(t)\epsilon_{t},\;\;\;\;\;W=\int_{0}^{\tau}dtp_{ 1}(t)\dot{\epsilon}_{t}. \tag{20}\]
Moreover, we can obtain the average entropy production by averaging the stochastic entropy production in Eq. (8)
\[\Sigma\equiv\langle\sigma(\gamma)\rangle=\Delta S-\left\langle\int_{\gamma} \frac{dq(t)}{T_{\mathrm{e}}(t)}\right\rangle, \tag{21}\]
where \(\Delta S\equiv\langle\Delta s(\gamma)\rangle\). Using Eq. (7), the non-negativity of the Kullback-Leibler divergence [49] implies that
\[\Sigma=k_{\mathrm{B}}\left\langle\ln\frac{P(\gamma)}{\tilde{P}(\tilde{\gamma})}\right\rangle\geq 0. \tag{22}\]
Hence, Eq. (22) can be seen as a second law of thermodynamics for the dot system coupled to the finite-size reservoir. Note that, by considering the entropy production
in Eq. (21) for a time-independent level energy \(\epsilon_{t}=\epsilon\), it follows that \(\Sigma=0\) in equilibrium, as expected.
Remarkably, we show in the supplemental information that not only the average entropy production but also the stochastic entropy production in Eq. (8) is zero in equilibrium as well as in the quasi-static limit, where the system always approximately remains in equilibrium. In this way, a parallel can be established with Clausius' second law, which also gives zero entropy production for a quasi-static process with an infinite-size reservoir at constant temperature.
_Work extraction._--To illustrate our approach, we first consider a basic protocol for work extraction. Starting at \(t=0\) with an empty dot at energy \(\epsilon_{0}\), we move the dot level down in energy with constant speed \(\nu\) to zero, as \(\epsilon_{t}=\epsilon_{0}(1-\nu t)\). By simulating a large number of trajectories, we obtain the statistical properties of the thermodynamic quantities. In Fig. 2 a), the average extracted work \(-W\), as well as the work extracted along individual trajectories, are shown as functions of time for different speeds. We see that \(-W\), as well as the number of work extraction intervals, decrease for increasing \(\nu\).
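The trajectory statistics discussed here can be reproduced in spirit with a simple time-discretized jump simulation based on Eqs. (2)-(5) and (18). The sketch below is our own implementation (not the authors' code), using \(k_{\rm B}=1\) and the Fig. 2 parameters: \(C(T)=4k_{\rm B}\), \(\epsilon_{0}=1.5\,k_{\rm B}T\), an initially empty dot with a Gaussian reservoir energy, drive speed \(\nu=\Gamma/100\), and a reduced number of trajectories.

```python
# Time-discretized jump simulation of the driven dot coupled to a finite-size
# reservoir, using the rates of Eq. (5), T(E) from Eq. (3) and the stochastic
# work/heat of Eq. (18); parameters follow Fig. 2 with k_B = 1 (our own sketch).
import numpy as np

rng = np.random.default_rng(1)
kB, Gamma = 1.0, 1.0
T_init = 1.0
Cprime = 4.0                        # C(T) = C' T, so C(T_init) = 4 k_B
eps0 = 1.5 * kB * T_init            # initial level energy
nu = Gamma / 100.0                  # drive speed (slow drive of Fig. 2)
dt = 1e-2 / Gamma                   # time step; rate * dt stays well below 1

def T_of_E(E):
    return np.sqrt(2.0 * E / Cprime)                      # Eq. (3)

def fermi(eps, E):
    return 1.0 / (1.0 + np.exp(eps / (kB * T_of_E(E))))   # Eq. (2)

def simulate(n_traj=5000):
    n = np.zeros(n_traj)                                          # dot occupation
    E = rng.normal(2.0 * kB * T_init, 0.1 * kB * T_init, n_traj)  # reservoir energy
    w = np.zeros(n_traj)                                          # stochastic work
    q = np.zeros(n_traj)                                          # heat absorbed by the dot
    t, tau = 0.0, 1.0 / nu                                        # eps_t reaches zero at tau
    while t < tau:
        eps = eps0 * (1.0 - nu * t)
        f = fermi(eps, E)
        rate = np.where(n == 0, Gamma * f, Gamma * (1.0 - f))     # Eq. (5)
        jump = rng.random(n_traj) < rate * dt
        dn = np.where(n == 0, 1.0, -1.0) * jump                   # +1: tunnel in, -1: tunnel out
        E -= eps * dn                                             # reservoir pays/receives eps
        q += eps * dn                                             # dq = eps_t dn, Eq. (18)
        n += dn
        w += n * (-eps0 * nu) * dt                                # dw = n * (d eps_t/dt) dt
        t += dt
    return w, q

w, q = simulate()
print(f"average work W = {w.mean():.3f} k_B T  (extracted -W = {-w.mean():.3f})")
print(f"average heat Q = {q.mean():.3f} k_B T")
```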
The corresponding full probability distribution of extracted work is shown in Fig. 2 b). For the fast drive, a sizeable fraction of trajectories display no electron tunneling and, hence, no work is extracted. For the slow drive the distribution becomes Gaussian-shaped. Compared to the work distribution for an infinite-size reservoir, the finite-size effects are most clearly visible for a slow drive, where they lead to a shift of the distribution towards smaller work values. Thus, the largest difference between the average work extracted with a finite and infinite-size reservoir seems to occur in the quasi-static regime.
In Fig. 2 c), the distributions of the entropic temperature \(T_{\rm e}\) at the end of the protocol for the same parameters as in Fig. 2 b) are shown. Compared to the distribution for large speed, the small speed distribution is narrowed and shifted to lower temperatures. In Fig. 2 d) it is shown how the distribution of total entropy production is narrowed and shifted towards zero when the drive speed is decreased.
To highlight the effect of a finite-size reservoir on information-to-work conversion, we analyze a Szilard engine, following closely the quantum dot protocol in Ref. [26]. Initially, the dot level energy is put to zero, giving a dot occupation probability \(1/2\), and the reservoir energy is fixed to \(E_{0}\). The occupation is then measured, with two possible outcomes: i) If the dot is empty, the level energy is instantaneously increased to \(\epsilon_{\rm i}\) and thereafter quasi-statically taken back to zero. ii) If the dot is instead occupied by an electron, the level energy is instantaneously decreased to \(-\epsilon_{\rm i}\), and thereafter quasi-statically increased back to zero. We note that the process is not completely cyclic, since the initial reservoir energy is well-defined, while the final energy is a stochastic quantity. The average extracted work \(-W\) as a function of \(\epsilon_{\rm i}\), for different heat capacities, is shown in Fig. 3. We see that decreasing the size of the reservoir leads to a monotonically decreasing \(-W\).
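For orientation, the infinite-size-reservoir curve in Fig. 3 can be obtained in closed form: in a quasi-static ramp at constant temperature \(T\) the occupation follows the Fermi function, so that \(-W=k_{\rm B}T\,[\ln 2-\ln(1+e^{-\epsilon_{\rm i}/k_{\rm B}T})]\), approaching the Szilard bound \(k_{\rm B}T\ln 2\) for large \(\epsilon_{\rm i}\). The finite-size curves require Eqs. (S35) and (S40) of the supplemental material, which are not reproduced here; the sketch below only evaluates the ideal benchmark.

```python
# Infinite-size-reservoir (ideal) benchmark for the Szilard protocol of Fig. 3:
# quasi-static ramp at constant temperature T, so p_1 follows the Fermi function.
# The finite-size curves need Eqs. (S35)/(S40), which are not reproduced here.
import numpy as np

kB, T = 1.0, 1.0

def extracted_work_ideal(eps_i):
    # Branch (i): dot measured empty, level raised to eps_i at no work cost, then
    # lowered quasi-statically; -W = int_0^{eps_i} f(eps) d(eps).
    # Branch (ii) gives the same total, so this is also the engine average.
    return kB * T * (np.log(2.0) - np.log(1.0 + np.exp(-eps_i / (kB * T))))

for eps_i in (0.5, 1.0, 2.0, 5.0, 10.0):
    print(f"eps_i = {eps_i:4.1f} k_B T  ->  -W = {extracted_work_ideal(eps_i):.3f} k_B T")
print(f"Szilard bound k_B T ln 2 = {kB * T * np.log(2.0):.3f}")
```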
_Conclusion._-- We provided a consistent thermodynamic description for a two-level system, namely a quantum dot, coupled to a finite-size reservoir. In our approach, the reservoir entropy along a given trajectory is determined by the entropic temperature, which therefore
Figure 2: Stochastic thermodynamic quantities. a) The average extracted work \(-W\) (solid lines) and the extracted work along a typical trajectory (dashed lines) as a function of time \(t\). b) The probability distribution of work extracted during the protocol. Solid (dashed) lines are for a finite (infinite) size reservoir. c) The probability distribution of the entropic temperature \(T_{\rm e}\) at the end of the protocol. d) The probability distributions of the total entropy productions \(\Sigma\) during the protocol. In all panels the heat capacity \(C(T)=4k_{\rm B}\) and the dot level energy is driven as \(\epsilon_{t}=\epsilon_{0}(1-\nu t)\) with \(\epsilon_{0}/k_{\rm B}T=1.5\) and \(10^{6}\) trajectories have been generated. Initially, at \(t=0\), the dot is empty and the reservoir energy distribution is Gaussian, with average \(2k_{\rm B}T\) and width \(0.1k_{\rm B}T\). The drive speeds are \(\nu=\Gamma/100\) (blue lines) and \(\Gamma/10\) (red lines).
Figure 3: Work extracted, \(-W\), as a function of \(\epsilon_{\rm i}\) in a cycle of the Szilard engine in the quasi-static limit. Dashed lines: quasi-static expression, c.f. Eq. (S35), solid lines: perturbative solution for large heat capacities, c.f. Eq. (S40) for the initial heat capacities: \(16k_{\rm B}\) (green line), \(20k_{\rm B}\) (orange line), \(40k_{\rm B}\) (blue line), \(100k_{\rm B}\) (purple line). The infinite-size reservoir case corresponds to the red line.
dictates the thermodynamics of the system and finite-size reservoir. Notably, we found that the entropic temperature is required to describe the thermodynamics of the system and reservoir as long as energy exchange occurs in quanta. When energy quantization is negligible, the entropic temperature reduces to the actual temperature, and therefore a connection with Clausius' second law is established. We complete our analysis by defining work and heat, and by showing that the stochastic entropy production vanishes for each trajectory in the quasi-static limit. Our results are illustrated by a protocol for work extraction and for the Szilard engine.
Our results show how to describe the thermodynamics of a finite-size reservoir that can be described by a fluctuating temperature. While we focus on an electronic system, our results can easily be generalized to other scenarios, e.g., a superconducting qubit coupled to an electro-magnetic environment or an electron spin coupled to nuclear spins. In addition, our approach can be adapted to more complicated scenarios, including quantum effects and transport between different reservoirs.
Finally, we note that a microscopic derivation of our master equation in Eq. (4) would provide quantitative insight into the limitation of our approach and is left for future work. A starting point for such a derivation could be provided by the extended micro-canonical master equation [42; 50; 51; 52; 53; 54].
_Acknowledgements.--_ S.V.M. and P.S. acknowledge support from the Knut and Alice Wallenberg Foundation (Project No. 2016-0089). S.V.M. acknowledges funding from the European Commission via the Horizon Europe project ASPECTS (Grant Agreement No. 101080167), and P.S. acknowledges support from the Swedish Research Council (Grant No. 2018-03921). P.P.P. acknowledges funding from the Swiss National Science Foundation (Eccellenza Professorial Fellowship PCEFP2_194268).
|
2304.07037 | No Easy Way Out: the Effectiveness of Deplatforming an Extremist Forum
to Suppress Hate and Harassment | Legislators and policymakers worldwide are debating options for suppressing
illegal, harmful and undesirable material online. Drawing on several
quantitative data sources, we show that deplatforming an active community to
suppress online hate and harassment, even with a substantial concerted effort
involving several tech firms, can be hard. Our case study is the disruption of
the largest and longest-running harassment forum Kiwi Farms in late 2022, which
is probably the most extensive industry effort to date. Despite the active
participation of a number of tech companies over several consecutive months,
this campaign failed to shut down the forum and remove its objectionable
content. While briefly raising public awareness, it led to rapid platform
displacement and traffic fragmentation. Part of the activity decamped to
Telegram, while traffic shifted from the primary domain to previously abandoned
alternatives. The forum experienced intermittent outages for several weeks,
after which the community leading the campaign lost interest, traffic was
directed back to the main domain, users quickly returned, and the forum was
back online and became even more connected. The forum members themselves
stopped discussing the incident shortly thereafter, and the net effect was that
forum activity, active users, threads, posts and traffic were all cut by about
half. Deplatforming a community without a court order raises philosophical
issues about censorship versus free speech; ethical and legal issues about the
role of industry in online content moderation; and practical issues on the
efficacy of private-sector versus government action. Deplatforming a dispersed
community using a series of court orders against individual service providers
appears unlikely to be very effective if the censor cannot incapacitate the key
maintainers, whether by arresting them, enjoining them or otherwise deterring
them. | Anh V. Vu, Alice Hutchings, Ross Anderson | 2023-04-14T10:14:16Z | http://arxiv.org/abs/2304.07037v7 | # No Easy Way Out: The Effectiveness of Deplatforming an
###### Abstract
Legislators and policymakers worldwide are debating options for suppressing illegal, harmful and undesirable material online. Drawing on several quantitative data sources, we show that deplatforming an active community to suppress online hate and harassment, even with a substantial collective effort involving several tech firms, can be hard. Our case study is the disruption of the largest and longest-running harassment forum Kiwi Farms in late 2022, which is probably the most extensive industry effort to date. We collected complete snapshots of this site and its primary competitor Lolcow Farm, encompassing over 14.7M posts during their lifespan over the past decade. These data are supplemented with a full scrape of the Telegram channel used to disseminate new updates when the forum was down, tweets made by the online community leading the takedown, and with search interest and web traffic to the forum spanning two months before and four months after the event. Despite the active participation of a number of tech companies over several consecutive months, this campaign failed to shut down the forum and remove its objectionable content. While briefly raising public awareness, it led to rapid platform displacement and traffic fragmentation. Part of the activity decamped to Telegram, while traffic shifted from the primary domain to previously abandoned alternatives. The forum experienced intermittent outages for several weeks, after which the community leading the campaign lost interest, traffic was directed back to the main domain, users quickly returned, and the forum was back online and became even more connected. The forum members themselves stopped discussing the incident shortly thereafter. The net effect was that forum activity, active users, threads, posts and traffic were all cut by about half. The disruption largely affected casual users (of whom roughly 87% left), while half the core members remained engaged. It also drew many newcomers, who exhibited increasing levels of toxicity during the first few weeks of participation. Deplatforming a community without a court order raises philosophical issues about censorship versus free speech; ethical and legal issues about the role of industry in online content moderation; and practical issues on the efficacy of private-sector versus government action. Deplatforming a dispersed community using a series of court orders against individual service providers appears unlikely to be very effective if the censor cannot incapacitate the key maintainers, whether by arresting them, enjoining them or otherwise deterring them.
deplatforming, hate, harassment, online forums, website takedown, content moderation; censorship; Kiwi Farms.
## I Introduction
Online content is now prevalent, widely accessible, and influential in shaping public discourse. Yet while online places facilitate free speech, they do the same for hate speech [1], and the line between the two is often contested. Some cases of stalking, bullying, and doxing such as Gamergate have had real-world consequences, including violent crime as well as political mobilisation [2]. Content moderation has become a critical function of tech companies, but also a political tussle space, since abusive accounts may affect online communities in significantly different ways [3]. Online social platforms employ various mechanisms to detect, moderate, and suppress objectionable content [4], including "hard" and "soft" techniques [5]. These range from reporting users of illegal content to the police, through deplatforming users who break terms of service [6], to moderating legal but obnoxious content [7], which may involve actions such as flagging it with user warnings, downranking it in recommendation algorithms, or preventing it from being monetized through ads [8, 9, 10].
Deplatforming may mean blocking individual users, but sometimes the target is not a single bad actor, but a whole community, such as one involved in crime [11]. It can be undertaken by industry, as when Cloudflare terminated service for the Daily Stormer after the Unite the Right rally in Virginia in 2017 [12] and for 8chan in August 2019 [13]; or by law enforcement, as with the FBI taking down DDoS-for-hire services in 2018 [14, 15] and 2022 [16], and seizing Raid Forums in 2022 [17]. Industry disruption has often been short-lived; both 8chan and Daily Stormer re-emerged shortly after being disrupted. Police intervention is often slow and less effective, and its impact may also be temporary [11]. After the FBI shut down Silk Road in 2013 [18], the online drug market fragmented among multiple smaller marketplaces [19]. The seizure of Raid Forums led to the emergence of its successor Breach Forums. Furthermore, the takedowns against DDoS-for-hire services cut the attack volume significantly, yet the market recovered rapidly [14, 15].
Kiwi Farms is the largest and longest-running online harassment forum [20, 21]. It is often associated with real-life trolling and doxing campaigns against feminists, gay rights campaigners and minorities such as disabled, transgender, and autistic individuals; some have killed themselves after being harassed [22]. Despite being unpleasant and widely controversial, the forum has been online for a decade and had been shielded by Cloudflare's DDoS protection for years. This came to an end following serious harassment by forum members of a Canadian trans activist, culminating in a swatting incident in August 2022.1 This resulted in a community-led campaign on Twitter to pressure Cloudflare and other tech firms to drop the forum [23, 24, 25]. This escalated quickly, generating significant social media attention and mainstream headlines. A series of tech firms then attempted to take the forum down; they included DDoS protection services, infrastructure providers, and even some Tier-1 networks [26, 27, 28],
[29, 30]. This extraordinary series of events lasted for a few months and was the most sustained effort to date to suppress an active online hate community. It is notable that tech firms gave in to public pressure in this case, while they have in the past resisted substantial pressure from governments.
Existing studies have investigated the efficacy of deplatforming social-media users [31, 32, 33, 34, 35, 36, 37], yet there has been limited research into the effectiveness of industry disruptions against hate communities - both quantitatively and qualitatively. This paper investigates how well the industry dealt with a hate site. Our goals were to evaluate the efficacy of the effort; to understand the impacts and challenges of deplatforming as a means to suppress online hate and harassment; and to examine the role of industry in censorship and content regulation.
We outline the disruption landscape in §II, then describe our methods and datasets in §III. Sections IV and V assess the impacts on the forum itself and the relevant stakeholders. We discuss the role of industry in tackling online harassment, censorship and content regulation, as well as legal, ethical, and policy implications of the incident in §VI. Our data collection and analyses were approved by our institutional Ethics Review Board. Our data are available to academics on request.
## II Deplatforming to Suppress Online Hate and Harassment
There is a complex ecosystem of online abuse, which has been evolving for decades [38]. There can be a large grey area between criminal behaviour and socially acceptable behaviour online, just as in real life. And just as a pub landlord will throw out rowdy customers so platforms have acceptable use policies backed by content moderation [39], to enhance their users' experience and protect advertising revenue [40].
### _Deplatforming and its Efficacy_
Deplatforming refers to blocking, excluding or restricting individuals or groups from using online services, on the grounds that their activities are unlawful, or that they do not comply with the platform's acceptable use policy [6]. Various extremists and criminals have been exploiting online platforms for over thirty years, resulting in a complex ecosystem in which some harms are prohibited by the criminal law (such as terrorist radicalisation and child sex abuse material) while many others are blocked by platforms seeking to provide welcoming spaces for their users and advertisers. For a history and summary of current US legislative tussles and their possible side-effects, see Fishman [41]. The idea is that if a platform is used to disseminate abusive speech, removing the speech or indeed the speakers could restrict its spread, make it harder for hate groups to recruit, organise and coordinate, and ultimately protect individuals from mental and physical harm. Deplatforming can be done in various ways, ranging from limiting users' access and restricting their activity for a time period, to suspending an account, or even stopping an entire group of users from using one or more services. For example, groups banned from major platforms can displace to other channels, whether smaller websites or messenger services [6].
Different countries draw the line between free speech and hate speech differently. For example, the USA allows the display of Nazi symbols while France and Germany do not [42]. Private firms offering ad-supported social networks generally operate much more restrictive rules, as their advertisers do not want their ads appearing alongside content that prospective customers are likely to find offensive. People wishing to generate and share such material therefore tend to congregate on smaller forums. Some argue that taking down such forums infringes on free speech and may lead to censorship of legitimate voices and dissenting opinions, especially if it is perceived as politically motivated. Others maintain that deplatforming is necessary to protect vulnerable communities from harm. Debates rage in multiple legislatures; as one example, the UK Online Safety Bill will enable the (politically-appointed) head of Ofcom, the UK broadcast regulator, to obtain court orders to shut down online places that are considered harmful [43]. This led us to ask: how effective might such an order be?
Most studies assessing the impact of deplatforming have worked with data on social networks. Deplatforming users may reduce activity and toxicity levels of relevant actors on Twitter [31] and Reddit [32, 33], limit the spread of conspiratorial disinformation on Facebook [34], and minimise disinformation and extreme speech on YouTube [35]. But deplatforming has often made hate groups and individuals even more extreme, toxic and radicalised. They may view the disruption of their platform as an attack on their shared beliefs and values, and move to even more toxic places to continue spreading their message. There are many examples: the Reddit ban of r/incels in November 2017 led to the emergence of two standalone forums, incels.is and incels.net, which then grew rapidly; users banned from Twitter and Reddit exhibit higher levels of toxicity when migrating to Gab [36]; users who migrated to their own standalone websites after getting banned from r/The_Donald expressed higher levels of toxicity and radicalisation, even though their posting activity on the new platform decreased [44, 45]; the 'Great Deplatforming' directed users to other less regulated, more extreme platforms [46]; the activity of many right-wing users who moved to Telegram increased multi-fold after being banned on major social media [37];
Figure 1: Activity levels and major incidents affecting Kiwi Farms during its one-decade lifetime from 2013 to late 2022.
users banned from Twitter are more active on Gettr [47]; and communities that migrated to Voat from Reddit can be more resilient [48]. Blocking can also be ineffective for technical and implementation reasons: removing Facebook content after a delay appears to have been ineffective and had limited impact due to the short cycle of users' engagement there [49].
Standalone communities, such as websites and forums, may be more resilient as the admin has control of all the content, facilitating easy backups and restores. Previous work has documented the impacts of law enforcement interventions on online cybercrime marketplaces and services [14, 15, 19], yet how effective the industry can be in dealing with such extreme, radicalised communities remains unstudied.
### _Kiwi Farms and the Disruptions_
Kiwi Farms had been growing steadily over a decade (see Figure 1) and had been under Cloudflare's DDoS protection for some years.2 An increase of roughly 50% in forum activity happened during the Covid-19 lockdown starting in March 2020, presumably as people were spending more time online. Prior interventions have resulted in the forum getting banned from Google Adsense, and from Mastercard, Visa and PayPal in 2016; from hundreds of VPS providers between 2014-2019 [50]; and from selling merchandise on the print-on-demand marketplace Redbubble in 2016. XenForo, a closed-source forum platform, revoked its license in late 2021 [51]. DreamHost stopped its domain registration in July 2021 after a software developer killed himself after being harassed by the site's users. This did not disrupt the forum as it was given 14 days to seek another registrar [52]. While these interventions may have had negative effects on its profit and loss account, they did not impact its activity overall. The only significant disruption in the forum's history was between 22 January and 9 February 2017 (19 days), when the forum's owner suspended it himself due to his family being harassed [53].3
Footnote 2: Cloudflare’s service tries to detect suspicious patterns and drop malicious ones, only letting legitimate requests through.
The disruption studied in this work was started by the online community in 2022. A malicious alarm was sent to the police in London, Ontario by a forum member on 5 August, claiming that a Canadian trans activist had committed murders and was planning more, leading to her being swatted [54]. She and her family were then repeatedly tracked, doxed, threatened, and generally harassed. In return, she launched a campaign on Twitter on 22 August under the hashtag #dropkiwifarms and organised a protest outside Cloudflare's headquarters to pressure the company to deplatform the site [55]. This campaign generated lots of attention and mainstream headlines, which ultimately resulted in several tech firms trying to shut down the forum. This is the first time that the forum was completely inaccessible for an extended period due to an external action, with no activity on any online places including the dark web. It attempted to recover twice, but even when it eventually returned online, the overall activity was roughly halved.
The majority of actions taken to disrupt the forum occurred within the first two months of the campaign. Most of them were widely covered in the media and can be checked against public statements made by the industry and the forum admins' announcements (see Figure 2). The forum came under a large DDoS attack on 23 August, one day after the campaign started. It was then unavailable from 27 to 28 August due to ISP blackholing. Cloudflare terminated their DDoS prevention service on 3 September - just 12 days after the Twitter campaign started - due to an "unprecedented emergency and immediate threat to human life" [26]. The forum was still supported by DDoS-Guard (a Russian competitor to Cloudflare), but that firm also suspended service on 5 September [27]. The forum was still active on the dark web but this.onion site soon became inaccessible too. On 6 September, hCaptcha dropped support; the forum was removed from the Internet Archive on the same day [56]. This left it under DiamWall's DDoS protection and hosted on VanwaTech - a hosting provider describing themselves as neutral and non-censored [57]. On 15 September 2022, DiamWall terminated their protection [28] and the '.top' domain provider also stopped support [29]. The forum was completely down from 19 to 26 September and
Figure 2: Major incidents disrupting Kiwi Farms from September to December 2022. Green stars indicate the forum recovery.
from 23 to 29 October. From 23 October onwards, several ISPs intermittently rejected announcements or blackholed routes to the forum due to violations of their acceptable use policy, including Voxility and Tier-1 providers such as Lumen, Arellion, GTT and Zayo. This is remarkable as there are only about 15 Tier-1 ISPs in the world. The forum admin devoted extensive effort to maintaining the infrastructure, fixing bugs, and providing guidance to users in response to password breaches. Eventually, by routing through other ISPs, the forum was able to get back online and remain stable, particularly following its second recovery.
## III Methods and Datasets
Our primary approach is data-driven, with findings supported by quantitative evidence derived from multiple longitudinal data sources. Where applicable, we enrich the findings with complementary qualitative content analysis of posts, tweets, announcements, and public statements. Our collection is maintained on a regular basis. All the data used are widely accessible and can be publicly scraped by anyone. We refrain from scraping images due to safety and legality concerns.
### _Forum and Imageboard Discussions_
Besides common mainstream social media channels like Facebook and Twitter, independent platforms such as xenForo and Infinity have gained popularity as tools for building online communities. Despite being less visible and requiring more upkeep, these can offer greater resistance against external intervention as the operators have full control over the content and databases, thereby allowing easy backup and redeployment in case of disruption. These platforms typically share a hierarchical data structure ranging from bulletin boards down to threads linked to specific topics, each containing several posts. While facilitating free speech, these also increasingly nurture and disseminate hate and abusive speech. We have been scraping the two most active forums associated with online harassment for years due to their increasingly toxic content, as part of the ExtremeBB dataset [21]: Kiwi Farms and Lolcow Farm.
Footnote 4: The xenForo Platform: [https://xenforo.com/](https://xenforo.com/)
Footnote 5: The Infinity Imageboard: [https://github.com/ctrlcctrlv/infinity/](https://github.com/ctrlcctrlv/infinity/)
Our collection includes not only posts but also associated metadata such as posting time, user profiles, reactions, and levels of _toxicity_, _identity attack_ and _threat_ measured by the Google Perspective API as of January 2023.6 Perspective API also offers other measures such as _insult_ and _profanity_[58], but we exclude these due to lack of relevance to this paper's aim. We strive to ensure data completeness by designing our scrapers to visit all sub-forums, threads, and posts while keeping track of every single crawl's progress to resume incrementally in case of any interruption.
Footnote 6: Google Perspective API: [https://perspectiveapi.com/](https://perspectiveapi.com/)
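A minimal sketch of how such scores can be obtained from the Perspective API's `comments:analyze` endpoint is shown below; the API key is a placeholder, and quota, language support and attribute availability depend on the key.

```python
# Scoring a post with the Perspective API (toxicity, identity attack, threat);
# the API key is a placeholder and must be requested from the API's console.
import requests

API_KEY = "YOUR_API_KEY"
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def score(text: str) -> dict:
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}, "IDENTITY_ATTACK": {}, "THREAT": {}},
    }
    response = requests.post(URL, json=body, timeout=30)
    response.raise_for_status()
    scores = response.json()["attributeScores"]
    return {name: attr["summaryScore"]["value"] for name, attr in scores.items()}

print(score("an example forum post"))
```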
Kiwi Farms is built on xenForo, but the operators have been maintaining the forum by their own efforts since late 2021 when xenForo officially revoked their license. Our data covers the entire history of the forum from early January 2013 to the end of 2022 with 10.1M posts in 48k threads made by 59k active users, providing a full landscape through its evolution over time. While some extremist forums experienced fluctuating activity and rapid declines in recent years [21], Kiwi Farms has shown stable growth until being significantly disrupted in 2022 (see Figure 1). Our data precisely capture major reported suspensions, including those in 2017 and 2022.
According to Similarweb [59] and Semrush [60], the primary rival is Lolcow Farm, an imageboard built on Infinity. While Kiwi Farms discussions are largely text-based, Lolcow Farm is centred on descriptive images. While Kiwi Farms users adopt pseudonyms, Lolcow Farm users mostly remain hidden under the unified 'Anonymous' handle. We gathered a complete snapshot of Lolcow Farm from its inception in June 2014 to the end of 2022, encompassing 4.6M posts made in 10k threads. Lolcow Farm has much fewer threads, but each typically contains lots of posts. This collection brings the total number of posts for both forums to 14.7M (and still growing). We exclude Lolcow, a smaller competitor to Kiwi Farms (also based on xenForo), as it vanished in mid-2022 and had less than 30k posts in total. As Lolcow Farm is now the largest competitor, analysing it lets us estimate platform displacement when Kiwi Farms was down.
### _Telegram Chats_
During periods of inaccessibility, the activity level increased in a Telegram group, which was mainly used to disseminate announcements and updates, particularly about where and when the forum could be accessed. This channel permits public access, allowing people to join and view historical messages. We used Telethon to collect a snapshot of this channel during its lifespan from late August to the end of 2022, encompassing 525k messages, 298k replies, and associated metadata such as view counts and 356k emoji reactions made by 2 502 active users. The data is likely complete as messages and metadata are fully captured through the use of official Telegram APIs. As the forum operators are driven to keep users quickly informed, their announcements provide a reliable incident and response timeline.
Footnote 7: Telethon: [https://telethon.dev/](https://telethon.dev/)
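A minimal Telethon sketch of this kind of channel collection is shown below; the channel handle, API credentials, and output handling are illustrative placeholders rather than the exact crawler used.

```python
from telethon.sync import TelegramClient

API_ID, API_HASH = 12345, "your_api_hash"   # placeholders from my.telegram.org
CHANNEL = "example_public_channel"           # placeholder public channel handle

with TelegramClient("session", API_ID, API_HASH) as client:
    # Walk the full history, oldest message first, via the official API.
    for msg in client.iter_messages(CHANNEL, reverse=True):
        record = {
            "id": msg.id,
            "date": msg.date,
            "text": msg.text,
            "views": msg.views,
            "replies": msg.replies.replies if msg.replies else 0,
        }
        print(record)  # in practice, append to a database or file
```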
Footnote 8: Other domains include kiwifarms.tw, kiwifarms.hk, and kiwifarms.pl, however they are either new or insignificant so their traffic data is trivial.
### _Web Traffic and Search Trends Analytics_
We found from announcements in the Telegram group that Kiwi Farms could be accessed through six major domains: the primary one is kiwifarms.net and four alternatives are kiwifarms.ru, kiwifarms.top, kiwifarms.is, and kiwifarms.st, while a Pleroma decentralised web version is at kiwifarms.cc.9 To investigate how users navigated across these domains when the forum experienced disruption, we analysed traffic analytics towards all six domains provided by Similarweb - the leading platform in the market providing insights and intelligence into web traffic and performance.10 Their reports aggregate
anonymous statistics from multiple inputs, including their own analytic services, data sharing from ISPs and other measurement companies, data crawled from billions of websites, and device traffic data (both website and app) such as plugins, add-ons and pixel tracking. Their algorithm then extrapolates the substantial aggregated data to the entire Internet space. Their estimation therefore may not be completely precise, but reliably reflects trends at both global and country levels. In a separate paper, we tested the reliability of Similarweb data with a comparison to millions of ground truth traffic records collected from our own infrastructure over 6 months, showing that while Similarweb largely underestimates the amount of traffic, it is able to capture trends with a very high correlation (Pearson's coefficient \(>0.9\)) [citation hidden]. Our analysis in the next section also suggests a high correlation between the traffic data and the forum activity.
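The consistency check mentioned above amounts to correlating two aligned daily series; a toy sketch (with made-up numbers standing in for the real daily counts) is:

```python
import numpy as np
from scipy.stats import pearsonr

# Toy aligned daily series; placeholders for the real Similarweb and forum counts.
similarweb_visits = np.array([52000, 48000, 61000, 30000, 5000, 4000, 25000, 40000])
forum_posts       = np.array([ 9100,  8700, 10900,  5600,  900,  700,  4300,  7200])

r, p = pearsonr(similarweb_visits, forum_posts)
print(f"Pearson's r = {r:.2f} (p = {p:.3g})")
```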
As Similarweb does not offer an academic license, we use a free trial account10 to access longitudinal web traffic and engagement data going back the past three months. This includes information about total visits, unique visitors, visit duration, pages per visit, bounce rate, and page views. It also provides figures on search activity, data for marketing such as visit sources (e.g., direct, search, email, social, referral, ads), and non-temporal insight into audience geography and demographics. These data, covering both desktop and mobile traffic, provide valuable perspectives. They span from July to December 2022, two months before and four months after the disruption; this time frame is sufficient as there was no significant industry intervention against the forum in the past (as shown in Figure 1), and the disruption campaign mostly ended after a few months (see §IV). In addition, we also collected search trends by countries and territories over time from Google Trends, covering the entire lifetime of the forum. Both of these datasets are likely to be complete as they were gathered directly from Similarweb and Google.
Footnote 10: A business subscription offers 6 months of historical data, but neither it nor the free trial provides access to longitudinal country-based records.
### _Tweets Made by the Online Community_
The disruption campaign started on Twitter on 22 August 2022 with tweets posted under the hashtag #dropkiwifarms. We gathered the main tweets plus associated metadata, such as posting time and reactions (e.g., replies, retweets, likes, and quotes) using Snscrape, an open-source Python framework for social network scrapers.11 As they use Twitter APIs as the underlying method, the data are likely to be complete. We collected 11 076 tweets made by 3 886 users, spanning the entire campaign period. This data helps us understand the community reaction throughout the campaign, when the industry took action, and when the forum recovered. There might be more related tweets without the hashtag #dropkiwifarms of which we are unaware, but we believe the trend measured by our collection is reliable.
Footnote 11: Snscrape: [https://github.com/JustAnotherArchivist/snscrape/](https://github.com/JustAnotherArchivist/snscrape/)
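A sketch of the kind of query used is given below; the exact date bounds and stored fields are illustrative, and snscrape's Twitter module worked via the public APIs at the time of collection.

```python
import snscrape.modules.twitter as sntwitter

QUERY = "#dropkiwifarms since:2022-08-22 until:2023-01-01"

tweets = []
for tweet in sntwitter.TwitterSearchScraper(QUERY).get_items():
    tweets.append({
        "date": tweet.date,
        "user": tweet.user.username,
        "text": tweet.rawContent,        # 'content' on older snscrape versions
        "replies": tweet.replyCount,
        "retweets": tweet.retweetCount,
        "likes": tweet.likeCount,
        "quotes": tweet.quoteCount,
    })
print(len(tweets), "tweets collected")
```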
### _Data Licensing_
Our datasets and scripts for data collection and analysis are available to academics. However, as both researchers and involved actors such as forum members might be exposed to risk and harm [61], we decline to make our data publicly accessible. It is our standard practice at the Cambridge Cybercrime Centre to require our licensees to sign an agreement to prevent misuse, to ensure the data will be handled appropriately, and to keep us informed about research outcomes. We have a long history of sharing such sensitive data, and robust procedures to enable data sharing in multiple jurisdictions.
### _Ethical Considerations_
Our work was formally approved by our institutional Ethics Review Board (ERB) for data collection and analysis. Our datasets are collected on publicly available forums and channels, which are accessible to all. We collected the forum when it was hosted in the US; according to a 2022 US court case, scraping public data is legal [62]. Our scraping method does not violate any regulations and does not cause negative consequences to the targeted websites e.g., bandwidth congestion or denial of service. It would be impractical to send thousands of messages to gain consent from all forum and Telegram members; we assume they are aware that their posting activity on public online places will be widely accessible.
In contrast to some previous work on online forums, we name the investigated forums in this paper. Pseudonymising the forum name is pointless because of the high-profile campaign being studied. Thus, we avoid the pretence that the forum is not identifiable and shift the focus to accounting for the potential harms to both researchers and involved actors associated with our research. We designed our analysis to operate ethically and collectively by only presenting aggregated behaviours to avoid private and sensitive information of individuals being inferred. This is in accordance with the British Society of Criminology Statement on Ethics [63].
Researchers may be at risk when doing work on sensitive data [61]. Studying extremist forums may introduce a higher risk of retaliation than other forums, resulting in mental or physical harm. We have taken measures to minimise potential harm to researchers and involved actors when doing studies with human subjects and at-risk populations [64, 65]. For example, we consider options to anonymise authors' names or use pseudonyms for any publication related to the project, including this paper, if necessary. We also refrain from directly looking at media; our data collection only scrapes text while discarding images and private/protected posts.
## IV The Impacts on Forum Activity and Traffic
On 3 September, Cloudflare discontinued its DDoS prevention service, which attracted major publicity. This intervention led to a sudden and significant increase in global search interest about Kiwi Farms with a seven-fold spike, along with the web traffic to the six major domains doubling on 4 September (see Figure 3). This phenomenon, known as the Streisand effect, might be caused by people's curiosity
about what happened to the platform; the effect is relatively rare and mainly seen with 'freedom of speech' issues [11]. It suggests that attempts at censorship may actually end up being counterproductive [66]: the disruptive effort aiming to reduce user interactions instead led to the unintended consequence of increased attention, although the effect lasted for only a few days before declining sharply. We now examine in detail the impacts of the disruption and the forum recovery on Kiwi Farms itself within 6 months from July to December 2022. This time frame provides a sufficient understanding, as the campaign was mostly over and the forum was growing stably before the disruption.
### _The Impacts of Major Disruptions_
While some DDoS attacks were large enough to shut the forum down, their impact was temporary. For example, the DDoS attack on 23 August - which was probably associated with the Twitter campaign the previous day - led to a drop of roughly 35% in posting volume, yet the forum activity recovered the next day to a slightly higher level (see the first graph of Figure 4). The DDoS attack during Christmas 2022 was also short-lived. The ISP blackholing on 26 August was more critical, silencing the forum for two consecutive days, yet it again managed to recover quickly.
The most significant, long-lasting impact was caused by the substantial industry disruption that we analyse in this paper. While activity immediately dropped by around 20% after Cloudflare's action on 3 September, the forum was still online at kiwifarms.ru, hosting the same content. Activity did not degrade significantly until DDoS-Guard's action on 5 September, which took down the Russian domain. By 18 September, all domains were unavailable, including .onion (presumably their hosting was identified); forum activity dropped to zero and stayed there for a week. The operator managed to get the forum back online for the first time on 27 September 2022, after which it ran stably on both the dark web and clear web for roughly one month until Zayo - a Tier-1 ISP - blocked it on 23 October. This led to another silent week before the forum eventually recovered a second time on 30 October. It has been stable since then without serious downtime except for the ISP blackholing on 22 December, which led to a 70% drop in activity. In general, although the forum is now back online, hosted on 1776 Solutions - a company also founded by the forum's owner - it has failed to bounce back to the pre-disruption level, with the number of active users and posting volume roughly halved. In short, the industry effort was much more effective than previous DDoS attacks, yet still could not silence the forum for long.
### _Platform Displacement_
The natural behaviour of online communities when their usual gathering place becomes inaccessible is to seek alternative places or channels. The second graph in Figure 4 shows an initial shift of forum activity to Telegram that occurred on 27 August, right after the ISP blackholing. This was accompanied by thousands of emoji reactions on the admin's announcements since commenting was not allowed at that time. Community reactions (e.g., replies, emojis) seem to have been consistent with the overall Telegram posting activity, which increased rapidly afterwards and even occasionally surpassed the forum's activity, especially after the publicity given to the Cloudflare and DDoS-Guard actions. However, significant displacements only occurred when all domains were completely inaccessible on 18 September, and again when Zayo blocked the forum's second incarnation on 22 October. The shift to Telegram appears to be rapid yet rather temporary: users quickly returned to the forum when it became available, while activity on Telegram gradually declined.
There was no significant shift in activity from the forum to its primary competitor Lolcow Farm (see the third graph of Figure 4); however, there was an increase in posting on Lolcow Farm about the incident, indicating a minor change of discussion topic (see more in §V-D). It is unclear if these posting users migrated from Kiwi Farms, as Lolcow Farm does not use handles, making user counts unavailable. Lolcow Farm also experienced downtime on 17 and 18 September (the same days as Kiwi Farms), yet we have no reliable evidence from which to draw any convincing explanation. Another drop occurred around Christmas 2022 in sync with Kiwi Farms, perhaps because of the holiday. The activity of Lolcow Farm returned to its previous level quickly after these drops, suggesting that the campaign did not significantly impact Lolcow Farm or drive content between the rival ecosystems; the displacement we observed on Kiwi Farms was mostly 'internal' within its own ecosystem, rather than an 'external' shift to other forums.
### _Traffic Fragmentation_
Before Cloudflare's action on 3 September, traffic towards Kiwi Farms (measured by Similarweb) was relatively steady, mostly occupied by the primary domain. However, we see the Streisand effect (as also seen in Figure 3) with an immediate peak in traffic of around 50% more visits and 85% more visitors once the site was disrupted. The publicity given by the takedown presumably boosted awareness and attracted people to visit both the primary and alternative domains. Traffic to the primary domain was then significantly fragmented to other previously abandoned domains, resulting in kiwifarms.net accounting for less than 50% of visits one day after Cloudflare's intervention, as shown in Figure 5.
Figure 3: Global search trends and traffic to all forum domains during the disruption. The star indicates the Streisand effect.
Following the unavailability of kiwifarms.net, most traffic was directed to kiwifarms.ru, which was under DDoS Guard's protection (accounting for around 60% total traffic on 4 September). The DDoS-Guard's action on 5 September reduced traffic towards kiwifarms.ru sharply, while traffic towards kiwifarms.top peaked. The suspension of kiwifarms.top on the following day led to increased traffic towards kiwifarms.cc (a Pleroma decentralised web instance), but it only lasted for a couple of days before traffic shifted again to kiwifarms.is. The seizure of kiwifarms.is later led to the traffic shifting to kiwifarms.st, but it was also short-lived.
The forum recovery on 27 September gradually directed almost all traffic back to the primary domain, and by 22 October, kiwifarms.net mostly accounted for all traffic, albeit at about half the volume. This effect is highly consistent with what has been found in our forum data, indicating a reliable pattern. Overall, our evidence suggests a clear traffic fragmentation across different domains, in which people attempted to visit surviving domains when one was disrupted.
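The fragmentation in Figure 5 can be expressed as each domain's share of the total estimated visits per day; a sketch with a hypothetical long-format file `traffic.csv` (columns `date`, `domain`, `visits`) is:

```python
import pandas as pd

df = pd.read_csv("traffic.csv", parse_dates=["date"])  # hypothetical export of daily estimates

daily = df.pivot_table(index="date", columns="domain", values="visits", aggfunc="sum")
share = daily.div(daily.sum(axis=1), axis=0)            # each domain's fraction of that day's total

# e.g. inspect the split on the day after Cloudflare's action
print(share.loc["2022-09-04"].sort_values(ascending=False))
```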
## V The Impacts on Relevant Stakeholders
We have looked at the impacts of the disruption on Kiwi Farms itself. This section examines the effects on relevant stakeholders, including the harassed victim, the community leading the campaign, the industry, the forum operators, and active forum users who posted at least once. As our ethics approval does not allow the study of individuals, all measurements are conducted collectively on subsets of users. Besides quantitative evidence, we also qualitatively look at statements made by tech firms about the incident.
### _The Community who Launched the Campaign_
There were 3 886 users in the online community involved in starting the campaign. Of these, 1 670 users (42.97%) were responsible for around 80% of tweets. There was a sharp increase in tweets and reactions at the beginning (see Figure 6). The first peak was on 25 August with nearly 900 tweets by around 600 users. However, this dropped rapidly to less than 100 per day after a few weeks when Cloudflare and DDoS-Guard took action, and almost to zero two weeks later. The number of tweets specifically mentioning Cloudflare (such as their official account, as well as those for jobs, help, and developers) was around 200 in the beginning but decreased over time, and dropped to zero after they took action. This lasted for roughly one month until after the forum recovered: we see around 400 tweets about Cloudflare, twice the previous peak, and accounting for almost all such tweets that day. However, having read through these tweets, we found they were mainly associated with another campaign, namely #stopdoghate. We thus conclude this was a short-lived outlier instead of a genuine Kiwi Farms-related peak.

Figure 4: Number of daily posting activity, threads, and active users on Kiwi Farms, its Telegram channel, and Lolcow Farm, as well as major disruptions and displacement from Kiwi Farms to other platforms. The red star indicates the Streisand effect.

Figure 5: Number of daily estimated visits and the fragmentation from the primary domain to previously abandoned alternatives. We see non-zero traffic to the primary domain when the forum was down; presumably Similarweb counted unsuccessful attempts.
The trans activist who launched the campaign was engaged at the beginning but then became much less active in posting new tweets, although she still replied to people. Her posting volume was, however, trivial compared to the overall numbers: she made only four tweets on the day the campaign started, the number then dropped quickly to only one on 4th September after Cloudflare took action, and zero thereafter. It suggests that although she sparked the campaign, she might not be the primary maintainer. We see no notable peak of tweets after the forum was completely shut down, suggesting a clear loss of interest in pursuing the campaign, both from people posting tweets and people reacting to tweets. The community seemed to get bored quickly after a few weeks when they appeared to have gotten what they wanted - _'Kiwi Farms is dead, and I am moving on to the next campaign'_, tweeted the activist.
### _The Industry Response_
There is no quantitative data to cover the impact on industry actors, so we switch to qualitative analysis and read through their public statements. Cloudflare stated their abuse policies on 31 August without directly mentioning the Twitter campaign [67]. In summary, the firm offers traffic proxy and DDoS protection to lots of (mostly non-paid) sites regardless of the content hosted, including Kiwi Farms. The firm maintains that abusive content alone is not an issue, and the forum - while immoral - still deserves the same protection as other customers, as long as it does not violate US law.
Although Cloudflare are entitled to refuse business from Kiwi Farms, they initially took the view that doing so because of its content would create a bad precedent, leading to unintended consequences on content regulation and making things harder for Cloudflare. This could affect the whole Internet, as Cloudflare handles a large proportion of network traffic. They did not want to get involved in policing online content, but if they had to do it they would rather do so in response to a court order instead of popular opinion. The firm previously had dropped the neo-Nazi website Daily Stormer [12] and the extremist board 8chan [13] because of their links with terrorist attacks and mass murders, and a false claim about Cloudflare's secret support. They also claimed that dropping service for Kiwi Farms would not remove the hate content, but only slow it down for a while.
Nevertheless, Cloudflare did a U-turn a few days later on 3 September 2022, announcing that they would terminate service for Kiwi Farms[26]. They explained that the escalation of the pressure campaign led to users being more aggressive, which might lead to crime. They reached out to law enforcement in multiple jurisdictions regarding potential criminal acts, but as the legal process was too slow compared to the escalating threat, they made the decision alone [26, 30]. They still claimed that following a legal process would be the correct policy, and denied that the decision was the direct result of community pressure. Cloudflare's action also inadvertently led to the termination of a neo-Nazi group in New Zealand, as it was hosted by the same company as the forum [68].
DDoS-Guard's statements about the incident told a similar story [27]. Although they can restrict access to their customers if they violate the acceptable use policy, content moderation is not their duty (except under a court order) so they do not need to determine whether every website they protect violates the law. DiamWall took the same line; they claimed that they are not responsible, and are unable to moderate content hosted on websites [28]. They also maintained that terminating services in response to public pressure is not good policy, but the case of Kiwi Farms was exceptional due to its 'revolting' content. They also noted that their actions could only delay things but not fix the root cause, as the forum could find another provider to get back online. DiamWall's statement was removed afterwards, and it is now only accessible through online archives. It is understandable that infrastructure providers such as Cloudflare and DDoS-Guard do not want to get involved in content moderation the way Facebook and Google have to, as moderation is complex, contentious and expensive.
Figure 6: The number of daily tweets and reactions made by the community about the campaign. Figure scales are different.
### _The Forum Operators_
The disruption of Kiwi Farms led to a cat-and-mouse game where tech firms tried to shut it down by various means while the forum operators tried to get it back up. The forum needed DDoS protection to hide its original IP address and evade cyberattacks, so the operators first switched their third-party DDoS protection to DDoS-Guard, then DiamWall, yet these firms also withdrew their service. They then attempted to build an anti-bot mechanism themselves based on HAProxy - open-source software to stop bots, spam, and DDoS using proof-of-work [69] - and claimed to be resilient to thousands of simultaneous connections. They also changed hosting providers to VanwaTech and eventually their own firm 1776 Solutions, and attempted to route their traffic through other ISPs. They were actively maintaining infrastructure, fixing bugs, and giving instructions to users to deal with their passwords when the forum experienced a breach. The operators' effort seemed to be competent and consistent.
They posted 107 Telegram announcements during the period, mostly about when and where the forum was going to recover, the ongoing problems (e.g., DDoS attacks, industry blocks), and their plan to fix them (see Figure 7). This channel was activated after the Twitter campaign; the admins were very active, for example, sending seven consecutive messages on 23 August that mostly concerned the large DDoS attack on that day. The second peak was on 6 September after Cloudflare and DDoS-Guard's withdrawal of service, mostly about forum availability. The number of announcements then gradually decreased, especially after the second recovery, with many days having no messages. A DDoS attack hitting the forum during Christmas 2022 caught the admins' attention for a while. Their activity was inversely correlated with the forum's stability; they were less active when the site was up and running stably or when there were no new incidents.
### _The Forum Members_
People sharing the same passion naturally coalesce into communities, in which some key actors may play a crucial role in influencing the ecosystem [70, 71, 72]. Kiwi Farms activity is highly skewed, with around 80% of pre-disruption posts made by the 8.96% most active users (5 158), while the remaining 20% of posts were made by the 91.04% less active users (52 417). There was around a 30% drop in the number of users after the disruption (as seen in Figure 4); around half of the key users (48.78%) remained engaged while only 13.05% of the less active stayed (86.95% left). There were 1 564 newcomers after the disruption. We focus on those active after the disruption, namely the 'core survivors', 'casual survivors', and 'newcomers'. On average, before the disruption, each 'core survivor' posted 22.2 times more than each 'casual survivor' (1800.99 vs 80.94 posts), while their active period (between their first post and last post) was around 2.5 times longer (1307.84 vs 516.90 days).
Footnote 12: We make use of the 80/20 rule – the Pareto principle [73].
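A sketch of how such a cohort split can be computed from per-user post counts is given below; the input schema is hypothetical, and the cut-off date follows the 3 September separation used later.

```python
import pandas as pd

# Hypothetical schema: one row per post, with 'user' and 'date' columns.
posts = pd.read_csv("kiwifarms_posts.csv", parse_dates=["date"])
cutoff = pd.Timestamp("2022-09-03")

pre, post = posts[posts["date"] < cutoff], posts[posts["date"] >= cutoff]

# Users ranked by pre-disruption activity; the top group covering roughly 80% of posts.
counts = pre["user"].value_counts()
cumshare = counts.cumsum() / counts.sum()
core = set(cumshare[cumshare <= 0.8].index)
casual = set(counts.index) - core

survivors = set(post["user"])
core_survivors, casual_survivors = core & survivors, casual & survivors
newcomers = survivors - core - casual
print(len(core_survivors), len(casual_survivors), len(newcomers))
```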
#### V-D1 Posting Activity
Before the takedown, each core survivor made about 3.5 posts per day on average, while it was around 3 afterwards - see Figure 8. The activity of the other survivors appears consistent with the pre-disruption period; their average posts were at around 2 per day before the incident and almost unchanged afterwards. These figures suggest that the decreasing posting volume seen in Figure 4 was mainly due to users leaving the forum, instead of surviving ones largely losing interest - they engaged back quickly after the forum recovered. Newcomers posted slightly less than casual survivors before the forum was completely down on 18 September (less than 2 posts per day), yet their average posting volume then increased quickly. This suggests that the disruption, besides removing a very large proportion of old casual users, drew in many new users who then became roughly as active as the core survivors.

Figure 8: Number of average posts per day made by surviving actors and newcomers, who posted at least once after the event.

Figure 7: The number of Telegram announcements posted by the forum operators per day since the channel was created.

Figure 9: Average levels of toxicity, identity attack, and threat of survivors and newcomers before and after the disruption.
#### V-D2 Toxicity Levels
Kiwi Farms has the most toxic posts among 12 extremist forums measured in previous work [21]. We thus further examine the toxicity of posts made by the surviving actors and newcomers, before and after the disruption. Figure 9 shows the average levels of _toxicity_, _identity attack_ and _threat_ of core survivors, casual survivors, and newcomers by day. We separate the pre-disruption and post-disruption periods at 3 September, when Cloudflare took action.
In general, the _toxicity_, _identity attack_, and _threat_ scores were rather low as most postings are non-toxic (despite some having very high scores). There were small changes in the average scores of surviving actors; notably, peaks occurred 2 days after the campaign started on Twitter, with the average scores increasing significantly to around 30-50%, especially for _toxicity_ and _identity attack_. However, these dropped quickly a couple of days later and returned to normal levels.
Newcomers, on the other hand, showed a significant increase in _toxicity_ and _identity attack_ during the first two weeks after the disruption took place (about 2-2.5 times higher), largely surpassing surviving actors. Their scores for _threat_ did not increase at that time but peaked after the forum first recovered on 27 September, at around 2 times higher. These patterns suggest that while the surviving members became more toxic when their community was under attack, new users became much more toxic for a few weeks after joining the discussion before declining gradually to the same levels as old users. This is in line with the recent finding that users moving to other platforms can become more toxic than before [36].
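Given per-post Perspective scores and a user-to-cohort mapping (for instance built as in the previous sketch), the daily averages behind Figure 9 can be approximated as follows; the column names and the tiny mapping are illustrative.

```python
import pandas as pd

# Hypothetical schema: one row per post with user, date, and the three Perspective scores.
posts = pd.read_csv("kiwifarms_posts_scored.csv", parse_dates=["date"])

cohort_of = {"alice": "core survivor", "bob": "casual survivor", "carol": "newcomer"}  # illustrative
posts["cohort"] = posts["user"].map(cohort_of).fillna("left")

daily = (posts
         .groupby([posts["date"].dt.date, "cohort"])[["toxicity", "identity_attack", "threat"]]
         .mean()
         .unstack("cohort"))
print(daily.head())
```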
#### V-D3 Social Interactions
To measure how these survivors interact with each other, we build a social interaction network among Kiwi Farms members over time. We consider each active user as a node, with an edge between two users if they posted in the same thread (weighted by the number of such interactions) [74]. We then explore changes in the network structure with a focus on Degree Centrality, which indicates how well-connected a user is over the entire network [75].
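A minimal networkx version of this construction is sketched below; the thread-to-poster mapping is a toy placeholder, and a naive pairwise loop like this would not scale to the real 59k-node, 149M-edge network without optimisation.

```python
import itertools
import networkx as nx

# Toy placeholder: thread id -> set of users who posted in it.
threads = {
    "t1": {"alice", "bob", "carol"},
    "t2": {"alice", "dave"},
    "t3": {"bob", "carol", "dave"},
}

G = nx.Graph()
for users in threads.values():
    for u, v in itertools.combinations(sorted(users), 2):
        # Edge weight counts how many threads the pair co-posted in.
        if G.has_edge(u, v):
            G[u][v]["weight"] += 1
        else:
            G.add_edge(u, v, weight=1)

centrality = nx.degree_centrality(G)   # fraction of other users each user is connected to
print(sorted(centrality.items(), key=lambda kv: -kv[1]))
```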
The network had developed stably before the disruption, with around 55.3k nodes and 131.3M edges on 1 July, reaching around 57.2k nodes and 137.6M edges just before the Twitter campaign started (see Figure 10). There was a rapid increase in both nodes and edges shortly after the Twitter campaign, suggesting that the campaign drew more actors into interacting with others. The Cloudflare and DDoS-Guard actions paused the network for a few weeks, yet it resumed shortly after the forum's recovery. Notably, edges then grew considerably faster than nodes (it was the opposite previously), indicating that people were getting more connected. As of 31 December 2022, the network size is 59.1k nodes and 149.3M edges.
Overall, core users are better connected than casual users. The Twitter campaign largely boosted the centrality of both core and casual survivors. Before that, while core survivors were getting more centralised over time, casual survivors were becoming less centralised. But after the campaign on Twitter, the centralisation of both steadily increased. Newcomers came into play quickly afterwards and the forum recovery also made them more centralised.
#### V-D4 Discussion of the Incident
We examine how users talked about the two major involved parties (Kiwi Farms and Cloudflare) during the period by extracting posts containing the case-insensitive keywords _'kiwifarm'_, _'kiwi farm'_, _'cloudflare'_, and _'cloud flare'_ from Kiwi Farms, its Telegram channel, and Lolcow Farm. Table I shows that discussions about the two parties were highly skewed and depended significantly on the platform. Telegram users appeared to discuss things related to Kiwi Farms far more than Cloudflare (13.3 times more), while the ratios were less skewed for Kiwi Farms and Lolcow Farm, at 6.7 and 7.6, respectively. Our qualitative look at messages posted on the channel reveals that people cared more about recovering the forum than about solely blaming Cloudflare - although they did blame it when the disruption happened and when the forum recovered.
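The counts in Table I follow from simple case-insensitive substring matching over each platform's posts; a sketch with a hypothetical dataframe holding one post per row in a `text` column:

```python
import pandas as pd

KIWI = ("kiwifarm", "kiwi farm")
CLOUDFLARE = ("cloudflare", "cloud flare")

def mention_counts(df: pd.DataFrame) -> dict:
    """Count posts mentioning each party, plus the share of all posts."""
    text = df["text"].fillna("").str.lower()
    kiwi = text.apply(lambda t: any(k in t for k in KIWI))
    cf = text.apply(lambda t: any(k in t for k in CLOUDFLARE))
    return {
        "kiwi farms": (int(kiwi.sum()), kiwi.mean()),
        "cloudflare": (int(cf.sum()), cf.mean()),
        "both": (int((kiwi & cf).sum()), (kiwi & cf).mean()),
    }

demo = pd.DataFrame({"text": ["Cloudflare finally dropped KiwiFarms", "an unrelated post"]})
print(mention_counts(demo))
```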
Although these posts accounted for a trivial contribution to the total posting volume on all three platforms as shown in Figure 4, most happened after the Twitter campaign, with almost no discussion before. The topic was popular for a short period, as shown in Figure 12. Users on both forums started discussing the incident shortly after the campaign started on 22 August. The topic was energised on both forums after Cloudflare's action on 3 September, peaking on 4 September on Kiwi Farms with over 400 and 600 posts about Kiwi Farms and Cloudflare (around 5% and 7.5% of all posts on that day), respectively. After Kiwi Farms activity was significantly reduced due to DDoS-Guard's action on 5 September, posts mentioning Kiwi Farms and Cloudflare on Lolcow Farm peaked at around 80 and 20, respectively.13 Telegram activity regarding the incident was a bit different, as comments were only allowed after the forum was completely down; it followed the same trends as overall activity, with a peak of discussion about Kiwi Farms happening largely when the forum was inaccessible, as part of the forum discussion had moved there.

Figure 11: The degree centrality of survivors and newcomers in the network over time. Figures are in different scales.

Figure 10: The number of nodes and edges in the social interaction network made by Kiwi Farms members over time.
Footnote 13: The numbers for Lolcow Farm are typically lower than Kiwi Farms as Lolcow Farm is smaller and centred on images instead of text. We do not collect images for safety and ethical reasons, but we believe the trends observed are indicative, if not fully reliable.
Discussion mentioning Kiwi Farms greatly exceeded that mentioning Cloudflare until the day Cloudflare took action (see the first graph in Figure 12). The pattern seen on Lolcow Farm suggests that the attention toward the incident was reflected there, although the peak did not correlate with the overall volume observed in Figure 4 as this contribution is trivial compared to the total. There were almost no posts about Cloudflare after Kiwi Farms became completely inaccessible, but there were still around 20 posts about Kiwi Farms seen on Lolcow Farm during that week. While nothing changed on Kiwi Farms during the second recovery, there was an increase in posts on Lolcow Farm about the incident, presumably as people there got the news.
Overall, attention on Kiwi Farms, Telegram, and Lolcow Farm was directed to the incident by the Twitter campaign, with posting volume peaking after the industry action. We believe this shows a genuine effect as none of the users there discussed Cloudflare and Kiwi Farms beforehand. However, the effect was temporary and almost dropped to the pre-disruption level after the second recovery: it lasted for a few days on Kiwi Farms, around one week on Lolcow Farm (partly due to many domains of Kiwi Farms being down while Lolcow Farm was still active), and a few weeks on Telegram. Users' interest was fleeting; they largely stopped talking about the incident after a few weeks.
## VI Tensions, Challenges, and Implications
The disruption analysed in this paper could be the first time a number of infrastructure firms were involved in a collective effort to shut down a website. While deplatforming can reduce the spread of abusive content and safeguard people's mental and physical safety, and is already routine on social-media platforms like Facebook, doing so without due process raises a number of philosophical, ethical, legal, and practical issues. For this reason Meta set up its own Oversight Board.
### _The Efficacy of the Disruption_
The disruption was more effective than previous DDoS attacks on the forum, as observed from our datasets. Yet the impact, although considerable, was short-lived. While part of the activity was shifted to Telegram, half of the core members returned quickly after the forum recovered. And while most casual users were shaken off, others turned up to replace them. Cutting forum activity and users by half might be a success if the goal of the campaign is just to hurt the forum, but if the objective was to "drop the forum", it has failed.
We are continuing to monitor the forum; it seems to be gradually recovering. There is a lack of data on real-world harassment caused by forum members, such as online complaints or police reports, so we are unable to measure if the campaign had any effect in mitigating the physical and mental harm inflicted on people offline.
One lesson is that while repeatedly disrupting digital infrastructure might significantly lessen the activity of online communities, it may just displace them, which has also been noted in previous work [76]. Campaigners can also get bored after a few weeks, while the disrupted community is more determined to recover their gathering place. As with the re-emergence of extremist forums like 8chan and Daily Stormer, Kiwi Farms is now back online. Deplatforming alone may be insufficient to disperse or suppress an unpleasant online community in the long term, even when concerted action is taken by a series of tech firms over several months. It may weaken a community for a while by fragmenting their traffic and activity, and scare away casual observers, but it may also make core group members even more determined and recruit newcomers via the Streisand effect, whereby attempts at censorship can be self-defeating [11, 77].

| Platforms | Mentioning Kiwi Farms | Mentioning Cloudflare | Mentioning both parties |
| --- | --- | --- | --- |
| Kiwi Farms | 10 096 (1.45%) | 1 515 (0.22%) | 300 (0.04%) |
| Telegram | 3 794 (0.72%) | 286 (0.05%) | 44 (0.01%) |
| Lolcow Farm | 1 494 (0.31%) | 197 (0.04%) | 44 (0.01%) |

Table I: Number of posts mentioning the two major involved parties during the period, with proportions of the total posts.

Figure 12: Discussion of the event on Kiwi Farms, its Telegram channel, and Lolcow Farm. Figure scales are different.
### _Censorship versus Free Speech_
One key factor may be whether a community has capable and motivated defenders who can continue to fight back by restoring disrupted services, or whether they can be somehow disabled, whether through arrest, deterrence or exhaustion. This holds whether the defenders are forum operators or distributed volunteers. So under what circumstances might the police take decisive action to decapitate an online forum, as the FBI did for example with Silk Road?
If some of a forum's members break the law, are they a dissident organisation with a few bad actors, or a terrorist group that should be hunted down? Many troublesome organisations do attract hot-headed young members, and activists ranging from animal-rights and climate-change protesters through to trade union organisers do occasionally fall foul of the law. But whether they are labelled as terrorists or extremists is often a political matter. Taking down a website on which a whole community relies will often be hard to defend as a proportionate and necessary law-enforcement action. The threat of legal action can be countered by the operator denouncing whatever specific crimes were complained of. In this case, the Kiwi Farms founder denounced SWAT harassment and other blatant criminality [50]. Indeed, a competent provocateur will stop just short of the point at which their actions will call down a vigorous police response.
The freedom of speech protected by the US First Amendment [78] is in clear tension with the mental and physical security of harassment victims. The Supreme Court has over time established tests to determine what speech is protected and what is not, including clear and present danger [79], a sole tendency to incite or cause illegal activity [80], preferred freedoms [81], and compelling state interest [82]; however, the line drawn between them is not always clear-cut. Other countries are more restrictive, with France and Germany banning Nazi symbolism and Turkey banning material disrespectful of Mustafa Kemal Ataturk. In the debates over the Online Safety Bill currently before the UK Parliament, the Government at one point proposed to ban 'legal but harmful' speech online, while not making these speech acts unlawful face-to-face. These proposals related to websites encouraging eating disorders or self-harm. Following the tragic suicide of a teenage girl, tech firms are under pressure to censor such material in the UK using their terms of service or by tweaking their recommendation algorithms.
There are additional implications in taking down platforms whose content is harmful but not explicitly illegal. Requiring firms to do this, as was proposed in the Online Safety Bill, will drastically expand online content regulation. The UK legislation hands the censor's power to the head of Ofcom, the broadcast regulator, who is a political appointee. It will predictably lead to overblocking and invite abuse of power by government officials or big tech firms, who may suppress legitimate voices or dissenting opinions. There is an obvious risk of individuals or groups being unfairly targeted for political or ideological reasons.
### _The Role of Industry in Content Moderation_
The rapid increase of cybercrime-as-a-service throughout the 2010s makes attacks easier than ever. A teenager with as little as $10 can use a DDoS-for-hire service to knock your website offline [83], so controversial websites depend on the grace and favour of a large hosting company or a specialist DDoS prevention contractor. This is just one aspect of a broader trend in tech: that the Internet is becoming more centralised around a small number of big firms, ranging from online social platforms, hosting companies, transit networks, to service providers and exchange points [84]. Many of them claim to be committed to fighting hate, harassment, and abuse yet some are disproportionately responsible for serving bad online content [76], and the effort they put into the fight is variable [85, 86]. Now that activists have pressured infrastructure providers to act as content moderators, policymakers will be tempted too. Some may stand up to political or social pressure, because moderation is both expensive and difficult, but others may fold from time to time because of political pressure or legal compulsion. This would undermine the end-to-end principle of the Internet, as enshrined for example in CDA s 230 in the USA and in the EU's Net Neutrality Law [87].
Private companies must comply and remove illegal content from their infrastructure when directed to do so by a court order. However, deplatforming Kiwi Farms or any other customers does not violate the principle of free speech. It is essentially a contractual matter; they have the right to cease their support for a website that violates their policies. Infrastructure providers may occasionally need to work expediently with law enforcement in the case of an imminent threat to life. Most providers have worked out ways of doing this, but the mechanisms can be too sluggish. Cloudflare attempted to collaborate with law enforcement to sort out the case of Kiwi Farms, yet the process could not keep up with the escalating threats and it ended up taking unilateral action, relying on its terms of service [26]. In an ideal world, we would have an international legal framework for taking down websites that host illegal content or that promote crime; unfortunately, this framework does not exist.
The Budapest Convention criminalises some material on which all states agree, such as child sex abuse images, but even there the boundaries are contested [88]. Online drug markets such as Silk Road and Hansa Market have been taken down because of other laws - drug laws - that also enjoy international standardisation and collaboration. Copyright infringement also gets the attention of international treaties and coordinated action by tech majors, though civil law plays a greater role here than criminal law. Then there is material about which some
states feel strongly but others do not; 'one man's freedom fighter is another man's terrorist'. And then there's a vast swamp of fake news, animal cruelty, conspiracy theories, and other material that many find unpleasant or distressing, and which social networks moderate for the comfort of both their users and their advertisers. Legislators occasionally call for better policing of some of this content.
### _Policy Implications_
The UK Online Safety Bill proposes a new regulator who will be able to apply for a court order mandating that tech firms disrupt an objectionable online activity [43]. One might imagine Ofcom deciding to take down Kiwi Farms if their target had been a resident of Britain rather than Canada, and going to the various tech firms that were involved in the disruption we describe here, serving them one after another with orders signed by a judge in the High Court in London. Even if all the companies were to comply, rather than appealing or just ignoring the court altogether, it is hard to see how such an operation could be anything like as swift, coordinated or effective as the action taken on their own initiative by tech companies that we describe here. Where the censor's OODA loop - the process by which it can observe, orient, decide and act - involves a government agency assessing the effects of each intervention and then going to court to order the next one, the time constant would stretch from hours to months. And in any case, government interventions in this field are often significant but rather short-lived [14, 15]. One reason they can be effective is that the maintainer of a blatantly illegal website may be arrested and jailed, as happened with Silk Road. With a forum like Kiwi Farms, whose operator has denounced criminal acts perpetrated via his infrastructure [50], that option may simply not be available.
Previous work has also explored why governments are less able to take down bad sites than private actors [11]; that work analysed single websites with clearly illegal content, such as those hosting malware, phishing lures or sex-abuse images. This study shows why taking down an active community is likely to be even harder. Even when several tech firms roll their sleeves up and try to suppress a community some of whose members have indulged in crime and against whom there is an industry consensus, the net effect may be modest at best. Our case study may be the best result that could be expected for online censorship, but it only cut the users, posts, threads and traffic by about half. Our findings suggest that using content moderation law to suppress an unpleasant online community may be very challenging.
## VII Conclusion
Online communities may not only act as a discussion place but provide mutual support for members who share common values. For some, it may be where they hang out; for others, it may become part of their identity. Legislators who propose to ban an online community might consider precedents such as Britain's ban on Provisional Sinn Fein from 1988-94 due to its support for the Provisional IRA during the Troubles, or the bans on the Muslim Brotherhood enacted by various Arab regimes.14 Declaring a community to be illegal and thus forcing it underground may foster paranoid worldviews, increase signals associated with toxicity and radicalisation [44, 36] and have many other unintended consequences. The Kiwi Farms disruption, which involved a substantial effort by the industry, is perhaps the best outcome that could be expected even if the censor were agile, competent and persistent. Yet this has demonstrated that merely trying to deplatform an active online community is not enough to deal effectively with online hate and harassment.
Footnote 14: During the Sinn Fein ban, it was illegal to transmit the voice or image of one of their spokesmen in Britain, so the BBC and other TV stations simply employed actors to read the words of Gerry Adams and Martin McGuinness.
We believe the harms and threats associated with online hate communities may justify action despite the right to free speech. But within the framework of the EU and the Council of Europe which is based on the European Convention on Human Rights, such action will have to be justified as proportionate, necessary and in accordance with the law. It is unlikely that taking down a whole community because of a crime committed by a single member can be proportionate. For a takedown to be justified as necessary, it must also be effective, and this case study shows how high a bar that could be. For a takedown to be in accordance with the law, it cannot simply be a response to public pressure. There must be a law or regulation that determines predictably whether a specific piece of content is illegal, and a judge or other neutral finder of fact would have to be involved.
The last time a Labour government won power in Britain, it won on a promise to be 'Tough on Crime, and Tough on the Causes of Crime'. Some scholars of online abuse are now coming to a similar conclusion that the issue may demand a more nuanced approach [3, 21]: as well as the targeted removal of content that passes an objective threshold of illegality, the private sector and governments should collaborate to combine takedowns with measures such as education and psycho-social support [89]. And where the illegality involves violence, it is even more vital to work with local police forces and social workers rather than just attacking the online symptoms [88].
There are multiple research programmes and field experiments on effective ways to detox young men from misogynistic attitudes, whether in youth clubs and other small groups, at the scale of schools, or even by gamifying the identification of propaganda that promotes hate. But most countries still lack a unifying strategy for violence reduction [90]. In both the US and the UK, for example, while incel-related violence against women falls under the formal definition of terrorism, it is excluded from police counterterrorism practice, and the politicisation of misogyny has made this a tussle space in which political leaders and police chiefs have difficulty in taking effective action. In turbulent debates, policymakers should first ask which tools are likely to work, and it is in this context that we offer the present case study.
## Acknowledgments
We are grateful to Richard Clayton, Yi Ting Chua, Ben Collier, Tina Marjanov, Konstantinos Ioannidis, Ilia Shumailov, and our colleagues at the Cambridge Cybercrime Centre for their useful feedback in the early draft of the paper.
|
2307.00208 | Coupling of acoustic phonon to a spin-orbit entangled pseudospin | We consider coupling of acoustic phonon to pseudospins consisting of
electronic spins locked to orbital angular momentum states. We show that a
Berry phase term arises from projection onto the time-dependent lowest energy
manifold. We examine consequences on the phonon modes, in particular mode
splitting, induced chirality and Berry curvatures under an external magnetic
field which Zeeman couples to the pseudospin. | S. -K. Yip | 2023-07-01T03:17:06Z | http://arxiv.org/abs/2307.00208v2 | # Coupling of acoustic phonon to a spin-orbit entangled pseudospin
###### Abstract
We consider coupling of acoustic phonon to pseudospins consisting of electronic spins locked to orbital angular momentum states. We show that a Berry phase term arises from projection onto the time-dependent lowest energy manifold. We examine consequences on the phonon modes, in particular mode splitting, induced chirality and Berry curvatures under an external magnetic field which Zeeman couples to the pseudospin.
## I Introduction
How phonons couple to a magnetic field has received a lot of attention recently, with a particular boost due to the interest in thermal Hall effects and the question of possible phonon contributions [1; 2; 3; 4; 5; 6; 7; 8; 9; 10]. In this paper, we investigate a mechanism of phonon-magnetic field coupling whereby an acoustic phonon can acquire a Berry curvature, and the otherwise degenerate phonon modes (in the absence of this coupling) would be mixed, producing chiral modes with finite frequency splitting. The general mechanism generating such coupling between the phonon and the magnetic field is by now well-appreciated. While in the case of optical phonons in strongly ionic solids, the coupling can be directly comprehended as due to motion of the charged ions [11], in general it has to be understood as a Berry phase effect [12; 13; 14; 15; 16]. Phonons are associated with the motion of the atoms or ions in the solid. The electrons, on the other hand, not only provide an effective scalar potential between the ions given in the traditional Born-Oppenheimer approximation, but also carry a Berry phase factor depending on the ionic coordinates. This phase factor, after the electron degrees of freedom have been eliminated, gives rise to an effective vector potential [12; 13; 14; 15; 16; 17] and hence a Lorentz force for the motions of the ions or nuclei. Traditional first-principles phonon calculations in solids based on density functional theory [18] take into account electron-phonon interactions only via the "interatomic force constant" matrix and thus miss the Berry phase contribution mentioned above, though more recent works (e.g. [16]) have allowed for this contribution. The generation of a gauge field on one subsystem via projecting out the other has also been discussed in other branches of physics (e.g. [19; 20; 21]).
We shall here consider phonons coupling to the magnetic field via spins. We shall primarily consider localized spins in the paramagnetic regime, where the spins are not ordered, or may even be non-interacting, with finite polarization only due to the externally applied magnetic field. The coupling mechanism we consider is different from those investigated in the literature, such as the magnetic anisotropy energy [22] in magnetically ordered systems, or modifications of spin-spin interaction energies due to bond-length or angle changes in the presence of phonons. The specific systems we shall examine are those where the "spins" are actually pseudospins, with electronic spins entangled with orbital angular momentum states, for example, Ru\({}^{+3}\) ions in \(\alpha-\)RuCl\({}_{3}\)[3; 4; 6; 10], or Ir\({}^{+4}\) in Sr\({}_{2}\)IrO\({}_{4}\)[23; 24; 25; 26], with (Kramers degenerate) ground states well separated from excited states [27]. Systems with such strongly spin-orbit entangled pseudospins have themselves attracted strong recent attention due to interesting physics such as the spin-orbit assisted Mott transition, unusual interactions between pseudospins, possible spin liquids, multipolar order, etc. [28]. In the presence of an acoustic phonon, the local environment becomes time dependent. If the pseudospin is not excited, then this pseudospin must remain within the ground state manifold, though one defined according to this instantaneous environment. This time dependence then generates an effective gauge field for the ionic motion. Since the pseudospin Zeeman couples to the magnetic field, a direct phonon-magnetic field coupling would result, providing the mechanism we desire in the first paragraph. Explicitly we shall be examining d-electron systems in a cubic environment. However, the mechanism seems to be quite general when both crystal field splitting and strong spin-orbit coupling are present and the phonon frequencies lie within suitable ranges. Since a projection into a subspace is necessary, our mechanism is only applicable to such strongly spin-orbit entangled systems.
Our mechanism to be discussed here is distinct from the one which has also been investigated for spin-orbit entangled pseudospins, in particular for f-electron systems (e.g. [29; 30; 31; 32]) coupling to optical phonons. There, the coupling, termed magneto-elastic interaction in [29; 30; 31; 32] (but to be distinguished from the magneto-elastic couplings which have been discussed for magnetostriction or for acoustic waves in, e.g., [22; 33]), arises from the modification of crystal fields acting on the pseudospins in the presence of the optical phonons. These phonon-pseudospin couplings are parameterized by coupling constants which thus describe the extent to which the crystal fields are modified due to the displacements of the ions surrounding the pseudospin under discussion. In this mechanism, the splitting of degenerate phonon modes by the magnetic field is generated by virtual transitions between different energy manifolds [29; 30]. In contrast, our mechanism arises from phase factors generated from projection onto the time-dependent pseudospin ground
state manifold. As we shall see, the "coupling constant" depends entirely on information about the ground state manifold, and is in fact a factor related to the geometric structure of the pseudospin.
The structure of the rest of this paper is as follows. In Sec. II we introduce our specific model, and then derive the phonon-pseudospin coupling. The effect of this coupling on the sound modes frequencies is evaluated in Sec. III. In Sec. IV we evaluate the Berry curvatures. We end with some order of magnitude estimates and discussions in Sec. V.
## II Model
To be specific, consider \(\mathrm{Ir}^{+4}\) ions in \(\mathrm{Sr}_{2}\mathrm{IrO}_{4}\) or \(\mathrm{Ru}^{+3}\) ions in \(\mathrm{RuCl}_{3}\), both with five \(d\) electrons (see, e.g., [23; 24; 25; 26]). In both cases, the ions are situated within an approximately cubic environment formed by the \(\mathrm{O}^{-2}\) and \(\mathrm{Cl}^{-1}\) ions, respectively. The \(d\)-electron energy levels are crystal-field split into a \(t_{2g}\) and an \(e_{g}\) manifold. Only the \(t_{2g}\) manifold, consisting of the orbitals usually labelled as \(xy\), \(yz\), and \(zx\), is relevant; together with the electronic spin \(\uparrow\) and \(\downarrow\) degrees of freedom, it forms six levels. The spin-orbit interaction further splits these six levels into one quartet, usually labelled as \(j_{eff}=3/2\), which is occupied, and a Kramers doublet, usually labelled as \(j_{eff}=1/2\), which is singly occupied. We shall write the wavefunctions for the two levels in this doublet as [34]
\[\mid\Uparrow\rangle = \frac{-i}{\sqrt{3}}\ \left[\left|xy\uparrow\right\rangle+\left|yz \downarrow\right\rangle+i\left|xz\downarrow\right\rangle\right]\] \[\mid\Downarrow\rangle = \frac{i}{\sqrt{3}}\ \left[\left|xy\downarrow\right\rangle-\left|yz \uparrow\right\rangle+i\left|xz\uparrow\right\rangle\right]\, \tag{1}\]
forming a time-reversal pair (we use the convention, under time-reversal, \(\mid\uparrow\rangle\rightarrow\mid\downarrow\rangle\), \(\mid\downarrow\rangle\rightarrow-\mid\uparrow\rangle\), and similarly, \(\mid\Uparrow\rangle\rightarrow\mid\Downarrow\rangle\), \(\mid\Downarrow\rangle\rightarrow-\mid\Uparrow\rangle\)). In the absence of phonons, the orbital parts of the wavefunctions (\(xy\), \(yz\), \(zx\)) as well as the spin parts (\(\uparrow\), \(\downarrow\)) are defined according to fixed axes with respect to the crystal in equilibrium.
Before we consider phonons, let's first note a few relations which we shall use. Denoting the electronic spin operator by \(\vec{s}=\frac{1}{2}\vec{\sigma}\) where \(\vec{\sigma}\) are Pauli matrices operating on the \(\uparrow\) and \(\downarrow\) space, and \(\vec{L}\) the orbital angular momentum operator, their projections onto the subspace of eq. (1) are [35]
\[\vec{s}=-\frac{1}{6}\vec{\tau}\,\qquad\vec{L}=-\frac{2}{3}\vec{\tau}\, \tag{2}\]
where \(\vec{\tau}\) are Pauli matrices within the \(\Uparrow\), \(\Downarrow\) space. The energy change under a magnetic field \(\vec{B}\), \(\mu_{B}(\vec{L}+2\vec{s})\cdot\vec{B}\) (with \(\mu_{B}\) the Bohr magneton), with the operators projected again onto this subspace (_i.e._, thus ignoring other contributions), would then be
\[E_{Z}=\mu_{B}(-\frac{2}{3}-\frac{1}{3})\vec{\tau}\cdot\vec{B}\equiv-g\mu_{B} \frac{\vec{\tau}}{2}\cdot\vec{B} \tag{3}\]
with an effective \(g\) factor of 2 [24; 25]. In the first equality of eq (3), \(-\frac{2}{3}\) arises from \(\vec{L}\) and \(-\frac{1}{3}=2\times(-\frac{1}{6})\) arises from \(2\vec{s}\). Eq (2) implies
\[\vec{L}+\vec{s}=-\frac{5}{6}\vec{\tau}\, \tag{4}\]
a result which we shall use later.
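As a quick consistency check of the projections above, the spin part of eq. (2) can be verified numerically by constructing the doublet of eq. (1) in the six-dimensional \(t_{2g}\otimes\) spin space. The short Python sketch below is purely illustrative and is not part of the original derivation; it checks only the electronic-spin projection \(\vec{s}=-\vec{\tau}/6\) (verifying the orbital part \(\vec{L}=-2\vec{\tau}/3\) would additionally require the \(t_{2g}\) angular momentum matrices).

```python
import numpy as np

# Electron spin operators s = sigma/2 in the (up, down) basis
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

# Six-dimensional basis ordered as |xy up>, |xy dn>, |yz up>, |yz dn>, |xz up>, |xz dn>
def spin_op(s):
    return np.kron(np.eye(3), s)          # identity on the orbital index times spin operator

# j_eff = 1/2 doublet of eq. (1), written in the basis above
up = (-1j / np.sqrt(3)) * np.array([1, 0, 0, 1, 0, 1j])    # |Up>
dn = ( 1j / np.sqrt(3)) * np.array([0, 1, -1, 0, 1j, 0])   # |Down>

P = np.array([up, dn])                    # rows are the two doublet states
for name, s in [("s_x", sx), ("s_y", sy), ("s_z", sz)]:
    proj = P.conj() @ spin_op(s) @ P.T    # 2x2 matrix of <a| s |b> within the doublet
    print(name, np.round(proj, 3))        # expect -tau/6, i.e. entries of magnitude 1/6
```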
### phonon-pseudospin coupling
Consider a long wavelength acoustic phonon, with a spatial and time dependent displacement vector \(\vec{\xi}(\vec{x},t)\). For simplicity, we shall consider a cubic crystal, and remark on modifications for other symmetries later. As is well known, we can decompose the displacement gradients into three components: \(\vec{\nabla}\cdot\vec{\xi}\), \(\frac{1}{2}\vec{\nabla}\times\vec{\xi}\) and the tensor \(\frac{1}{2}\left(\frac{\partial\xi_{j}}{\partial x_{l}}+\frac{\partial\xi_{l}}{\partial x_{j}}\right)-\frac{1}{3}\delta_{jl}\vec{\nabla}\cdot\vec{\xi}\), corresponding to an isotropic expansion (compression), a rotation, and an anisotropic deformation respectively [36]. For low energy excitations of the crystal [27], the electronic state \(\left|\Psi\right\rangle\) of the ion under consideration should still be within the manifold described by eq (1), though in a frame specified by the local environment. Hence at an instantaneous time \(t\), we should have (up to small terms describing the excitations to higher energy levels)
\[\left|\Psi(t)\right\rangle=\alpha^{\prime}_{\Uparrow}(t)\mid\Uparrow^{\prime}(t )\rangle+\alpha^{\prime}_{\Downarrow}(t)\mid\Downarrow^{\prime}(t)\rangle \tag{5}\]
where \(\mid\Uparrow^{\prime}(t)\rangle\) (\(\mid\Downarrow^{\prime}(t)\rangle\)) are states given by eq (1) except with \(x,y,z\), \(\mid\uparrow\rangle,\mid\downarrow\rangle\) replaced by \(x^{\prime},y^{\prime},z^{\prime}\), \(\mid\uparrow^{\prime}\rangle\), \(\mid\downarrow^{\prime}\rangle\), rotated from the former by \(\vec{\Theta}(t)\equiv\frac{1}{2}\vec{\nabla}\times\vec{\xi}(t)\). (The isotropic compression and anisotropic deformation would not affect what we would be discussing below [37] and shall be ignored from now on.) Suppose that our ion is under an external field \(\vec{B}\), and let \(\vec{B}^{\prime}\) be the value of this field in the above mentioned rotating frame. The Schrodinger equation of motion for \(\left|\Psi\right\rangle\), employing eq. (5) and noting the time dependence of the basis functions \(\mid\Uparrow^{\prime}(t)\rangle,\mid\Downarrow^{\prime}(t)\rangle\), implies
\[i\frac{\partial}{\partial t}\left(\begin{array}{c}\alpha^{\prime}_{\Uparrow} \\ \alpha^{\prime}_{\Downarrow}\end{array}\right)=-g\mu_{B}\vec{B}^{\prime}(t)\cdot \frac{\vec{\tau}}{2}\left(\begin{array}{c}\alpha^{\prime}_{\Uparrow}\\ \alpha^{\prime}_{\Downarrow}\end{array}\right)+\left(\begin{array}{cc}-i\langle \Uparrow^{\prime}|\frac{\partial}{\partial t}|\Uparrow^{\prime}\rangle&-i \langle\Uparrow^{\prime}|\frac{\partial}{\partial t}|\Downarrow^{\prime} \rangle\\ -i\langle\Downarrow^{\prime}|\frac{\partial}{\partial t}|\Uparrow^{\prime}\rangle&-i \langle\Downarrow^{\prime}|\frac{\partial}{\partial t}|\Downarrow^{\prime} \rangle\end{array}\right)\left(\begin{array}{c}\alpha^{\prime}_{\Uparrow}\\ \alpha^{\prime}_{\Downarrow}\end{array}\right) \tag{6}\]
Here \(\vec{\tau}\), which rigorously should have been denoted as \(\vec{\tau}^{\prime}\), are Pauli matrices in the \(\Uparrow^{\prime}\), \(\Downarrow^{\prime}\) subspace, but we shall not make this distinction in notations for simplicity. Since
\(\mid\Uparrow^{\prime}(t)\rangle=e^{-i\vec{\Theta}\cdot(\vec{L}+\vec{s})}|\Uparrow\rangle\approx(1-i\vec{\Theta}\cdot(\vec{L}+\vec{s}))|\Uparrow\rangle\), the time derivatives can be evaluated as, e.g., \(-i\langle\Uparrow^{\prime}|\frac{\partial}{\partial t}|\Uparrow^{\prime}\rangle=-(\frac{\partial\vec{\Theta}}{\partial t})\cdot\langle\Uparrow^{\prime}|(\vec{L}+\vec{s})|\Uparrow^{\prime}\rangle\). Using eq (4) (and ignoring a term \(\propto\vec{\Theta}\times\frac{\partial\vec{\Theta}}{\partial t}\) which arises due to the difference between the primed and unprimed \(\Uparrow\), \(\Downarrow\) spaces), we obtain
\[i\frac{\partial}{\partial t}\left(\begin{array}{c}\alpha^{\prime}_{\Uparrow}\\ \alpha^{\prime}_{\Downarrow}\end{array}\right)=\left[-g\mu_{B}\vec{B}^{\prime}(t)\cdot\frac{\vec{\tau}}{2}+\frac{5}{6}\frac{\partial\vec{\Theta}}{\partial t}\cdot\vec{\tau}\right]\left(\begin{array}{c}\alpha^{\prime}_{\Uparrow}\\ \alpha^{\prime}_{\Downarrow}\end{array}\right) \tag{7}\]
It would be more convenient to have an equation of motion involving directly \(\vec{B}\) instead. We observe that \(\vec{B}^{\prime}=\vec{B}-\vec{\Theta}\times\vec{B}\) and hence \(\vec{B}^{\prime}\cdot\vec{\tau}=e^{i\frac{\Theta}{2}\cdot\vec{\tau}}\vec{B} \cdot\vec{\tau}e^{-i\frac{\vec{\Theta}}{2}\cdot\vec{\tau}}\). Introducing
\[\left(\begin{array}{c}\tilde{\alpha}_{\Uparrow}\\ \tilde{\alpha}_{\Downarrow}\end{array}\right)=e^{-i\frac{\Theta}{2}\cdot\vec{ \tau}}\left(\begin{array}{c}\alpha_{\Uparrow}^{\prime}\\ \alpha_{\Downarrow}^{\prime}\end{array}\right) \tag{8}\]
we obtain finally
\[i\frac{\partial}{\partial t}\left(\begin{array}{c}\tilde{\alpha}_{\Uparrow} \\ \tilde{\alpha}_{\Downarrow}\end{array}\right)=\left[-\frac{g\mu_{B}}{2}\vec{B} +\frac{4}{3}\frac{\partial\vec{\Theta}}{\partial t}\right]\cdot\vec{\tau} \left(\begin{array}{c}\tilde{\alpha}_{\Uparrow}\\ \tilde{\alpha}_{\Downarrow}\end{array}\right) \tag{9}\]
where we have again dropped a term involving second powers of \(\Theta\). The factor \(\frac{4}{3}\) arises from \(\frac{1}{2}-(-\frac{5}{6})\) and is thus due to the difference between the rotation matrix for an ordinary spin-1/2 and that for our pseudospin (eq. (4)). The direction of the pseudospin, defined as the expectation value of \(\vec{\tau}\) with the "spin" wavefunction \((\tilde{\alpha}_{\Uparrow},\tilde{\alpha}_{\Downarrow})\), obeys
\[\frac{\partial}{\partial t}\hat{\tau}=\hat{\tau}\times\left[\vec{\omega}_{0}+r \frac{\partial\vec{\Theta}}{\partial t}\right]=\hat{\tau}\times\left[\vec{ \omega}_{0}+\frac{r}{2}(\nabla\times\frac{\partial\vec{\xi}}{\partial t})\right] \tag{10}\]
with \(\vec{\omega}_{0}=g\mu_{B}\vec{B}\) and \(r=-\frac{8}{3}\). The former is the standard precession due to the external field and the second extra term is due to the rotational properties of our basis functions derived above.
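The only part of the lattice distortion that enters the coupling derived above is the local rotation \(\vec{\Theta}=\frac{1}{2}\vec{\nabla}\times\vec{\xi}\). The following illustrative Python sketch (not from the paper; all numbers are arbitrary) decomposes the displacement gradient of a transverse plane wave into the compression, rotation, and shear parts introduced earlier, showing that a transverse wave carries no compression but a finite rotation.

```python
import numpy as np

# Illustrative check: for a transverse plane wave xi = xi0 * eps * cos(q.x - w t),
# the displacement gradient has no compressional part, only rotation + shear.
q   = np.array([0.0, 0.0, 1.2])          # wavevector along z (arbitrary units)
eps = np.array([1.0, 0.0, 0.0])          # transverse polarization along x
x0, t0, xi0, omega = np.array([0.3, -0.2, 0.7]), 0.1, 1e-3, 2.0

phase = q @ x0 - omega * t0
grad = -xi0 * np.outer(eps, q) * np.sin(phase)      # grad[j, l] = d xi_j / d x_l

compression = np.trace(grad)                        # div(xi)
rotation = 0.5 * np.array([grad[2, 1] - grad[1, 2],
                           grad[0, 2] - grad[2, 0],
                           grad[1, 0] - grad[0, 1]])  # Theta = (1/2) curl(xi)
shear = 0.5 * (grad + grad.T) - (compression / 3) * np.eye(3)

print(compression)   # ~0 for a transverse polarization
print(rotation)      # nonzero: this is the Theta that couples to the pseudospin
```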
### Lagrangian
We construct now the Lagrangian for the coupled phonon and pseudospin system. To simplify the writing, when no confusion arises, we shall often just write "spin" for the pseudospin.
First, the acoustic phonon alone can be described by the Lagrangian density
\[L_{0,ph}=\frac{1}{2}\rho_{M}\left(\frac{\partial\xi_{j}}{\partial t}\right)^{ 2}-U_{elastic} \tag{11}\]
where \(U_{elastic}=\frac{1}{2}\left[\lambda_{1}\frac{\partial\xi_{j}}{\partial x_{l}}\frac{\partial\xi_{j}}{\partial x_{l}}+\lambda_{2}\frac{\partial\xi_{j}}{\partial x_{j}}\frac{\partial\xi_{l}}{\partial x_{l}}\right]\) is the elastic energy density. Here \(\rho_{M}\) is the mass density (dimension mass times inverse volume) and sums over repeated indices are implicit. We have also ignored a term \(\lambda_{3}(\frac{\partial\xi_{j}}{\partial x_{j}}\frac{\partial\xi_{j}}{\partial x_{j}})\), which is allowed in cubic symmetry, for simplicity. Its effects will be discussed later. Under this simplification, for a system without coupling to spin, sound velocities are independent of the direction of propagation \(\hat{q}\), with longitudinal and transverse sound velocities given by \(v_{L}=[(\lambda_{1}+\lambda_{2})/\rho_{M}]^{1/2}\) and \(v_{T}=[\lambda_{1}/\rho_{M}]^{1/2}\) respectively.
For the spin, first we recall that, for a spin \(S\) under a magnetic field along \(\hat{z}\), the Lagrangian can be written as [38] \(L_{s}=g\mu_{B}SB\cos\theta+S\cos\theta\frac{\partial\phi}{\partial t}\), where \(\theta\) and \(\phi\) are the angles for the spin direction in spherical coordinates, the first term being from the Zeeman energy and the second a Berry phase term. To produce the equation of motion (10), we need only replace \(g\mu_{B}SB\cos\theta\) by \(\frac{\vec{\tau}}{2}\cdot\left[g\mu_{B}\vec{B}+\frac{r}{2}(\nabla\times\frac{\partial\vec{\xi}}{\partial t})\right]\) (now specializing to pseudospin 1/2). The last term allows us to identify the pseudospin-phonon coupling.
The Lagrangian \(L=L_{ph}+L_{s}+L_{ph-s}\) is a sum of the phonon term (11), the spin term and the phonon-spin coupling term. We then have, for a net effective spin density \(\rho_{s}\) per unit volume,
\[L_{s}=\rho_{s}\frac{1}{2}\left[g\mu_{B}\vec{B}\cdot\hat{\tau}+\cos\theta\frac{ \partial\phi}{\partial t}\right] \tag{12}\]
\[L_{ph-s}=\frac{r\rho_{s}}{4}\left[\hat{\tau}\cdot(\nabla\times\frac{\partial \vec{\xi}}{\partial t})\right] \tag{13}\]
with \(\hat{\tau}\) the net (pseudo-)spin direction. The phonon-pseudospin coupling is dictated by the factor \(r\) derived in the last subsection. As is evident from our derivation above, this coupling arises from the Berry phase due to the rotating frame of reference for the pseudospin in the presence of the transverse acoustic phonon. We remind the readers here that this coupling thus has an entirely different origin from the magneto-elastic coupling discussed by, e.g., [22] for magnetic materials, which describes the change in magnetic energies in the presence of stress.
### effective equation of motion
The equation of motion for \(\hat{\tau}\) was already obtained in (10), which reads, after Fourier transform and linearizing about the equilibrium where \(\hat{\tau}=\hat{z}\),
\[-i\omega\hat{\tau}(\omega,\vec{q})=\omega_{0}(\hat{\tau}\times\hat{z})+\frac{r \omega}{2}\left[\hat{z}\times(\vec{q}\times\vec{\xi})\right] \tag{14}\]
where \(\vec{q}\) is the wavevector and \(\omega\) the angular frequency.
The equation of motion for the displacement is
\[\rho_{M}\omega^{2}\xi_{j}-\frac{r\omega}{4}\rho_{s}(\vec{q}\times\hat{\tau})_{j}= \lambda_{1}q^{2}\xi_{j}+\lambda_{2}q_{l}(q_{j}\xi_{l}) \tag{15}\]
We now study the consequences of eqs. (14) and (15). Equation (14) implies that \(\hat{\tau}_{z}\) is just a constant. The components orthogonal to the field direction (\(j=x,y\)) obey
\[\tau_{j}=\frac{r\omega/2}{\omega_{0}^{2}-\omega^{2}}\left[\omega_{0}(\vec{q} \times\vec{\xi})_{j}-i\omega(\hat{z}\times(\vec{q}\times\vec{\xi}))_{j}\right] \tag{16}\]
Putting this into eq. (15) gives us the equation of motion entirely expressed in terms of \(\xi_{j}\):
\[0=\rho_{M}\omega^{2}\xi_{j}-\left[\lambda_{1}q^{2}\xi_{j}+\lambda_{2}q_{l}(q_ {j}\xi_{l})\right]-\frac{r^{2}\rho_{s}}{8}\frac{\omega_{0}\omega^{2}}{\omega_{ 0}^{2}-\omega^{2}}\left[-q_{z}^{2}\xi_{j}+q_{z}q_{j}\xi_{z}+(q_{z}(q_{l}\xi_{l })-q^{2}\xi_{z})\delta_{jz}\right]-i\frac{r^{2}\rho_{s}}{8}\frac{\omega^{3}}{ \omega_{0}^{2}-\omega^{2}}q_{z}(\vec{q}\times\vec{\xi})_{j} \tag{17}\]
Coupling of the pseudospin to the phonon results in the last two new terms. Here the factor \(\delta_{jz}=1\) if \(j=z\) and vanishes otherwise. We note the factor \(q_{z}\) in the last term, which is generated from the last term in eq. (16). This factor reflects the fact that the time dependent parts of \(\tau\) only have \(x\) and \(y\) components.
We now analyze eq. (17) in two different limits.
## III Sound modes
### small magnetic field: anti-adiabatic regime
For small fields, where \(\omega_{0}\) is much smaller than the phonon frequencies, eq. (17) approximately reads
\[0=\rho_{M}\omega^{2}\xi_{j}-\left[\lambda_{1}q^{2}\xi_{j}+ \lambda_{2}q_{l}(q_{j}\xi_{l})\right]+i\frac{r^{2}\rho_{s}}{8}\omega q_{z}( \vec{q}\times\vec{\xi})_{j} \tag{18}\]
Longitudinal sound, with \(\vec{\xi}\) parallel to \(\vec{q}\), is not affected. Physically, there is no rotation of the environment surrounding the pseudospin in this case. The two polarizations of the transverse sound are coupled via the spins, turning them into circularly polarized ones. Writing \(\vec{\xi}=\xi_{\theta}\hat{\theta}+\xi_{\phi}\hat{\phi}\), we get
\[\left(\begin{array}{cc}\omega^{2}-q^{2}v_{T}^{2}&-i\frac{\rho_{s}r^{2}}{8\rho_{M}}\omega q^{2}\cos\theta_{q}\\ +i\frac{\rho_{s}r^{2}}{8\rho_{M}}\omega q^{2}\cos\theta_{q}&\omega^{2}-q^{2}v_{T}^{2}\end{array}\right)\left(\begin{array}{c}\xi_{\theta}\\ \xi_{\phi}\end{array}\right)=0 \tag{19}\]
Here \(\theta_{q}\) is the angle between \(\hat{q}\) and \(\hat{z}\). To lowest order in the phonon-pseudospin coupling, the frequencies are given by
\[\omega_{\pm}=qv_{T}\left[1\pm Z{\rm cos}\theta_{\rm q}\right] \tag{20}\]
for the modes with right ( \((\xi_{\theta},\xi_{\phi})\propto(1,i)\)) and left ( \((\xi_{\theta},\xi_{\phi})\propto(1,-i)\)) circular polarization, with \(Z\) a \(q\)-dependent dimensionless parameter
\[Z\equiv\frac{\rho_{s}r^{2}q}{16\rho_{M}v_{T}}. \tag{21}\]
Thus the fractional splitting increases with \(q\), reflecting that a shorter wavelength implies a larger rotational motion of the lattice, \(\vec{q}\times\vec{\xi}\), and hence a stronger coupling to our pseudospin. This is different from a naive picture of hybridization between the phonon modes and the Larmor precession of the spins, where the induced splitting would decrease with increasing frequencies away from \(\omega_{0}\). From eq. (20), we see that for \(q_{z}>0\), the lower (higher) frequency mode is left (right)-circularly polarized. The reverse is the case if \(q_{z}<0\). See Fig 1a.
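A minimal numerical sketch of the above splitting, assuming arbitrary illustrative parameter values (they are not taken from the paper): the determinant condition of eq. (19) is solved exactly for the two circular modes and compared with the approximate result of eqs. (20)-(21).

```python
import numpy as np

rho_s, rho_M, r, vT = 0.3, 1.0, -8.0/3.0, 1.0
q, theta_q = 0.05, 0.7

Z = rho_s * r**2 * q / (16 * rho_M * vT)                   # eq. (21)
c = rho_s * r**2 * q**2 * np.cos(theta_q) / (8 * rho_M)    # off-diagonal coefficient of eq. (19), divided by omega

# det = 0 gives  w^2 - q^2 vT^2 = +/- c * w,  a quadratic for each circular mode
w_plus  = ( c + np.sqrt(c**2 + 4 * (q * vT)**2)) / 2
w_minus = (-c + np.sqrt(c**2 + 4 * (q * vT)**2)) / 2

print(w_plus,  q * vT * (1 + Z * np.cos(theta_q)))   # compare with eq. (20), upper sign
print(w_minus, q * vT * (1 - Z * np.cos(theta_q)))   # compare with eq. (20), lower sign
```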
### low frequency: adiabatic regime
For very small \(q\), the phonon frequency \(\sim qv_{T}\) is much smaller than \(\omega_{0}\). In this case, the effective equation of motion for the phonon coordinate can be written as
\[0=\rho_{M}\omega^{2}\xi_{j}-\left[\lambda_{1}q^{2}\xi_{j}+\lambda_{2}q_{l}(q_ {j}\xi_{l})\right]-\frac{r^{2}\rho_{s}\omega^{2}}{8\omega_{0}}\left[-q_{z}^{ 2}\xi_{j}+q_{z}q_{j}\xi_{z}+(q_{z}(q_{l}\xi_{l})-q^{2}\xi_{z})\delta_{jz} \right]-i\frac{r^{2}\rho_{s}}{8}\frac{\omega^{3}}{\omega_{0}^{2}}q_{z}(\vec{q} \times\vec{\xi})_{j} \tag{22}\]
Note the sign differences between the last terms of eqs. (18) and (22) in the two different frequency regimes, similar to the case of, e.g., a driven harmonic oscillator above versus below resonance. Formally the last term is one order higher in \(\omega_{0}^{-1}\) than the second last, but we shall explain shortly why we keep this term. Longitudinal sound is again unaffected. The eigenvector has \(\vec{\xi}\) parallel to \(\vec{q}\), as can be checked by multiplying eq. (22) by \(q_{j}\) and summing over \(j\) (there is no contribution from either the last or the second-last term). The transverse sounds obey
\[\left(\begin{array}{cc}\omega^{2}-q^{2}v_{T}^{2}+\frac{\rho_{s}r^{2}}{8\rho_{M}\omega_{0}}q^{2}\omega^{2}&i\frac{\rho_{s}r^{2}}{8\rho_{M}\omega_{0}^{2}}\omega^{3}q^{2}\cos\theta_{q}\\ -i\frac{\rho_{s}r^{2}}{8\rho_{M}\omega_{0}^{2}}\omega^{3}q^{2}\cos\theta_{q}&\omega^{2}-q^{2}v_{T}^{2}+\frac{\rho_{s}r^{2}}{8\rho_{M}\omega_{0}}q^{2}\omega^{2}\cos^{2}\theta_{q}\end{array}\right)\left(\begin{array}{c}\xi_{\theta}\\ \xi_{\phi}\end{array}\right)=0 \tag{23}\]
For \(\theta_{q}\) not too close to \(0\) or \(\pi\), we can ignore the off-diagonal terms in this matrix equation as they are second order in \(\omega_{0}^{-1}\). We obtain two non-degenerate modes with frequencies \(\omega=qv_{T}(1+X)^{-1/2}\) (for \(\vec{\xi}\) along \(\hat{\theta}\)) and \(\omega=qv_{T}(1+X\cos^{2}\theta_{q})^{-1/2}\) (for \(\vec{\xi}\) along \(\hat{\phi}\)). Here \(X\equiv\frac{\rho_{s}r^{2}q^{2}}{8\rho_{M}\omega_{0}}\) is a \(q\)-dependent dimensionless parameter. Thus the mode with \(\vec{\xi}\) along \(\hat{\theta}\) has a lower frequency than the one along \(\hat{\phi}\) due to the coupling to the pseudospin. For \(\theta_{q}=0\) or \(\pi\), these two modes are degenerate up to order \(\omega_{0}^{-1}\). The off-diagonal term then turns these transverse modes into circularly polarized ones. For \(\theta_{q}=0\), the modes with \((\xi_{\theta},\xi_{\phi})\propto(1,\pm i)\) have frequencies roughly given by \(\omega\approx qv_{T}(1+X)^{-1/2}[1\mp X_{2}]\), with the dimensionless parameter \(X_{2}\equiv\frac{\rho_{s}r^{2}q^{2}}{16\rho_{M}\omega_{0}}\frac{qv_{T}}{\omega_{0}}\). Note that both \(X\) and \(X_{2}\) are increasing functions of \(q\). Similar to the case in subsection III.1, the sign in front of \(X_{2}\) in this expression for \(\omega\) needs to be reversed for \(\theta_{q}=\pi\). Note that \(X_{2}\ll X\) since we are now considering \(qv_{T}\ll\omega_{0}\), and also that the circular polarization for the higher frequency mode is opposite to that in the anti-adiabatic case for a given \(\hat{q}\). For general \(\theta_{q}\), the modes are elliptically polarized. See Fig 1b.
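The \(\theta_{q}=0\) splitting quoted above can likewise be checked numerically. The sketch below (illustrative parameters only, chosen so that \(qv_{T}\ll\omega_{0}\)) solves the cubic eigenvalue condition implied by eq. (23) at \(\theta_{q}=0\) and compares the two circular branches with \(\omega\approx qv_{T}(1+X)^{-1/2}(1\pm X_{2})\).

```python
import numpy as np

rho_s, rho_M, r, vT = 0.3, 1.0, -8.0/3.0, 1.0
q, omega0 = 0.05, 5.0                                        # q*vT << omega0 (adiabatic regime)

X  = rho_s * r**2 * q**2 / (8 * rho_M * omega0)              # defined in Sec. III.2
X2 = X * (q * vT) / (2 * omega0)                             # = rho_s r^2 q^3 vT / (16 rho_M omega0^2)
D  = rho_s * r**2 * q**2 / (8 * rho_M * omega0**2)           # coefficient of the omega^3 off-diagonal term

w0 = q * vT / np.sqrt(1 + X)
for sign in (+1, -1):
    # eigenvalue condition at theta_q = 0:  (1 + X) w^2 - q^2 vT^2 = sign * D * w^3
    roots = np.roots([-sign * D, 1 + X, 0.0, -(q * vT)**2])
    w = min((rt.real for rt in roots if rt.real > 0), key=lambda rr: abs(rr - w0))
    print(w, w0 * (1 + sign * X2))      # exact root vs. perturbative expression for each branch
```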
## IV Berry curvature
We here discuss the Berry curvature for the phonon modes. Our methodology here closely follows [39] and the Supplemental Materials of [40]. In the Appendix we collect some of the relevant formulas. We shall again first investigate the small magnetic field regime (Sec. IV.1) and then the high magnetic field one (Sec. IV.2). The second regime is included here for completeness, but the information therein is not essential for our final Discussion section, so readers can choose to skip Sec. IV.2.
### Anti-adiabatic
The Lagrangian density that reproduces the equation of motion (18) can easily be found to be
\[L=L_{0,ph}+\frac{r^{2}\rho_{s}}{16}\epsilon_{jkl}\left(\frac{\partial^{2}\xi_ {j}}{\partial z\partial x_{k}}\right)\left(\frac{\partial\xi_{l}}{\partial t}\right) \tag{24}\]
The last term, in the form of an effective Lorentz force, might have been expected on phenomenological grounds. An initial guess might be a term proportional to \(\hat{z}\cdot(\vec{\xi}\times\frac{\partial\vec{\xi}}{\partial t})\): this term does arise in the case of optical phonons [31; 32; 41], but here it is not acceptable since the appearance of \(\vec{\xi}\) itself violates translational invariance. Instead, in eq. (24), a second order spatial derivative appears, similar to what has been discussed in [13; 40], though in our case the precise form, as derived in Sec. II, is different.
The conjugate momentum \(\Pi_{j}\) is given by
\[\Pi_{j}\equiv\frac{\partial L}{\partial\dot{\xi}_{j}}=\rho_{M}\left(\frac{\partial\xi_{j}}{\partial t}\right)-\frac{r^{2}\rho_{s}}{16}\epsilon_{jkl}\left(\frac{\partial^{2}\xi_{l}}{\partial z\partial x_{k}}\right) \tag{25}\]
with the equation of motion (18) just the same as
Figure 1: Schematic dispersions for the transverse phonon modes for \(q_{z}>0\). \(+\) (\(-\)) labels right (left) circularly or elliptically polarized. For \(q_{z}<0\), the \(\pm\) labels in the above figures have to be reversed.
\(\partial\Pi_{j}/\partial t=\delta L/\delta\xi_{j}\). After Fourier transforming the spatial coordinates, these two equations can be written in matrix form
\[\frac{\partial}{\partial t}\left(\begin{array}{cc}\rho_{M}\hat{1}&0\\ \rho_{M}\Omega&\hat{1}\end{array}\right)\left(\begin{array}{c}\xi\\ \Pi\end{array}\right)=\left(\begin{array}{cc}-\rho_{M}\Omega&\hat{1}\\ -\mathcal{Q}&0\end{array}\right)\left(\begin{array}{c}\xi\\ \Pi\end{array}\right) \tag{26}\]
where \(\Omega\), \(\mathcal{Q}\), \(\hat{1}\) are \(3\times 3\) matrices: \(\Omega\equiv Z(qv_{T})\cos\theta_{q}\hat{\Omega}\) with \(Z\) defined in eq (21), \(\hat{\Omega}_{jk}\equiv-\epsilon_{jkl}\hat{q}_{l}\), \(\mathcal{Q}_{jk}\equiv\lambda_{1}q^{2}\delta_{jk}+\lambda_{2}q_{j}q_{k}\), and \(\hat{1}_{jk}=\delta_{jk}\).
Eq (26) can be rewritten as
\[\frac{\partial}{\partial t}\left(\begin{array}{c}\xi\\ \Pi\end{array}\right)=-i\mathcal{S}\left(\begin{array}{c}\xi\\ \Pi\end{array}\right) \tag{27}\]
with \(\xi\), \(\Pi\) column matrices consisting of elements \(\xi_{x,y,z}\) and \(\Pi_{x,y,z}\), and \(\mathcal{S}\) a \(6\times 6\) matrix given by
\[\mathcal{S}=\left(\begin{array}{cc}-i\Omega&i/\rho_{M}\\ -i\mathcal{Q}&-i\Omega\end{array}\right) \tag{28}\]
where, rigorously speaking, the lower left element should have been \(-i\mathcal{Q}+i\rho_{M}\Omega^{2}\), and we have taken the simpler form since \(\Omega^{2}\) is second order in the phonon-pseudospin coupling and hence of higher order than the other terms we kept.
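As an illustration, the matrix \(\mathcal{S}\) of eq. (28) can be diagonalized numerically; its positive eigenvalues should reproduce the longitudinal frequency \(qv_{L}\) and the split transverse frequencies of eq. (20). The Python sketch below uses arbitrary illustrative parameters and is not part of the paper.

```python
import numpy as np

rho_s, rho_M, r = 0.3, 1.0, -8.0/3.0
lam1, lam2 = 1.0, 0.8
vT, vL = np.sqrt(lam1 / rho_M), np.sqrt((lam1 + lam2) / rho_M)

q = 0.05 * np.array([np.sin(0.7), 0.0, np.cos(0.7)])   # wavevector with theta_q = 0.7
qn = np.linalg.norm(q)
qhat = q / qn

Z = rho_s * r**2 * qn / (16 * rho_M * vT)              # eq. (21)
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1
Omega_hat = -np.einsum('jkl,l->jk', eps, qhat)          # Omega_hat_{jk} = -eps_{jkl} qhat_l
Omega = Z * qn * vT * qhat[2] * Omega_hat               # cos(theta_q) = qhat_z (field along z)
Q = lam1 * qn**2 * np.eye(3) + lam2 * np.outer(q, q)

S = np.block([[-1j * Omega, 1j / rho_M * np.eye(3)],
              [-1j * Q,     -1j * Omega]])
w = np.sort([ev.real for ev in np.linalg.eigvals(S) if ev.real > 0])
print(w)                                                                 # three positive mode frequencies
print(sorted([qn * vL, qn * vT * (1 + Z * qhat[2]), qn * vT * (1 - Z * qhat[2])]))
```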
Following [40], we search for the row vectors \((\vec{u},\vec{v})\) which satisfy, for positive frequencies \(\omega\),
\[\omega(\vec{u},\vec{v})=(\vec{u},\vec{v})\mathcal{S} \tag{29}\]
Once \((\vec{u},\vec{v})\)'s are found, the Berry curvatures \(\vec{\Omega}_{B}\) can then be evaluated via the formulas collected in Appendix B. For the longitudinal mode, \((\vec{u},\vec{v})=(u_{q}\hat{q},v_{q}\hat{q})\). The transverse modes can be more easily written in terms of \(u_{\theta,\phi}\) and \(v_{\theta,\phi}\) defined via \(\vec{u}=u_{\theta}\hat{\theta}+u_{\phi}\hat{\phi}\) and similarly for \(\vec{v}\). They obey (observe that \(\hat{\theta}\hat{\Omega}=-\hat{\phi}\) and \(\hat{\phi}\hat{\Omega}=\hat{\theta}\))
\[\omega\left(\begin{array}{c}u_{\theta}\\ u_{\phi}\\ v_{\theta}\\ v_{\phi}\end{array}\right)=\left(\begin{array}{cccc}0&-iZqv_{T}\cos\theta_{q}&-i\lambda_{1}q^{2}&0\\ iZqv_{T}\cos\theta_{q}&0&0&-i\lambda_{1}q^{2}\\ i/\rho_{M}&0&0&-iZqv_{T}\cos\theta_{q}\\ 0&i/\rho_{M}&iZqv_{T}\cos\theta_{q}&0\end{array}\right)\left(\begin{array}{c}u_{\theta}\\ u_{\phi}\\ v_{\theta}\\ v_{\phi}\end{array}\right) \tag{30}\]
The right (left) circularly polarized mode has eigenvector (normalized according to the normalization condition given in Appendix A)
\[\left(\frac{(\rho_{M}qv_{T})^{1/2}}{2},\pm\frac{i(\rho_{M}qv_{T})^{1/2}}{2}, \frac{i}{2(\rho_{M}qv_{T})^{1/2}},\mp\frac{1}{2(\rho_{M}qv_{T})^{1/2}}\right), \tag{31}\]
frequencies \(\omega=qv_{T}(1\pm Z\cos\theta_{q})\) (c.f. eq ( 20)) and curvature \(\vec{\Omega}_{B}=\pm\hat{q}/q^{2}\).
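The statements above can be verified directly: the sketch below (illustrative parameters, not from the paper) checks that the vectors in eq. (31) satisfy the column form of eq. (30) with the frequencies of eq. (20), and that they obey the normalization condition of Appendix A.

```python
import numpy as np

rho_M, vT, q, Z, c = 1.0, 1.0, 0.05, 0.0067, np.cos(0.7)
lam1 = rho_M * vT**2

A = np.sqrt(rho_M * q * vT)
M = np.array([[0, -1j*Z*q*vT*c, -1j*lam1*q**2, 0],
              [1j*Z*q*vT*c, 0, 0, -1j*lam1*q**2],
              [1j/rho_M, 0, 0, -1j*Z*q*vT*c],
              [0, 1j/rho_M, 1j*Z*q*vT*c, 0]])          # eq. (30), column form

for s in (+1, -1):                                      # s = +1: right, s = -1: left circular
    u = np.array([A/2, s*1j*A/2])
    v = np.array([1j/(2*A), -s/(2*A)])
    w = np.concatenate([u, v])                          # (u_theta, u_phi, v_theta, v_phi)
    omega = q*vT*(1 + s*Z*c)                            # eq. (20)
    print(np.allclose(omega * w, M @ w))                # eigenvector relation holds
    print(1j*(u @ v.conj() - v @ u.conj()))             # normalization, should be ~1
```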
### adiabatic
In this regime, eq. (22) indicates that the equation for the frequency is cubic. This creates complications if we want to treat the problem in the same way as in the last subsection. However, since we are treating the pseudospin-phonon coupling as small, we can simplify the problem by noting that, since the last term in eq. (22) is already small, we can replace \(\omega^{2}\) there by the "unperturbed" transverse sound frequency \((qv_{T})^{2}\) (transverse, since the last term affects only the transverse modes). Thus we now consider the effective equation of motion
\[0=\rho_{M}\omega^{2}\xi_{j}-\left[\lambda_{1}q^{2}\xi_{j}+\lambda_{2}q_{l}(q_{ j}\xi_{l})\right]-\frac{r^{2}\rho_{s}\omega^{2}}{8\omega_{0}}\left[-q_{z}^{2}\xi_{j}+q_{ z}q_{j}\xi_{z}+(q_{z}(q_{l}\xi_{l})-q^{2}\xi_{z})\delta_{jz}\right]-i\frac{r^{2} \rho_{s}}{8}\frac{\omega(qv_{T})^{2}}{\omega_{0}^{2}}q_{z}(\vec{q}\times \vec{\xi})_{j} \tag{32}\]
This equation reproduces the sound velocities discussed near the end of Sec. III.2 and we can check that the displacement eigenvectors found below are proportional to those found there.
The Lagrangian density that reproduces this equation of motion can easily be found to be
\[L=L_{0,ph}+\frac{r^{2}\rho_{s}}{8\omega_{0}}\left[\frac{1}{2}\left(\frac{ \partial^{2}\xi_{l}}{\partial z\partial t}\right)^{2}-\left(\frac{\partial^{2} \xi_{z}}{\partial z\partial t}\right)\left(\frac{\partial^{2}\xi_{l}}{ \partial x_{l}\partial t}\right)+\frac{1}{2}\left(\frac{\partial^{2}\xi_{z}}{ \partial x_{l}\partial t}\right)^{2}\right]+\frac{r^{2}\rho_{s}v_{T}^{2}}{16 \omega_{0}^{2}}\nabla^{2}\vec{\xi}\cdot\vec{\nabla}\times\left(\frac{\partial^{2} \vec{\xi}}{\partial z\partial t}\right) \tag{33}\]
Carrying out the same procedure as in the last subsection, we obtain
\[\frac{\partial}{\partial t}\left(\begin{array}{cc}\rho_{M}(1+X\hat{\Lambda})&0\\ -\rho_{M}\tilde{\Omega}&1\end{array}\right)\left(\begin{array}{c}\xi\\ \Pi\end{array}\right)=\left(\begin{array}{cc}\rho_{M}\tilde{\Omega}&1\\ -\mathcal{Q}&0\end{array}\right)\left(\begin{array}{c}\xi\\ \Pi\end{array}\right) \tag{34}\]
where \(\tilde{\Omega}\equiv X_{2}qv_{T}\cos\theta_{q}\hat{\Omega}\) (dimension frequency) with \(X,X_{2}\) defined in III.2 and \(\hat{\Omega}_{jk}\), \({\cal Q}_{jk}\) already defined in subsection IV.1,
\[\hat{\Lambda}\equiv\left(\begin{array}{ccc}\hat{q}_{z}^{2}&0&-\hat{q}_{x}\hat{q}_{z}\\ 0&\hat{q}_{z}^{2}&-\hat{q}_{y}\hat{q}_{z}\\ -\hat{q}_{z}\hat{q}_{x}&-\hat{q}_{z}\hat{q}_{y}&\hat{q}_{x}^{2}+\hat{q}_{y}^{2}\end{array}\right) \tag{35}\]
We have again the equation (27) with now
\[{\cal S}=\left(\begin{array}{ccc}i[1+X\hat{\Lambda}]^{-1}\tilde{\Omega}&i/ \rho_{M}[1+X\hat{\Lambda}]^{-1}\\ -i{\cal Q}+i\rho_{M}\tilde{\Omega}[1+X\hat{\Lambda}]^{-1}\tilde{\Omega}&i \tilde{\Omega}[1+X\hat{\Lambda}]^{-1}\end{array}\right) \tag{36}\]
where, in accordance with our approximations, the second term in the lower-left element can be dropped.
We can solve for the eigenvectors \((\vec{u},\vec{v})\) as before. It is useful to note the vector relations \(\hat{q}\hat{\Lambda}=0\), \(\hat{\theta}\hat{\Lambda}=\hat{\theta}\) and \(\hat{\phi}\hat{\Lambda}=\cos^{2}\theta_{q}\hat{\phi}\). Once more, for longitudinal modes, \((\vec{u},\vec{v})=(u_{q}\hat{q},v_{q}\hat{q})\) is unaffected by the pseudospin. If \(\theta_{q}\) is not too close to 0 or \(\pi\), in the first approximation we can ignore the effects of \(\tilde{\Omega}\). The modes are thus linearly polarized with either \(\vec{u}\), \(\vec{v}\) entirely along \(\hat{\theta}\) or \(\hat{\phi}\) with frequencies already given in subsection III.2. The normalized eigenvectors are, respectively,
\[(u_{\theta},v_{\theta})_{0}=\left(\frac{(\rho_{M}qv_{T})^{1/2}(1+X)^{1/4}}{ \sqrt{2}},\frac{i}{\sqrt{2}(\rho_{M}qv_{T})^{1/2}(1+X)^{1/4}}\right) \tag{37}\]
and
\[(u_{\phi},v_{\phi})_{0}=\left(\frac{(\rho_{M}qv_{T})^{1/2}(1+X\cos^{2}\theta_{q})^{1/4}}{\sqrt{2}},\frac{i}{\sqrt{2}(\rho_{M}qv_{T})^{1/2}(1+X\cos^{2}\theta_{q})^{1/4}}\right) \tag{38}\]
for the lower and higher frequency mode. Here the subscript 0 reminds us that we have ignored \(\tilde{\Omega}\). The effect of finite \(\tilde{\Omega}\) can be included by perturbation theory, using eqs. (37) and (38) as the "unperturbed" solutions. For the lower frequency mode, the eigenvector can be written as \((\vec{u},\vec{v})=(u_{\theta,0}\hat{\theta},v_{\theta,0}\hat{\theta})+\beta(u_{\phi,0}\hat{\phi},v_{\phi,0}\hat{\phi})\) where \(\beta\) is a small coefficient. We find that \(\beta\) is imaginary with
\[{\rm Im}\beta=\frac{X_{2}}{2X}\frac{\cos\theta_{q}}{\sin^{2}\theta_{q}}\frac{\left[(1+X)^{1/2}+(1+X\cos^{2}\theta_{q})^{1/2}\right]^{2}}{(1+X)^{1/4}(1+X\cos^{2}\theta_{q})^{1/4}} \tag{39}\]
hence \({\rm Im}\beta\) has the same sign as \(\cos\theta_{q}\). For \(q_{z}>0\), the lower frequency mode is right elliptically polarized (vice versa for \(q_{z}<0\)). Similarly, the higher frequency mode (the \(\phi\) mode before perturbation) becomes left elliptically polarized, with the degree of ellipticity characterized by the same coefficient \({\rm Im}\beta\).
For \(\theta_{q}=0\), the modes are circularly polarized, with normalized eigenvectors
\[(u_{\theta},u_{\phi},v_{\theta},v_{\phi})=\left(\frac{(\rho_{M}qv_{T})^{1/2}(1 +X)^{1/4}}{2},\mp\frac{i(\rho_{M}qv_{T})^{1/2}(1+X)^{1/4}}{2},\frac{i}{2(\rho_{ M}qv_{T})^{1/2}(1+X)^{1/4}},\frac{\pm 1}{2(\rho_{M}qv_{T})^{1/2}(1+X)^{1/4}}\right) \tag{40}\]
for the higher (left-circularly polarized) and lower (right-circularly polarized) frequency modes, respectively. The opposite signs are to be taken if \(\theta_{q}=\pi\).
Eq. (39) together with (37) and (38) allow us to obtain the Berry curvature. \(\vec{\Omega}_{B}\) has no \(\hat{\phi}\) component. For \(\theta_{q}\) not too close to 0 or \(\pi\), for the lower frequency mode,
\[\vec{\Omega}_{B}\cdot\hat{\theta}=\frac{2v_{T}}{q\omega_{0}}\frac{\cos^{2} \theta_{q}}{\sin^{3}\theta_{q}}\, \tag{41}\]
\[\vec{\Omega}_{B}\cdot\hat{q}=\frac{4v_{T}}{q\omega_{0}}\frac{\cos\theta_{q}}{ \sin^{4}\theta_{q}}. \tag{42}\]
Here we have only kept the lowest order finite terms and have used \(\frac{1}{q^{2}}\frac{X_{2}}{X}=\frac{v_{T}}{2q\omega_{0}}\). For the higher frequency mode, there is an extra negative sign for these formulas.
For \(\theta_{q}=0\), we obtain \(\vec{\Omega}_{B}=\mp\hat{q}/q^{2}\) for the two modes in eq. (40) [42].
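As a numerical cross-check (not part of the paper), the curvature components (41) and (42) follow from the leading-order \({\rm Im}\beta\) used above via the appendix formulas for \(\vec{\Omega}_{B}\cdot\hat{\theta}\) and \(\vec{\Omega}_{B}\cdot\hat{q}\); the finite-difference sketch below verifies this with arbitrary illustrative numbers.

```python
import numpy as np

vT, omega0 = 1.0, 50.0
# Re(u_theta v_phi^* - u_phi v_theta^*) ~ -Im(beta) at leading order
F = lambda qq, th: -(qq * vT / omega0) * np.cos(th) / np.sin(th)**2

q, th, h = 0.05, 0.8, 1e-6
dF_dq  = (F(q + h, th) - F(q - h, th)) / (2 * h)
dF_dth = (F(q, th + h) - F(q, th - h)) / (2 * h)

Omega_theta = -(2 * np.cos(th) / (q * np.sin(th))) * dF_dq                     # appendix formula for Omega_B . theta
Omega_q     = -(2 / q**2) * (F(q, th) - (np.cos(th) / np.sin(th)) * dF_dth)    # appendix formula for Omega_B . qhat

print(Omega_theta, 2 * vT / (q * omega0) * np.cos(th)**2 / np.sin(th)**3)      # eq. (41)
print(Omega_q,     4 * vT / (q * omega0) * np.cos(th)    / np.sin(th)**4)      # eq. (42)
```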
## V Discussions
We begin with a rough estimate for the factor \(Z\) in eq. (21), which gives the fractional splitting in section III.1. Consider the case of one ion per unit cell, and let \(\rho_{0}\) (dimension inverse volume) be the number of ions per unit volume, and \(M\) be the mass per unit cell. Then \(Z\approx\frac{\rho_{s}}{\rho_{0}}\frac{\hbar q}{Mv_{T}}\). (From here on we restore the Boltzmann constant \(k_{B}\) and Planck constant \(\hbar\).) Suppose that \(v_{T}\approx 1\)km/sec, \(M\sim 100\) proton masses, and if the spins are polarized (\(\rho_{s}=\rho_{0}\)), we get \(Z\sim 10^{-3}\) for a 1 meV phonon, a very large value compared with those predicted in the literature [11; 16] for other systems. For a paramagnet with small fields, \(\rho_{s}/\rho_{0}\sim\mu_{B}B/k_{B}T\), this number will be reduced, but still not necessarily small for not too small fields and not too high temperatures.
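The quoted \(Z\sim 10^{-3}\) follows from simple arithmetic; the back-of-the-envelope Python sketch below merely repeats the estimate with the rough numbers given in the text (SI units), mirroring the approximate expression \(Z\approx(\rho_{s}/\rho_{0})\hbar q/(Mv_{T})\).

```python
# Rough numbers from the text, SI units; rho_s = rho_0 (fully polarized spins)
hbar, mp = 1.055e-34, 1.673e-27
vT, M = 1e3, 100 * mp                    # 1 km/s, ~100 proton masses per unit cell
meV = 1.602e-22                          # 1 meV in joules

q = meV / (hbar * vT)                    # wavevector of a 1 meV acoustic phonon
Z = (hbar * q) / (M * vT)                # Z ~ (rho_s/rho_0) * hbar*q/(M*vT)
print(Z)                                 # ~1e-3, as quoted
```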
For the parameter \(X\) in sec III.2, ( note that \(X\sim\frac{qv_{T}}{\omega_{0}}Z\)) we obtain \(X\approx 10^{-2}\frac{\rho_{s}}{\rho_{0}}\frac{(\hbar qv_{T}/{\rm meV})^{2}}{( B/{\rm Tesla})}\). For a 100 Tesla field and 1 meV phonon we have a \(10^{-4}\) splitting if we take \(\rho_{s}=\rho_{0}\).
Phonons with finite Berry curvature will have an intrinsic contribution to the thermal Hall effect. Though this contribution is seemingly small and unlikely to be the sole mechanism for the observed thermal Hall effect in any system, so that extrinsic effects are also called for (e.g. [40; 43]), we here provide an estimate since this quantity is often also evaluated in the theoretical literature. Considering a small external magnetic field and the simplified situation in Sec. III.1 where we have two oppositely circularly polarized modes, from the formulas in [13; 39] we estimate [44] \(\kappa_{xy}/T\sim\frac{\delta\omega}{v_{T}}\frac{k_{B}^{2}}{\hbar}\), where \(\delta\omega\) is the typical splitting between the two oppositely polarized phonons at a given temperature, _i.e._, \(\delta\omega\sim Z(qv_{T})\) with \(\hbar qv_{T}\sim k_{B}T\), thus
\[\frac{\kappa_{xy}}{T}\sim\frac{\rho_{s}}{\rho_{0}}\frac{(k_{B}T)^{2}}{\hbar Mv_{T}^{3}}\frac{k_{B}^{2}}{\hbar}\,.\]
We obtain that \(\kappa_{xy}>0\) (see remark below eq (21) and footnote [44]), independent of sign of \(r\). Inserting the numbers, and taking again \(\rho_{s}/\rho_{0}\sim\mu_{B}B/k_{B}T\), we get
\[\kappa_{xy}\sim 10^{-8}(T/{\rm K})^{2}(B/{\rm Tesla}){\rm W/Km}. \tag{43}\]
\(\kappa_{xy}\) is proportional to \(T^{2}\) instead of the \(T^{3}\) in [13; 40] due to the temperature dependence of \(\rho_{s}\) just mentioned above. Eq. (43) gives, for \(B\sim 10\) Tesla and \(T\sim 100\) K, \(\kappa_{xy}\sim\) mW/(K m), a value comparable to those in, e.g., [40], and for \(T\sim 30\) K, \(\kappa_{xy}\sim 0.1\) mW/(K m), about an order of magnitude smaller than the peak value found experimentally for the non-monotonic temperature dependent \(\kappa_{xy}\) reported in [6]. Our number here however is likely to be an overestimate. The Berry curvature in our model relies on mixing between transverse modes. If we take into account that rotational symmetries in crystals are discrete rather than continuous, transverse phonon modes are already split for most propagation directions. For these directions the sound modes are only elliptically polarized rather than circular, and the Berry curvature will be reduced. A calculation would be similar to what we had in Sec. IV.2. Since the mixing term between the two transverse modes is \(\sim Zqv_{T}\), if the transverse mode velocities differ by \(\Delta v_{T}\), the curvature would be reduced by a factor \(\sim Z/(\Delta v_{T}/v_{T})\).
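For completeness, the prefactor in eq. (43) can be reproduced by direct arithmetic; the sketch below (same rough parameter values as above, with \(\rho_{s}/\rho_{0}\sim\mu_{B}B/k_{B}T\)) evaluates \(\kappa_{xy}\) for \(B\sim 10\) Tesla and \(T\sim 100\) K.

```python
# Arithmetic check of the kappa_xy estimate of eq. (43), SI units, rough numbers only
hbar, kB, muB, mp = 1.055e-34, 1.381e-23, 9.274e-24, 1.673e-27
vT, M = 1e3, 100 * mp
T, B = 100.0, 10.0                                        # kelvin, tesla

prefac = (muB * B / (kB * T)) * (kB * T)**2 / (hbar * M * vT**3) * kB**2 / hbar
kappa_xy = prefac * T                                     # kappa_xy = (kappa_xy / T) * T
print(kappa_xy)                                           # ~1e-3 W/(K m), i.e. ~mW/(K m)
```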
The mechanism discussed in this paper should be quite general, applicable to other systems so long as the pseudospin has spin and orbital degrees of freedom entangled [28], with the lowest multiplet not fully filled and not an orbital singlet, and with energy well separated from the higher ones, so that the phonon frequencies lie within the suitable interval between these "gaps". Details will differ according to the precise symmetry: the simple vector relation eq. (4) between the rotation matrix and the pseudospin Pauli matrices may not hold for lower symmetries, and the proportionality factor \(r\) will differ from the value given here, etc.; but otherwise the induced phase factors, the mixing between phonon branches, and the effective Lorentz forces will remain.
Our mechanism would also be relevant for magnetically ordered systems. In this case, the coupling between the pseudospins that has been ignored so far will have to be taken into account, and our phonon-pseudospin coupling would appear as a phonon-magnon coupling. There are already quite a number of papers dealing with phonon-magnon couplings [45; 46] with interesting predictions; furthermore, mechanisms of inducing Berry curvature and chirality in the coupled phonon-magnon modes have also been proposed (e.g. [45]). However, our mechanism is of a qualitatively different nature, as it arises from the Berry phase generated by the time-dependent frame of reference of the pseudospin due to the sound mode. Instead, the mechanisms in [45; 46] are ultimately both based on the modifications of the spin-spin interactions due to the phonons, with the spin-orbital coupling arising from dipole-dipole interactions or magnetic anisotropy energies (see also other theoretical works [47; 48] for \(\alpha-\)RuCl\({}_{3}\)). To what extent our present mechanism will be important for magnetically ordered systems remains to be investigated.
## VI Acknowledgement
This work is supported by the Ministry of Science and Technology, Taiwan, under grant number MOST-104-2112-M-001-006-MY3.
## Appendix
Here we summarize some of the equations from [39] (hereafter MSM) and the Supplemental Materials of [40] (CKS-SM) that we have used in text. To simplify our notations, we shall drop labels corresponding to the components, different eigenvalues, etc.
## Appendix A Eigenvectors
After Fourier transform into wavevector \(\vec{q}\) space, \(\xi_{\vec{q}}\) and \(\Pi_{\vec{q}}^{\dagger}=\Pi_{-\vec{q}}\) satisfy the commutation relation
\[[\xi_{\vec{q}},\Pi_{\vec{q}}^{\dagger}]=i\hbar \tag{10}\]
Hence
\[\beta_{\vec{q}} = \frac{1}{\sqrt{2}}(\xi_{\vec{q}}+i\Pi_{\vec{q}})\] \[\beta_{-\vec{q}}^{\dagger} = \frac{1}{\sqrt{2}}(\xi_{\vec{q}}-i\Pi_{\vec{q}}) \tag{11}\]
defines a set of annihilation and creation operators. Let \(\gamma_{\vec{q}}\), \(\gamma_{-\vec{q}}^{\dagger}\) be instead the operators that actually diagonalize the bosonic Hamiltonian, and define the transformation matrix between \(\gamma_{\vec{q}}\) and \(\beta_{\vec{q}}\) to be \({\cal T}^{-1}\) (_c.f._ MSM (6)), _i.e._
\[\left(\begin{array}{c}\gamma_{\vec{q}}\\ \gamma_{-\vec{q}}^{\dagger}\end{array}\right)={\cal T}^{-1}\left(\begin{array} []{c}\beta_{\vec{q}}\\ \beta_{-\vec{q}}^{\dagger}\end{array}\right) \tag{12}\]
which can also be re-written as (_c.f._ CKS-SM (11))
\[\left(\begin{array}{c}\gamma_{\vec{q}}\\ \gamma_{-\vec{q}}^{\dagger}\end{array}\right)={\cal M}\left(\begin{array}{c} \xi_{\vec{q}}\\ \Pi_{\vec{q}}\end{array}\right) \tag{13}\]
with thus
\[{\cal T}^{-1}=\frac{{\cal M}}{\sqrt{2}}\left(\begin{array}{cc}1&1\\ -i&i\end{array}\right) \tag{14}\]
\({\cal T}\) satisfies (MSM (10))
\[{\cal T}\left(\begin{array}{cc}1&0\\ 0&-1\end{array}\right){\cal T}^{\dagger}=\left(\begin{array}{cc}1&0\\ 0&-1\end{array}\right) \tag{15}\]
and hence also the same equation with \({\cal T}\) replaced by \({\cal T}^{-1}\). Eq (14) then shows that
\[i{\cal M}\left(\begin{array}{cc}0&1\\ -1&0\end{array}\right){\cal M}^{\dagger}=\left(\begin{array}{cc}1&0\\ 0&-1\end{array}\right) \tag{16}\]
thus equivalently CKS-SM (7).
Since we write the equation of motion for the operators \(\xi_{\vec{q}},\Pi_{\vec{q}}\) in the form eq (27) and we have defined \((u,v)\) via (29), comparison with CKS-SM (4) and (5) shows that \((u,v)\) are just the rows of the matrix \({\cal M}\). The normalization condition
\[i(\vec{u}\cdot\vec{v}^{*}-\vec{v}\cdot\vec{u}^{*})=1 \tag{17}\]
follows from (16).
## Appendix B Berry Curvature
The Berry curvature for a given band \(n\) is given in MSM's eq. (34):
\[\Omega_{B,j}=i\epsilon_{jkl}\left[\left(\begin{array}{cc}1&0\\ 0&-1\end{array}\right)\frac{\partial{\cal T}^{\dagger}}{\partial q_{k}}\left( \begin{array}{cc}1&0\\ 0&-1\end{array}\right)\frac{\partial{\cal T}}{\partial q_{l}}\right]_{nn} \tag{18}\]
Eq. (16) implies that
\[\left(\begin{array}{cc}1&0\\ 0&-1\end{array}\right){\cal T}^{\dagger}\left(\begin{array}{cc}1&0\\ 0&-1\end{array}\right)=\frac{{\cal M}}{\sqrt{2}}\left(\begin{array}{cc}1&1\\ -i&i\end{array}\right) \tag{19}\]
Substituting this into eq. (18) we get
\[\Omega_{B,j}=\epsilon_{jkl}\left[\frac{\partial{\cal M}}{\partial q_{k}}\left( \begin{array}{cc}0&-1\\ 1&0\end{array}\right)\frac{\partial{\cal M}^{\dagger}}{\partial q_{l}}\left( \begin{array}{cc}1&0\\ 0&-1\end{array}\right)\right]_{nn} \tag{20}\]
Using that the rows of \({\cal M}\) are \((\vec{u},\vec{v})\), we obtain the Berry curvature
\[\Omega_{B,j}=-\epsilon_{jkl}\left(\frac{\partial\vec{u}}{\partial q_{k}} \cdot\frac{\partial\vec{v}^{*}}{\partial q_{l}}-\frac{\partial\vec{v}}{\partial q _{k}}\cdot\frac{\partial\vec{u}^{*}}{\partial q_{l}}\right) \tag{21}\]
Note that the right hand side of this equation is real [49].
The Berry curvature can be easily evaluated using eq. (21). We display some formulas for the transverse modes, where \(\vec{u}=u_{\theta}\hat{\theta}+u_{\phi}\hat{\phi}\), \(\vec{v}=v_{\theta}\hat{\theta}+v_{\phi}\hat{\phi}\), with \(u_{\theta}\),.. \(v_{\phi}\) depending only on \(q\), \(\theta\) but not \(\phi\):
\[\vec{\Omega}_{B}\cdot\hat{q}=-\frac{2}{q^{2}}{\rm Re}\left[\left(u_{\theta}v_{ \phi}^{*}-u_{\phi}v_{\theta}^{*}\right)+\frac{\cos\theta}{\sin\theta}\left(- \frac{\partial}{\partial\theta}(u_{\theta}v_{\phi}^{*})+\frac{\partial}{ \partial\theta}(u_{\phi}v_{\theta}^{*})\right)\right] \tag{22}\]
\[\vec{\Omega}_{B}\cdot\hat{\theta}=-\frac{2\cos\theta}{q\sin\theta}{\rm Re} \left[\frac{\partial}{\partial q}(u_{\theta}v_{\phi}^{*}-u_{\phi}v_{\theta}^{*})\right] \tag{23}\]
\[\vec{\Omega}_{B}\cdot\hat{\phi}=-\frac{2}{q}{\rm Re}\left[\frac{\partial u_{ \theta}}{\partial q}\frac{\partial v_{\theta}^{*}}{\partial\theta}+\frac{ \partial u_{\phi}}{\partial q}\frac{\partial v_{\phi}^{*}}{\partial\theta}- \frac{\partial u_{\theta}}{\partial\theta}\frac{\partial v_{\theta}^{*}}{ \partial q}+\frac{\partial u_{\phi}}{\partial\theta}\frac{\partial v_{\phi}^{*}}{ \partial q}\right] \tag{24}\]
In eqs. (22-24) we have dropped the subscripts \(q\) of \(\theta_{q}\) to simplify the notation. |
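As an illustration of these formulas (not part of the original appendix), the sketch below evaluates the combination \(u_{\theta}v_{\phi}^{*}-u_{\phi}v_{\theta}^{*}\) for the anti-adiabatic eigenvectors (31); since it is independent of \(q\) and \(\theta\), only the first term of the \(\hat{q}\) component survives and one recovers \(\vec{\Omega}_{B}=\pm\hat{q}/q^{2}\).

```python
import numpy as np

def uv(rho_M, q, vT, s):
    # anti-adiabatic eigenvectors of eq. (31); s = +1 right, s = -1 left circular
    A = np.sqrt(rho_M * q * vT)
    u = np.array([A/2, s*1j*A/2])
    v = np.array([1j/(2*A), -s/(2*A)])
    return u, v

rho_M, vT, q = 1.0, 1.0, 0.05
for s in (+1, -1):
    u, v = uv(rho_M, q, vT, s)
    cross = u[0]*np.conj(v[1]) - u[1]*np.conj(v[0])     # u_theta v_phi^* - u_phi v_theta^*  (= -s/2)
    Omega_q = -(2 / q**2) * cross.real                  # derivative terms vanish: cross is q,theta independent
    print(cross, Omega_q, s / q**2)
```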
2304.01446 | Integrating Commercial and Social Determinants of Health: A Unified
Ontology for Non-Clinical Determinants of Health | The objectives of this research are 1) to develop an ontology for CDoH by
utilizing PubMed articles and ChatGPT; 2) to foster ontology reuse by
integrating CDoH with an existing SDoH ontology into a unified structure; 3) to
devise an overarching conception for all non-clinical determinants of health
and to create an initial ontology, called N-CODH, for them; 4) and to validate
the degree of correspondence between concepts provided by ChatGPT with the
existing SDoH ontology | Navya Martin Kollapally, Vipina Kuttichi Keloth, Julia Xu, James Geller | 2023-04-04T01:43:58Z | http://arxiv.org/abs/2304.01446v1 | Integrating Commercial and Social Determinants of Health: A Unified Ontology for Non-Clinical Determinants of Health
###### Abstract
_The pivotal impact of Social Determinants of Health (SDoH) on people's health and well-being has been widely recognized and investigated. However, the effect of Commercial Determinants of Health (CDoH) is only now garnering increased attention. Developing an ontology for CDoH can offer a systematic approach to identifying and categorizing the commercial factors affecting health. Those factors, including the production, distribution, and marketing of goods and services, may exert a substantial influence on health outcomes. The objectives of this research are 1) to develop an ontology for CDoH by utilizing PubMed articles and ChatGPT; 2) to foster ontology reuse by integrating CDoH with an existing SDoH ontology into a unified structure; 3) to devise an overarching conception for all non-clinical determinants of health and to create an initial ontology, called N-CODH, for them; 4) and to validate the degree of correspondence between concepts provided by ChatGPT with the existing SDoH ontology._
## Introduction
According to the World Health Organization (WHO), the social determinants of health refer to the conditions that influence people's well-being, including their birth, upbringing, living and working environment, and access to healthcare[1]. These conditions are impacted by broader factors such as economics, social policies, politics, and commercial factors that affect health. _Commercial determinants of health_ are situations, actions and omissions of business entities that affect individual and population health[2]. These determinants, driven by activities in pursuit of profit, include factors such as access to healthy food options, marketing and advertising strategies, as well as workplace practices. For example, marketing and advertising strategies used by corporations can impact consumer behavior and choices, potentially leading to unhealthy behaviors and lifestyles. Consequently, these factors can impact modifiable risk behaviors such as tobacco use, unhealthy diet, lack of physical activity, and harmful alcohol consumption, leading to overweight and obesity, elevated blood pressure, increased blood glucose levels, high cholesterol, and ultimately life-threatening diseases such as heart disease, cancer, liver cirrhosis, chronic respiratory disease, and diabetes. These non-communicable diseases may include lifestyle diseases and mental health issues. When non-communicable diseases are not under the control of individuals, but instead are caused by commercial activities, they could be called "industrial epidemics" or corporate-driven diseases[3]. Cardiovascular diseases account for most deaths among non-communicable diseases (17.9 million people annually), followed by cancers (9.3 million), chronic respiratory diseases (4.1 million), and diabetes (2.0 million). It is estimated that in the United States 88% of deaths annually are caused by such ailments, as well as 14% of premature deaths (dying at an age between 30 and 70)[4].
In the literature, various definitions and frameworks have been proposed to describe CDoH. Kickbusch et al.[2] define CDoH as "private sector strategies and approaches for promoting products and choices detrimental to health." The authors identify consumer and health behavior, individualization, and choices as subcategories of CDoH at the micro level, while at the macro level they include "global risk society," the "global consumer society," and the "political economy of globalization." West et al.[5] describe CDoH as "factors influencing health stemming from the profit motive." Drawing on existing CDoH and SDoH definitions, Lacy-Vawdon et al.[6] define CDoH as "a series of systems that initially materialize around systems of commercial and/or corporate power." While acknowledging that CDoH systems' influences may be positive or negative, the author argues that the primary focus must be on preventing and reducing harm. Mialon[7] developed a framework for CDoH based on Kickbusch et al., which lists as factors 1) the production of unhealthy commodities by corporations; 2) the use of business, market and political practices that are harmful to health; and 3) global drivers of ill-health, shaped by the practices of corporations. Nevertheless, the author notes that there is limited research on CDoH, a lack of attention to the global drivers of ill-health, and limited studies on the activities of industries other than the food, alcohol, and tobacco industries that contribute to CDoH. Overall, these definitions and frameworks highlight the complex interplay between commercial activities and health outcomes, and the need for further research and interventions to address CDoH and promote health equity.
As alluded to above, the existing body of literature has extensively studied the impact of commercial and corporate interests on population health. However, integrating these factors within the CDoH framework is a nascent area of research[3, 5, 6]. The current definitions of CDoH fail to address the linkage between CDoH and risk behaviors associated with non-communicable diseases[3]. Additionally, these definitions do not take into consideration both the positive and negative impact of CDoH factors on population health. Hence, there is a need to standardize the concepts and categorization of CDoH to close this terminology gap. To this end, one of our objectives is to develop an ontology for CDoH to address these issues. According to Gruber[8], an ontology is a formal and explicit specification of a shared conceptualization of a desired domain of interest. Ontologies have the potential to combine diverse information sources on the schema level and can be leveraged for information retrieval from unstructured text by elevating keywords to the level of ontological concepts and relationships. By developing an ontology for CDoH, we aim to provide a means of standardizing knowledge management efforts under a common conceptual model within this specific domain[9].
In order to develop an ontology, it is essential to have a comprehensive list of terms/concepts that cover the domain under consideration[10]. To enrich a domain ontology, the developers often rely on relevant research articles to gather concepts extending the depth and breadth of the ontology. In this study, we used PubMed Central (PMC) as a source for identifying relevant research in this field and eventually for harvesting concepts. However, even with an extensive search, it can be challenging to gather all relevant concepts and ensure that the ontology is comprehensive. NLP techniques and chatbots have been used for the task of text summarization. GPT (Generative Pretrained Transformer)[11] models are a type of language model, which have been trained on large datasets of text. ChatGPT[12] is a chatbot developed by the company OpenAI. It is built on top of OpenAI's GPT-3.5/4 family of large language models, and it is fine-tuned using both supervised and reinforcement learning techniques. In this research, we supplement our search strategy by utilizing ChatGPT, which can generate human-like responses to natural language prompts. We propose a novel human-AI collaborative concept collection approach for developing an ontology for CDoH, utilizing ChatGPT to expand our concept set.
To comprehensively integrate the influences of commercialization within the CDoH framework, it may be necessary to broaden the scope of the existing paradigms to address ways in which they impact the SDoH. For example, business policies and practices related to employment and working conditions can impact the social and economic status of workers, which can have downstream effects on health outcomes. By broadening the scope of the CDoH framework to consider the ways in which commercial activities interact with SDoH, we can gain a better understanding of how these factors impact health outcomes. Hence, in this research, we are drawing on the idea of Non-Clinical Determinants of Health[13] (for which we introduce the N-CODH ontology: _Non-Clinical Ontology of Determinants of Health_; pronounced as _en-code_), integrating the health impact of SDoH and CDoH. We released the initial version of N-CODH[14] by integrating our existing SDoH ontology (see Glossary of all ontologies used at the end of the paper) with the CDoH ontology presented in this paper. We hypothesize that the N-CODH ontology has the potential to transform how we approach research, policy, and public health practice, providing a more comprehensive and nuanced understanding of the complex and interrelated factors that shape health outcomes.
In summary, we are developing an ontology for CDoH by utilizing PMC articles and ChatGPT. We are supporting ontology reuse by integrating CDoH with our existing SDoH ontology. We are presenting an overarching conception for all non-clinical determinants of health, and we created an initial ontology called N-CODH. Finally, we are validating the degree of correspondence between the concepts provided by ChatGPT and the existing SDoH ontology.
#### Methods
_Literature Review and CDoH Concept Extraction_
We developed the CDoH ontology using ontology development principles as per Noy[10]. One of the ontology design principles is content reuse from existing ontologies[15]. As such our first step was to search the NCBO BioPortal[16], which is a unified collection of various ontologies and terminologies, currently containing 1,052 of them, with 15,644,567 classes, and 36,286 properties. Ontologies in BioPortal reuse content from other ontologies to facilitate the modeling of new classes, cover a subject domain, save development work, and support applications. The domain of our ontology is the definition of health effects of CDoH, but we could not locate any such ontology in BioPortal[17]. To arrive at this determination, we performed keyword searches using: "Commercial determinants of health," "Corporate determinants of health," "Commercial drivers of ill health," "Commercial determinants of ill health," "Commercial drivers of non-communicable disease," "Commercial determinants of non-communicable devices,"
"Commercial determinants of obesity" and their variations in the _find an ontology_ and _class search_ fields in BioPortal. Since our searches did not yield any results, we proceeded to develop the CDoH ontology from scratch.
We utilized the Preferred Reporting Items for Systematic reviews and Meta-Analyses framework (PRISMA 2020)[18] as outlined in Figure 1. For collecting the relevant articles for developing the CDoH ontology, we did a scoping search in PubMed Central (PMC)[19] using the query: _(commercial [All Fields] AND determinants [All Fields] AND ("health"[MeSH Terms] OR "health"[All Fields]) AND +framework [All Fields]) AND ("2018/01/17"[PDat] : "2023/01/15"[PDat])_. The search returned a total of 23,342 full-text articles. After removing embargoed articles, 23,094 full-text documents were moved to the next phase of screening. In this phase, 23,071 articles were eliminated that met the exclusion criteria: a "study on subpopulation without broader implication" and those articles that "did not discuss the health/climatic impacts of CDoH in the title/abstract." We identified 23 full-text articles that did not meet the exclusion criterion. We performed forward learning (extracting relevant articles from bibliographies of identified sources) and backward learning (extracting documents that cited the identified articles). Forward learning helped us identify nonacademic articles, including policy documents and population statistics from government websites that resulted in the addition of 14 articles from outside of PMC.
One issue that ontology builders routinely confront is that they need to work with "expensive" subject matter experts and ontology experts. Ideally, contributors to an ontology should possess both subject matter and ontology expertise. To address the issue that such experts are hard to recruit, we performed a pilot study to explore the use of ChatGPT as a "contributor." We extracted unique impacts of CDoH on public health by interrogating ChatGPT. Example prompts were "impact of CDoH on health outcome," "subcategories of the health impact of CDoH," "factors that impact health due to commercial drivers and corporates," "climatic hazards from CDoH," "10 effects of climate change that cause ill-health contributed by corporates," "list 20 subcategories of factors in private sector that cause lifestyle diseases," etc. We posed several semantically similar questions and were able to extract 40 unique impacts from ChatGPT. Each of these impacts was validated by searching for corresponding articles in PMC, using the extracted impacts as our search keywords. This analysis resulted in adding 72 articles that were excluded from the previous review. After the inclusion phase we had 109 full text research articles/reports and policy documents for concept extraction. We did a manual review of these 109 documents to extract all the concepts for developing CDoH ontology. (In the future, we will revisit this step using late breaking NLP methods.)
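A minimal sketch of the keyword-validation step described above is shown below; it is illustrative only (the actual screening in this study was performed manually), and it assumes the standard NCBI E-utilities esearch endpoint. The example impact phrases are hypothetical placeholders.

```python
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pmc_hit_count(phrase: str) -> int:
    """Return the number of PubMed Central records matching a search phrase."""
    params = {"db": "pmc", "term": phrase, "retmode": "json", "retmax": 0}
    reply = requests.get(ESEARCH, params=params, timeout=30).json()
    return int(reply["esearchresult"]["count"])

# Hypothetical ChatGPT-suggested impacts to validate against the literature
suggested_impacts = [
    "marketing of unhealthy food and obesity",
    "corporate lobbying and tobacco control",
]
for impact in suggested_impacts:
    # keep only impacts with at least one supporting article
    print(impact, pmc_hit_count(impact))
```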
#### Development of the CDoH Ontology
After conducting a thorough analysis of all the concepts extracted during the concept collection phase, we divided the CDoH concepts into five main categories. They are 1) elements attributed by commercial factors, 2) elements attributed by economic factors, 3) elements attributed by environmental factors, 4) elements attributed by individual factors and 5) elements attributed by social factors. We used Protege 5.5.0[20] for implementing the CDoH ontology in Web Ontology Language (OWL). Protege refers to "concepts" as "classes," and allows adding properties and relationships between the classes. The class "Thing" is predefined in Protege, and is used as the root of every ontology created with it. Protege enables users to edit ontologies in OWL and to use a reasoner to validate the consistency and coherence of the developed ontologies. We have performed consistency checking in Protege by utilizing HermiT reasoner version 1.4.3.456 [21]. We have also added object and data properties to concepts, allowing us to capture complex relationships between elements attributed to different factors. Examples of the object properties are "_have education level,"_ which associates "person" with "education level," and "_have contaminants,"_ which relates "available source of drinking water" with chemicals such as "radon," "fluoride," etc.
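For readers who prefer a programmatic workflow, the sketch below shows one possible way to declare analogous classes and object properties using the owlready2 Python library; this is not the authors' implementation (the ontology was built in Protege 5.5.0), and the IRI and file name are made up.

```python
from owlready2 import get_ontology, Thing, ObjectProperty

# Hypothetical IRI; the real CDoH ontology IRI is not given here
onto = get_ontology("http://example.org/cdoh.owl")

with onto:
    class Person(Thing): pass
    class EducationLevel(Thing): pass
    class AvailableSourceOfDrinkingWater(Thing): pass
    class Contaminant(Thing): pass

    class have_education_level(ObjectProperty):       # relates Person to EducationLevel
        domain = [Person]
        range = [EducationLevel]

    class have_contaminants(ObjectProperty):          # relates a water source to a chemical contaminant
        domain = [AvailableSourceOfDrinkingWater]
        range = [Contaminant]

onto.save(file="cdoh_sketch.owl", format="rdfxml")
```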
#### Creating the N-CODH Ontology by integrating the CDoH ontology with three other ontologies
Commercial and social determinants overlap, as commercial activities can influence social factors and vice versa; to effectively address this complex interplay, it is important to take a comprehensive and integrated approach. Now that we have developed the CDoH ontology, we describe our approach to developing the N-CODH ontology. We imported three existing ontologies with factors affecting non-clinical outcomes to improve the coverage and flexibility of the CDoH ontology [22]. We integrated the designed ontology with our previously developed Social Determinants of Health Ontology, available in BioPortal[23]. This SDoH ontology lays great emphasis on covering the healthcare consequences of health inequities in hospitals and among practitioners by reusing the HOME ontology[24]
(Healthcare Ontology for Minority Equity). HOME specifically deals with healthcare impacts due to implicit bias within and outside of healthcare. By standardizing the contributors of non-communicable diseases in an ontology, we can address the challenge of heterogeneity embedded in their definitions, categorizations, and applications. Therefore, by adding SDoH concepts to the CDoH ontology, we have created an ontology that has a comprehensive coverage of non-clinical determinants of health. The development of N-CODH is a major achievement of this study. Additionally, to represent the time progression of events, we also imported the Time Event Ontology (TEO) from BioPortal into N-CODH. Data properties, such as "parts_per_million," were added to N-CODH to represent, for example, the maximum chemical contaminant levels in drinking water. We annotated N-CODH with CURIE IDs, which ensures interoperability and makes it easier to use N-CODH as a gold standard for NLP tasks.
#### Ontology evaluation
Ontology evaluation is defined as the process of assessing the quality of an ontology against a set of evaluation criteria. The four main methods of ontology evaluation are gold-standard comparison, application-based evaluation, data sources comparison, and human-centric evaluation[25]. We performed application-based and human-centric evaluation, as gold-standard and data source comparisons do not apply due to the unavailability of such data to us. In addition to using the HermiT reasoner, we also used OntoMetrics[26] for application-based evaluation. Due to the absence of an existing ontology that deals with factors contributing to ill health from non-clinical determinants, we opted to add a human-centric evaluation. We involved two subject matter experts (VK, JX) with extensive experience in biomedical ontology evaluation to assess the N-CODH ontology. Below we describe this evaluation in detail.
#### Application-based evaluation
The HermiT reasoner can be used to determine whether the ontology is consistent and coherent. OntoMetrics [26], on the other hand, is intended to evaluate certain aspects of ontologies and their potential for knowledge representation. Metrics provided by OntoMetrics describe domain-independent aspects of the ontology and provide deeper insights than HermiT. The OWL file developed using Protege was uploaded to OntoMetrics as an XML file to calculate the metrics, especially schema metrics.

Figure 1: PRISMA diagram of study inclusion.

_Schema metrics_ are used to evaluate the depth, width, richness, and inheritance of the designed ontology. Relationship richness reflects the diversity of relations and placement of relations in the ontology. Attribute richness reflects the number of attributes that are defined for each class. It can indicate both the quality of ontology design and the amount of information pertaining to instance data. Inheritance richness is a measure that describes the distribution of information across different levels of the ontology's inheritance tree, or the fan-out of parent classes. This is a good indication of how well knowledge is grouped into different categories and subcategories in the ontology. Class richness is related to how instances are distributed across classes.
_Human expert evaluation_: After validating the N-CODH ontology for consistency, coherence, and semantic correctness, we utilized human expert evaluation to investigate whether the developed ontology correctly covers the pertinent aspects of the domain under consideration. We designed a spreadsheet with concept pairs of the form "Parent →IS-A→ Child" to minimize ambiguity. The parent and child concepts are connected using an IS-A relationship. Table 1 shows a snippet from the evaluation sheet with concept pairs.
Both human evaluators (VK and JX) were provided with the same spreadsheet of 100 concept pairs. Among the 100 pairs, we provided 10 concept pairs as training samples to present the flavour of the ontology and 90 pairs that needed to be evaluated. The spreadsheet contained three kinds of concept pairs: pairs related as parent-child, pairs related as ancestor_or_grandparent-child, and pairs that were not hierarchically related. Both VK and JX were aware of the fact that the spreadsheet contained these different kinds of concept pairs. The spreadsheet contained three empty columns with the headings "Child," "Farther away," and "Reason if unrelated." The 10 samples provided to the evaluators included five with the "Child" field filled with "No" and the corresponding reasons provided in "Reason if unrelated," three with the "Child" field filled with "Yes," and two with the "Farther away" field filled with "Yes."
For each pair, the fourth column ("Child?" in Table 1) had to be filled with "Yes" if the evaluator felt that the concepts were connected by a parent-child (IS-A) relationship, and "No" otherwise. If the answer was "No," they were asked to fill in the reason in the column "Reason if unrelated." During the evaluation phase, these reasons provided us with directions on how to improve the design of the ontology. The evaluators were asked to fill in the "Farther away" column with "Yes" whenever they felt that the concepts were related by a grandparent or ancestor relationship, i.e., a _chain_ of IS-A relationships, and to give reasons in this case as well (in the "Reason if unrelated" column). VK and JX independently reviewed the pairs, and we used Cohen's kappa [27] to identify the level of agreement. Cohen's kappa (\(\kappa\)) is a statistical coefficient that represents the degree of agreement between two raters. A \(\kappa>0.4\) is considered moderate agreement and \(\kappa=1\) means perfect agreement. To evaluate the statistical significance of their individual results, we used Fisher's exact test [28].
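As an illustration of how these two statistics are computed (using the counts reported later in Tables 4 and 5; scipy is assumed to be available), a short sketch is shown below.

```python
# Cohen's kappa from the rater-agreement counts, and Fisher's exact test on a
# 2x2 confusion matrix. The numbers come from Tables 4 and 5 of this paper.
from scipy.stats import fisher_exact

def cohens_kappa(both_yes, both_no, only_rater1_yes, only_rater2_yes):
    """Cohen's kappa for two raters giving binary include/exclude judgments."""
    n = both_yes + both_no + only_rater1_yes + only_rater2_yes
    p_observed = (both_yes + both_no) / n
    # Marginal "include" rates of each rater
    p1_yes = (both_yes + only_rater1_yes) / n
    p2_yes = (both_yes + only_rater2_yes) / n
    p_expected = p1_yes * p2_yes + (1 - p1_yes) * (1 - p2_yes)
    return (p_observed - p_expected) / (1 - p_expected)

# Table 4 counts: 31 joint inclusions, 36 joint exclusions, 3 + 20 disagreements
kappa = cohens_kappa(31, 36, 3, 20)
print(f"observed agreement = {(31 + 36) / 90:.4f}, kappa = {kappa:.5f}")
# -> observed agreement = 0.7444, kappa = 0.50502

# Fisher's exact test on evaluator 1's confusion matrix (Table 5)
odds_ratio, p_value = fisher_exact([[39, 0], [7, 44]])
print(f"p = {p_value:.2e}")  # p < 0.0001
```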
\begin{table}
\begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline
**Parent** & **Relation** & **Child** & **Child?** & **Farther away** & **Reason if unrelated** \\ \hline Effect of climatic changes & →IS-A→ & Marketing of unhealthy food products & No & & Child concept relates to promotion of unhealthy food products and has no bearing on the parent concept, which relates to climate change. \\ \hline Eating related psychopathology & →IS-A→ & Binge eating disorder & Yes & & \\ \hline Chemical risk in drinking water & →IS-A→ & Social media affected health outcomes & No & & Health outcomes affected by social media cannot be a child of chemical risk in drinking water. \\ \hline Trade and globalisation effect on health disparities & →IS-A→ & Violating labour standards & & Yes & The concepts share a grandparent-child relationship. \\ \hline \end{tabular}
\end{table}
Table 1: A snippet of the spreadsheet with concept pairs provided for evaluation to the human expert.
_Evaluating the Concordance of the ontology with ChatGPT_
To explore the concordance of the ontology with ChatGPT, we employed the evaluation sheet developed for the SDoH ontology in previous work. The Social Determinants of Health Ontology (SDoH ontology) is available in BioPortal [23] and had been evaluated by two medical ontology experts and a physician. For this study, ChatGPT was given concept pairs using the natural user query pattern: _"Neighborhood and built environment" →IS-A→ "Proximity to industrial facilities"_ along with the question "is this a valid IS-A relationship?" The arrow was part of the input to ChatGPT. ChatGPT responded either with a positive answer (along the lines of "Yes, this is a valid parent-child relationship") or a negative answer ("No, these concepts do not share a strict IS-A relationship"), along with explanations for either case. In cases where ChatGPT responded negatively, we asked follow-up questions to determine how the relationship could be defined or how the child concept could be modified. Table 2 illustrates a few of the concept pairs presented to ChatGPT. For the SDoH ontology validation study, out of 60 concept pairs, 20 pairs shared a parent-child relationship, 20 pairs were unrelated, and the remaining 20 pairs shared a grandparent relationship (i.e., the concepts were related but not directly related).
For those concept pairs that ChatGPT did not consider as related by an IS-A link, but instead considered it to be related by a grandparent-child relation, we experimented with a novel way of evaluation, performed by prompting ChatGPT with a series of questions diagrammatically explained in Figure 2. Subfigure a) shows that we proposed to ChatGPT that B is a child of A. However, ChatGPT indicated that it "thinks" of B as a grandchild of A. Subfigure b) represents this graphically. We then challenged ChatGPT to tell us the children of A (Subfigure c)). Interestingly, in some cases it returned B as a child of A (Subfigure d)) while in other cases it did not.
As per the SDoH ontology, "Poor housing" →IS-A→ "pest infested house," but ChatGPT disagreed with the relation, stating that _"Poor housing" and "pest infested house" can have a distant hierarchical relationship_. According to ChatGPT, poor housing can encompass a variety of conditions that make a dwelling substandard, and one of those conditions could be pest infestation. Next, we prompted ChatGPT to return 10 concepts that have IS-A relationships to "Poor housing." The response from ChatGPT included _insect or pest infestation_ along with other concepts such as overcrowding in the house, lack of basic amenities, exposure to environmental hazards, lack of ventilation, homelessness, etc. In the Results section, we will present the breakdown of these cases. A total of 276 prompts were used to obtain evaluation results for the 60 pairs from ChatGPT.
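A schematic sketch of this follow-up prompting flow is shown below. The `ask_chatgpt` callable is a placeholder for whatever chat interface is used (the study used ChatGPT interactively), and the prompts, parsing, and stub responses are simplified assumptions rather than the exact wording used in the study.

```python
# Simplified sketch of the Figure 2 evaluation flow for one concept pair.
from typing import Callable

def evaluate_pair(parent: str, child: str,
                  ask_chatgpt: Callable[[str], str]) -> str:
    """Classify a concept pair following the flow of Figure 2 (simplified)."""
    # Step a): propose the direct IS-A relationship.
    answer = ask_chatgpt(
        f'"{parent}" ->IS-A-> "{child}". Is this a valid IS-A relationship?'
    )
    if answer.strip().lower().startswith("yes"):
        return "is-a"
    # Step c): otherwise, ask for the children of the parent concept.
    listing = ask_chatgpt(
        f'List 10 concepts that have an IS-A relationship to "{parent}".'
    )
    # Step d): if the child reappears in the listing, the earlier negative
    # answer is contradicted and the pair is treated as recoverable.
    if child.lower() in listing.lower():
        return "child recovered in second step"
    return "not IS-A (possibly grandparent, part-of, or type-of)"

# Canned stub in place of the real chat interface, echoing the "Poor housing"
# example from the text (responses are invented for illustration only).
def _stub(prompt: str) -> str:
    if "valid IS-A" in prompt:
        return "No, these concepts do not share a strict IS-A relationship."
    return "overcrowding, a pest infested house, lack of ventilation, homelessness"

print(evaluate_pair("Poor housing", "Pest infested house", _stub))
# -> child recovered in second step
```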
### Results
_Developed N-CODH Ontology_
The CDoH ontology developed using Protege contains 317 classes and 675 axioms along with 27 object properties and 19 data properties. Figure 3 represents the main categories and the direct subclasses of the CDoH ontology in Protege. The IS-A relationships are indicated by indentation in the figure.
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**Parent** & **Relation** & **Child** \\ \hline Impact of food insecurity & →IS-A→ & Metabolic disturbances from poor nutrition \\ \hline Poor Housing & →IS-A→ & Bullying at school \\ \hline Economic instability & →IS-A→ & Inability to enroll in federal assistance \\ \hline Poor Workplace condition & →IS-A→ & Poor pairing of team members at work \\ \hline \end{tabular}
\end{table}
Table 2: Sample of concept pairs given to ChatGPT.
Figure 2: Evaluation framework for concept pairs not connected with IS-A relationship as per ChatGPT. a) Is concept B a sub concept of A? b) ChatGPT states Concept B is a grandchild of concept A. c) ChatGPT is prompted to list all the child concepts of Concept A. d) ChatGPT lists all child concepts of A _including_ B, contradicting itself.
N-CODH is a domain ontology that integrates the CDoH ontology with the existing SDoH ontology, the Healthcare Ontology for Minority Equity (HOME), and the Time Event Ontology (TEO). N-CODH contains 611 classes and 2603 axioms. To reference biomedical entities, Compact Uniform Resource Identifiers (CURIEs) have been added to the ontology [29]. We defined 41 object properties and 28 data properties in the first version of N-CODH. The top-level classes of N-CODH are depicted in a partial conceptual framework shown in Figure 4. The N-CODH OWL file is available on GitHub [22] and the NCBO BioPortal [14].
#### Metrics quality of N-CODH
According to the HermiT reasoner running in Protege, N-CODH is a coherent and consistent ontology. We performed an analysis of the ontology using OntoMetrics to obtain the schema metrics, which are presented in Table 3. The N-CODH ontology aims to be a comprehensive representation covering the impacts of commercial determinants of health **and** of social determinants of health. It is characterized by low attribute richness and higher inheritance richness. The inheritance richness represents the horizontal nature of the ontology, indicating fewer levels of inheritance and a higher number of subclasses per class. N-CODH consists mainly of class-subclass relationships (as opposed to semantic relationships), leading to lower semantic relationship richness, which represents the diversity of relations and their placement in the ontology.
#### Human evaluation results of N-CODH
An evaluation by human experts was performed to ensure that the domain knowledge represented in N-CODH correctly reflects human intuitions. Both VK and JX independently evaluated 90 random concept pairs, which included 32 IS-A pairs, 14 grandparent-child pairs, and 44 unrelated pairs connected erroneously with IS-A relations. Table 4 shows the input values used to calculate Cohen's kappa. We obtained \(\kappa=0.50502\), which corresponds to 74.44% observed agreement and indicates moderate agreement about the ontology between the two evaluators. The confusion matrices for the Fisher exact test corresponding to each evaluator are provided in Table 5 and Table 6. In the metric input, hierarchically related concept pairs include both IS-A relationships and ancestor-grandchild relationships. For both evaluators, we obtained a p value \(<0.0001\), which is less than 0.05, implying that the evaluation results are statistically significant [28]. Based on the feedback from the experts, we renamed two of the parent concepts in N-CODH for better clarity: "Access to farmers market" was changed to "transportation access to farmers market," and "Fear of deportation" was changed to "fear of deportation of illegal workers in hazardous jobs."
Figure 3: Main classes and direct subclasses of the CDoH ontology in Protege.
_Validation study results of ChatGPT:_ ChatGPT agreed that the 20 nonrelated concept pairs taken from the SDoH ontology should not be connected by an IS-A relationship. It also correctly identified the 20 grandparent relationships. Results for the parent-child relationships were less strong: the initial number of agreements was 9 and the number of disagreements was 11. We attempted to establish the parent-child relationship for 7 of the 11 pairs according to Figure 2.c). For 5 of these 7 pairs, the children were recognized as such in the second step, corresponding to Figure 2.d); for the remaining two that was not the case. Among the remaining 4 (\(=11-5-2\)) concept pairs, 3 concept pairs were linked by "part-of" relationships and one concept pair was connected by a "type-of" relationship, according to ChatGPT. We consider the type-of relationship sufficiently similar to the parent-child (IS-A) relationship for our purposes.
**Table 3.** Schema metric returned by OntoMetrics.
\begin{tabular}{|p{113.8pt}|p{113.8pt}|} \hline
**Metrics** & **Value** \\ \hline Attribute richness & 0.008876 \\ \hline Inheritance richness & 0.98816 \\ \hline Relationship richness & 0.12336 \\ \hline Axioms/Class ratio & 4.49905 \\ \hline Class/relation ratio & 0.88713 \\ \hline \end{tabular}

**Table 4.** Cohen Kappa input metrics
\begin{tabular}{|p{113.8pt}|p{113.8pt}|} \hline
**Description** & **Count** \\ \hline Both evaluators agree to include & 31 \\ \hline Both evaluators agree to exclude & 36 \\ \hline First evaluator wants to include & 3 \\ \hline Second evaluator wants to include & 20 \\ \hline \end{tabular}

**Table 5.** Confusion matrix of evaluator 1.
\begin{tabular}{|p{113.8pt}|p{85.0pt}|p{85.0pt}|} \hline
**Confusion matrix** & Hierarchical related concept pairs & Unrelated concept pairs \\ \hline Evaluated as hierarchical related concept pairs & 39 & 0 \\ \hline Evaluated as unrelated concept pairs & 7 & 44 \\ \hline \end{tabular}

**Table 6.** Confusion matrix of evaluator 2.
\begin{tabular}{|p{113.8pt}|p{85.0pt}|p{85.0pt}|} \hline
**Confusion matrix** & Hierarchical related concept pairs & Unrelated concept pairs \\ \hline Evaluated as hierarchical related concept pairs & 42 & 11 \\ \hline Evaluated as unrelated concept pairs & 4 & 33 \\ \hline \end{tabular}
**Conclusions**
In this project, we created an ontology to address the health impacts of CDoH, including concepts such as health hazards from climatic changes triggered by commercial actions. Using Protege 5.5.0, we developed the CDoH ontology with 675 axioms and 317 classes along with 27 object properties and 19 data properties. Our research on CDoH indicated a need to integrate it with our previously developed Social Determinants of Health Ontology (SDoH ontology), the Healthcare Ontology for Minority Equity (HOME), and the Time Event Ontology (TEO), resulting in the development of N-CODH. The initial N-CODH ontology includes 611 classes and 2603 axioms.
To evaluate the N-CODH ontology, we utilized the HermiT reasoner and the OntoMetrics tool along with an evaluation by two human experts for domain coverage. We also conducted a validation study to determine whether ChatGPT could be used to support the development of an ontology. By leveraging ChatGPT as a "contributor," we were able to supplement our publication and concept collection efforts and expand the breadth of our ontology's coverage. This human-AI collaborative approach has the potential to reduce the cost and time required to build an ontology, while still maintaining a high level of accuracy and rigor. During the validation study, ChatGPT provided us with the insight that 11 of the 60 concept pairs were recognized as not strictly IS-A related. _Thus, it would be beneficial for ontology developers in general to revisit and review their parent-child pairs with ChatGPT and make necessary adjustments._ In other words, ChatGPT can be utilized as an important tool to validate additional relevant concept pairs, enriching the ontology to the desired level of granularity.
**Limitation and Future Work**
One limitation of this study is that concepts were identified by human review. A second limitation is that only research articles from PMC were included. A third limitation is that the exclusion criteria that we applied could have resulted in omitting pertinent concepts. To address these gaps, in the future, NLP techniques will be utilized to extract relevant concepts from policy documents, population surveys, mortality surveys, clinical notes, scientific publications, etc. Exclusion criteria will be relaxed. As a final notable limitation, human review was limited to two experts. A third expert will be added to the team, when available.
**Glossary**
SDoH ontology: Social Determinants of Health Ontology. On BioPortal, previously developed by this team.
CDoH Ontology: Commercial Determinants of Health Ontology (No Acronym): On BioPortal, developed in this paper.
HOME: Health Ontology for Minority Equity: On BioPortal, previously developed by this team.
N-CODH (_en-code_): Non-Clinical Ontology of Determinants of Health. On BioPortal/GitHub, developed in this paper.
TEO: Time Event Ontology: On BioPortal. Imported into N-CODH.
|
2303.05687 | Scattering and Gathering for Spatially Varying Blurs | A spatially varying blur kernel $h(\mathbf{x},\mathbf{u})$ is specified by an
input coordinate $\mathbf{u} \in \mathbb{R}^2$ and an output coordinate
$\mathbf{x} \in \mathbb{R}^2$. For computational efficiency, we sometimes write
$h(\mathbf{x},\mathbf{u})$ as a linear combination of spatially invariant basis
functions. The associated pixelwise coefficients, however, can be indexed by
either the input coordinate or the output coordinate. While appearing subtle,
the two indexing schemes will lead to two different forms of convolutions known
as scattering and gathering, respectively. We discuss the origin of the
operations. We discuss conditions under which the two operations are identical.
We show that scattering is more suitable for simulating how light propagates
and gathering is more suitable for image filtering such as denoising. | Nicholas Chimitt, Xingguang Zhang, Yiheng Chi, Stanley H. Chan | 2023-03-10T03:39:23Z | http://arxiv.org/abs/2303.05687v2 | # Scattering and Gathering for Spatially Varying Blurs
###### Abstract
A spatially varying blur kernel \(h(\mathbf{x},\mathbf{u})\) is specified by an input coordinate \(\mathbf{u}\in\mathbb{R}^{2}\) and an output coordinate \(\mathbf{x}\in\mathbb{R}^{2}\). For computational efficiency, we sometimes write \(h(\mathbf{x},\mathbf{u})\) as a linear combination of spatially invariant basis functions. The associated pixelwise coefficients, however, can be indexed by either the input coordinate or the output coordinate. While appearing subtle, the two indexing schemes will lead to two different forms of convolutions known as _scattering_ and _gathering_, respectively. We discuss the origin of the operations. We discuss conditions under which the two operations are identical. We show that scattering is more suitable for simulating how light propagates and gathering is more suitable for image filtering such as denoising.
Spatially varying blur, basis representation, scattering, gathering
## I Introduction
In the two-dimensional space, the convolution between an input image \(J(\mathbf{x})\) and a shift-invariant kernel \(h(\mathbf{x})\) produces an output image \(I(\mathbf{x})\) via the well-known integral
\[I(\mathbf{x})=\int_{-\infty}^{\infty}h(\mathbf{x}-\mathbf{u})J(\mathbf{u})\ d \mathbf{u}. \tag{1}\]
In this equation, \(\mathbf{x}\in\mathbb{R}^{2}\) is a two-dimensional coordinate in the output space and \(\mathbf{u}\in\mathbb{R}^{2}\) is a coordinate in the input space. This definition is ubiquitous in all shift-invariant systems.
If the kernel \(h\) is spatially _varying_, then it is no longer a function of the coordinate difference \(\mathbf{x}-\mathbf{u}\) but a function of two variables \(\mathbf{x}\) and \(\mathbf{u}\). The resulting kernel \(h(\mathbf{x},\mathbf{u})\) will give the input-output relationship via the integral
\[I(\mathbf{x})=\int_{-\infty}^{\infty}h(\mathbf{x},\mathbf{u})J(\mathbf{u})\ d \mathbf{u}, \tag{2}\]
also known as the superposition integral. That is, at every output coordinate \(\mathbf{x}\), there is a kernel \(h(\mathbf{x},\mathbf{u})\) which is a function of the input coordinate \(\mathbf{u}\).
While spatially varying kernels are more difficult to analyze because they cannot be directly handled by Fourier transforms [1], they are common in image _formation_ and image _processing_, which are twins in many situations, such as in the case of kernel estimation [2, 3, 4] or image restoration [5, 6, 7]. In image formation, the spatially varying kernels are used to model how light propagates from the object plane to the image plane. These kernels are known as the point spread functions (PSF), which may be spatially varying due to various degradations in the medium or aberrations in the imaging system (such as spherical aberration). In image processing, the spatially varying kernels are used to filter the input image for applications such as denoising or interpolation. The spatially varying nature in these situations can come from examples such as non-local edge-aware filters, where the shapes and orientations of the filters change depending on the image.
The theme of this paper is about the decomposition of the kernel in terms of basis functions. If \(h\) is _spatially invariant_, we may express it via the equation
\[h(\mathbf{x}-\mathbf{u})=\sum_{m=1}^{M}a_{m}\varphi_{m}(\mathbf{x}-\mathbf{u }), \tag{3}\]
where \(\{\varphi_{1},\varphi_{2},\ldots,\varphi_{M}\}\) are orthogonal basis functions. These functions could be as simple as the derivatives of Gaussians, or they can be learned from a dataset of kernels via principal component analysis. The scalars \(\{a_{1},a_{2},\ldots,a_{M}\}\) are the basis coefficients. They are often constructed according to the local image statistics or the underpinning physics.
The decomposition of \(h\) into orthogonal basis functions can be useful from a parameterization point of view. While \(h\) is high-dimensional, the decomposition allows us to represent \(h\) in a low-dimensional space using the set of \(M\) coefficients. For applications such as blind deconvolution, this low-dimensional representation can be effective for kernel estimation because the search space is smaller.
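As a small illustration of this parameterization (with a random stand-in for a learned, e.g. PCA, basis rather than anything from the paper), the coefficients of a kernel with respect to an orthonormal basis are simply inner products:

```python
# Sketch of the low-dimensional parameterization in (3): with orthonormal
# basis kernels, the coefficients a_m are inner products with the kernel.
import numpy as np

rng = np.random.default_rng(1)
k, M = 15, 5
basis, _ = np.linalg.qr(rng.standard_normal((k * k, M)))  # orthonormal columns
h = rng.standard_normal(k * k)                            # a flattened k x k kernel

a = basis.T @ h                  # the M basis coefficients a_1, ..., a_M
h_approx = basis @ a             # low-dimensional approximation of the kernel
print(a.shape)                   # -> (5,)
```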
In the case of _spatially varying_ kernels, the subject of this paper, the basis representation in (3) needs to be modified so that it can take into account of the two variables \(\mathbf{x}\) and \(\mathbf{u}\). However, there is an ambiguity due to the existence of two options which we call _gathering_ and _scattering_:
\[\text{(Gathering)} h(\mathbf{x},\mathbf{u})=\sum_{m=1}^{M}a_{\mathbf{x},m}\varphi_{m}( \mathbf{x}-\mathbf{u}), \tag{4}\] \[\text{(Scattering)} h(\mathbf{x},\mathbf{u})=\sum_{m=1}^{M}a_{\mathbf{u},m}\varphi_{m}( \mathbf{x}-\mathbf{u}). \tag{5}\]
In both options, the spatially varying kernel \(h(\mathbf{x},\mathbf{u})\) is written as a combination of _invariant_ kernels \(\{\varphi_{1},\varphi_{2},\ldots,\varphi_{M}\}\). These \(\varphi_{m}\)'s are spatially invariant, so they can be written as \(\varphi_{m}(\mathbf{x}-\mathbf{u})\). The difference between the two options lies in the coefficient \(a_{\mathbf{x},m}\) and \(a_{\mathbf{u},m}\). In the former case, the basis coefficient \(a_{\mathbf{x},m}\) is indexed by the output coordinate \(\mathbf{x}\). For any pixel \(\mathbf{x}\) in the output space, the equation linearly combines the basis functions at that output pixel location through \(\{a_{\mathbf{x},1},\ldots,a_{\mathbf{x},M}\}\). In the latter case, the basis coefficient
is indexed by the input coordinate \(\mathbf{u}\). It is not immediately obvious why one would want to do so; for now we introduce it by symmetry and defer the discussion to later in the paper.
**Remark**: Readers may wonder if we can define \(h(\mathbf{x},\mathbf{u})\) using a global \(a_{m}\) instead of a pixelwise \(a_{\mathbf{x},m}\) or \(a_{\mathbf{u},m}\). If we do so, for example by defining \(h(\mathbf{x},\mathbf{u})=\sum_{m=1}^{M}a_{m}\varphi_{m}(\mathbf{x}-\mathbf{u})\), then \(h(\mathbf{x},\mathbf{u})\) will be invariant because it is a linear combination of invariant basis functions. This will defeat the purpose of studying a set of varying kernels. \(\square\)
At first glance, the two choices above seem so subtle that one may expect only a minor difference in terms of image quality. However, the two equations have two fundamentally different physical meanings. Even though the resulting images may look similar, one of them is better suited for image formation and the other is for image processing. To give readers a preview of the main claims of the paper, we summarize them as follows:
* **Gathering**: \(a_{\mathbf{x},m}\) is for image _processing_ such as denoising filter, interpolation filter, etc.
* **Scattering**: \(a_{\mathbf{u},m}\) is for image _formation_ such as modeling atmospheric turbulence.
It should be noted that the study of time variant systems goes back to classical time-variant filter banks, wavelets, or Kalman filtering/state-space applications [8, 9, 10, 11, 12, 13]. The study of spatially varying kernels as a sum of invariant ones has been performed in works such as [14, 15, 16]. In these works, particular aspects of the problem are analyzed as an end-use case. In this work, we focus on motivating the difference between the two approximations from the side of modeling and describing where each one is more appropriately applied.
## II Computational Aspects of Gathering and Scattering
### _Understanding Gathering_
Gathering is a decomposition using the _output_ coordinates. Suppose that we pass an image \(J(\mathbf{u})\) through a spatially varying kernel \(h(\mathbf{x},\mathbf{u})\). Assuming that the spatially varying kernel \(h(\mathbf{x},\mathbf{u})\) has a basis representation shown in (4), by substituting it into (2), we can show that
\[I(\mathbf{x}) \stackrel{{\text{by (2)}}}{{=}}\int_{-\infty}^{\infty}h(\mathbf{x},\mathbf{u})J(\mathbf{u})\ d\mathbf{u}\] \[\stackrel{{\text{by (4)}}}{{=}}\int_{-\infty}^{\infty}\left(\sum_{m=1}^{M}a_{\mathbf{x},m}\varphi_{m}(\mathbf{x}-\mathbf{u})\right)J(\mathbf{u})\ d\mathbf{u}\] \[=\sum_{m=1}^{M}a_{\mathbf{x},m}\left(\int_{-\infty}^{\infty}\varphi_{m}(\mathbf{x}-\mathbf{u})J(\mathbf{u})\ d\mathbf{u}\right).\]
Recognizing that the integral is a spatially invariant convolution, we can show that
\[I(\mathbf{x})=\underbrace{\sum_{m=1}^{M}a_{\mathbf{x},m}\underbrace{(\varphi_{m}\ast J)(\mathbf{x})}_{\text{invariant convolution}}}_{\text{weighted combination at output pixel }\mathbf{x}}. \tag{6}\]

In other words, the image is first filtered by each of the \(M\) spatially invariant kernels, and the filtered outputs are then combined pixel by pixel using the coefficients indexed by the output coordinate \(\mathbf{x}\).
What is **gathering**?
* We apply spatially invariant kernels first, and then combine the results with weights.
* Consistent with the convolution we learned in Oppenheim and Willsky [20]: "flip, shift, and integrate".
* Equivalent to the "convolution" in deep neural networks.
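A minimal NumPy/SciPy sketch of the gathering recipe above is shown below; the kernels and weight maps are arbitrary illustrative choices, not taken from the paper.

```python
# Gathering: convolve the image with each invariant basis kernel first, then
# combine the filtered outputs with pixelwise weights indexed by the OUTPUT
# coordinate.
import numpy as np
from scipy.signal import fftconvolve

def gather(J, kernels, weights):
    """J: (H, W) image; kernels: list of M (k, k) invariant PSFs;
    weights: (M, H, W) array of coefficients a_{x,m}."""
    out = np.zeros(J.shape, dtype=float)
    for phi_m, a_m in zip(kernels, weights):
        out += a_m * fftconvolve(J, phi_m, mode="same")  # weight AFTER filtering
    return out
```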
### _Understanding Scattering_
We now discuss the scattering equation. For now, we follow the same approach as we did for the gathering case by analyzing the computation. Substituting (5) into (2), we see that
\[I(\mathbf{x}) \stackrel{{\text{by (2)}}}{{=}}\int_{-\infty}^{\infty}h(\mathbf{x},\mathbf{u})J(\mathbf{u})\ d\mathbf{u}\] \[\stackrel{{\text{by (5)}}}{{=}}\int_{-\infty}^{\infty}\left(\sum_{m=1}^{M}a_{\mathbf{u},m}\varphi_{m}(\mathbf{x}-\mathbf{u})\right)J(\mathbf{u})\ d\mathbf{u}\] \[=\sum_{m=1}^{M}\int_{-\infty}^{\infty}\varphi_{m}(\mathbf{x}-\mathbf{u})\big{[}a_{\mathbf{u},m}J(\mathbf{u})\big{]}\ d\mathbf{u}. \tag{7}\]

That is, the input image is first weighted pixel by pixel by \(a_{\mathbf{u},m}\), each weighted image is then convolved with its spatially invariant kernel \(\varphi_{m}\), and the results are summed. Unlike gathering, the weights are indexed by the input coordinate.
We summarize our findings here:
What is **scattering?**
* We apply weighted averaging first, and then filter the weighted averages, and finally add.
* Consistent with how light propagates. See Goodman's _Fourier Optics_[22] and our next section.
* Equivalent to the "transposed convolution" in deep neural networks.
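The scattering counterpart of the earlier gathering sketch is shown below; again the kernels and weight maps are arbitrary illustrative choices.

```python
# Scattering: weight the image by coefficients indexed by the INPUT coordinate
# first, then convolve the weighted images and sum. The only change from the
# gathering sketch is the order of weighting and filtering.
import numpy as np
from scipy.signal import fftconvolve

def scatter(J, kernels, weights):
    """J: (H, W) image; kernels: list of M (k, k) invariant PSFs;
    weights: (M, H, W) array of coefficients a_{u,m}."""
    out = np.zeros(J.shape, dtype=float)
    for phi_m, a_m in zip(kernels, weights):
        out += fftconvolve(a_m * J, phi_m, mode="same")  # weight BEFORE filtering
    return out
```

For spatially varying weight maps the two sketches generally return different images, which the next subsection makes precise.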
### _Conditions for Equivalence_
After elaborating on the computations of gathering and scattering, we now explain the conditions under which the two are equivalent. As expected, the two are equivalent if and only if the underlying blur is spatially invariant.
The first observation is about the very first equation of convolution defined in (1). When comparing this equation to the operation illustrated in Figure 2, we actually made an implicit assumption about the interchangeability of \(J\) and \(h\):
\[\underbrace{\int h(\mathbf{x}-\mathbf{u})J(\mathbf{u})d\mathbf{u}}_{\text{ how scattering operates}}=\underbrace{\int h(\mathbf{u})J(\mathbf{x}-\mathbf{u})d\mathbf{u}}_{\text{ how gathering operates}}. \tag{8}\]
While this may appear subtle (and perhaps we always take it for granted), it is this interchangeability that allows us to shift-and-crop a region \(J(\mathbf{x}-\mathbf{u})\) from the image \(J(\mathbf{u})\) to carry out the gathering integration efficiently. In fact, the operation defined on the left hand side of (8) matches with how scattering operates, whereas the operation on the right hand side matches with how gathering operates. The interchangeability implies that the two operations are valid as long as the convolution is shift-invariant so that \(h(\mathbf{x},\mathbf{u})\) can be written as \(h(\mathbf{x}-\mathbf{u})\). When \(h\) is spatially varying, then we cannot interchange the order.
Let us look at the equations more carefully through the lens of matrices and vectors. Let \(\mathbf{H}\in\mathbb{R}^{N\times N}\) be the matrix representation of the spatially varying blur kernel \(h(\mathbf{x},\mathbf{u})\). We assume that there is a set of _circulant_ matrices \(\mathbf{H}_{1},\mathbf{H}_{2},\ldots,\mathbf{H}_{M}\) representing the set of \(M\) spatially invariant basis functions \(\{\varphi_{1},\varphi_{2},\ldots,\varphi_{M}\}\).
We consider two sets of diagonal matrices. For every index \(m\), we define
\[\mathbf{D}_{m}^{\mathbf{x}}=\text{diag}\left\{\begin{bmatrix}a_{\mathbf{x}_{1 },m}\\ \vdots\\ a_{\mathbf{x}_{N},m}\end{bmatrix}\right\},\ \ \mathbf{D}_{m}^{\mathbf{u}}=\text{diag} \left\{\begin{bmatrix}a_{\mathbf{u}_{1},m}\\ \vdots\\ a_{\mathbf{u}_{N},m}\end{bmatrix}\right\}\]
Then, the gathering and scattering equations are
\[(\text{Gathering}): \mathbf{H}^{\mathbf{x}}=\sum_{m=1}^{M}\mathbf{D}_{m}^{\mathbf{x }}\mathbf{H}_{m} \tag{9}\] \[(\text{Scattering}): \mathbf{H}^{\mathbf{u}}=\sum_{m=1}^{M}\mathbf{H}_{m}\mathbf{D}_ {m}^{\mathbf{u}} \tag{10}\]
In other words, the difference lies in how we order the diagonal matrices and the spatially invariant convolution matrices.
It is not difficult to show that the two constructions can never be the same unless \(\mathbf{D}_{m}^{\mathbf{x}}\) and \(\mathbf{D}_{m}^{\mathbf{u}}\) are identity matrices up to a scalar multiple. To see this, we consider the case where \(M=1\).
**Theorem 1**: _Let \(\mathbf{H}\in\mathbb{R}^{N\times N}\) be a square matrix. Let \(\mathbf{A}=\text{diag}[a_{1},\ldots,a_{N}]\) and \(\mathbf{B}=\text{diag}[b_{1},\ldots,b_{N}]\) be two diagonal matrices. Then, \(\mathbf{A}\mathbf{H}=\mathbf{H}\mathbf{B}\) if and only if \(\mathbf{A}=\mathbf{B}=\lambda\mathbf{I}\) for some constant \(\lambda\) where \(\mathbf{I}\) is the identity matrix._
We just need to write out the terms. For \(\mathbf{A}\mathbf{H}\), we can show that
\[\mathbf{A}\mathbf{H}=\begin{bmatrix}a_{1}h_{11}&a_{1}h_{12}&\ldots&a_{1}h_{1N }\\ a_{2}h_{21}&a_{2}h_{22}&\ldots&a_{2}h_{2N}\\ \vdots&\vdots&\ddots&\vdots\\ a_{N}h_{N1}&a_{N}h_{N2}&\ldots&a_{N}h_{NN}\end{bmatrix},\]
and for \(\mathbf{H}\mathbf{B}\), we can show that
\[\mathbf{H}\mathbf{B}=\begin{bmatrix}b_{1}h_{11}&b_{2}h_{12}&\ldots&b_{N}h_{1N }\\ b_{1}h_{21}&b_{2}h_{22}&\ldots&b_{N}h_{2N}\\ \vdots&\vdots&\ddots&\vdots\\ b_{1}h_{N1}&b_{2}h_{N2}&\ldots&b_{N}h_{NN}\end{bmatrix},\]
By comparing terms, we can see that the only possibility for \(\mathbf{A}\mathbf{H}=\mathbf{H}\mathbf{B}\) is to require \(\mathbf{A}=\mathbf{B}=\lambda\mathbf{I}\).
The result of the previous theorem implies that if we have a convolution matrix \(\mathbf{H}_{m}\) (which is circulant), for the scattering and gathering operations to be equivalent, we need
\[\mathbf{D}_{m}^{\mathbf{x}}\mathbf{H}_{m}=\mathbf{H}_{m}\mathbf{D}_{m}^{ \mathbf{u}},\quad\text{for all }m.\]
Theorem 1 asserts that we need \(\mathbf{D}_{m}^{\mathbf{x}}=\mathbf{D}_{m}^{\mathbf{u}}=\lambda\mathbf{I}\). But if \(\mathbf{D}_{m}^{\mathbf{x}}=\mathbf{D}_{m}^{\mathbf{u}}=\lambda\mathbf{I}\), then the underlying blur must be spatially invariant.
Another observation of Theorem 1 is that in general,
\[\sum_{m=1}^{M}\mathbf{D}_{m}^{\mathbf{x}}\mathbf{H}_{m}\neq\sum_{m=1}^{M} \mathbf{H}_{m}\mathbf{D}_{m}^{\mathbf{u}}. \tag{11}\]
Therefore, the gathering and scattering equations (4) and (5) are _mutually exclusive_. If we say that \(h(\mathbf{x},\mathbf{u})\) can be exactly represented by the gathering equation, then there will be an approximation error when representing \(h(\mathbf{x},\mathbf{u})\) using the scattering equation, and vice versa.
Under what **conditions** would scattering = gathering?
* When the underlying blur is spatially invariant.
* Scattering and gathering are mutually exclusive. We cannot simultaneously have (4) and (5) for a spatially varying blur. If one is the correct representation, the other will have approximation error.
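As a quick numerical sanity check of Theorem 1 (with an arbitrary 1D kernel and weights chosen only for illustration), the sketch below shows that a circulant convolution matrix commutes with a diagonal weighting in the scalar case but not otherwise.

```python
# Numerical illustration of Theorem 1: D H == H D only for scalar diagonals.
import numpy as np
from scipy.linalg import circulant

h = np.array([0.25, 0.5, 0.25, 0.0, 0.0])   # invariant kernel (first column of H)
H = circulant(h)                              # circulant convolution matrix
D = np.diag([1.0, 1.0, 0.0, 0.0, 0.0])        # a non-constant diagonal weighting

print(np.allclose(D @ H, H @ D))              # False: the two orderings differ
print(np.allclose((2 * np.eye(5)) @ H, H @ (2 * np.eye(5))))  # True: scalar case
```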
### _Normalization_
When performing a convolution, sometimes it is necessary to ensure that the image intensity is not amplified or attenuated due to an improper normalization. For example, in a spatially invariant blur, we almost always require that
\[\int h(\mathbf{u})d\mathbf{u}=1,\]
assuming that \(h(\mathbf{u})\geq 0\) for all \(\mathbf{u}\). Otherwise, if the integral is less than unity, the resulting (convolved) image will appear to be dimmer. Translated to matrices and vectors, this is equivalent to \(\mathbf{H}\mathbf{1}=\mathbf{1}\) for an all-one vector \(\mathbf{1}\), assuming that \(\mathbf{H}\) is circulant.
Suppose that we have a sequence of spatially invariant blurs \(\mathbf{H}_{1},\mathbf{H}_{2},\ldots,\mathbf{H}_{M}\) satisfying the property that \(\mathbf{H}_{m}\mathbf{1}=\mathbf{1}\) for all \(m\). We want the diagonal matrices \(\mathbf{D}_{1}^{\mathbf{x}},\mathbf{D}_{2}^{\mathbf{x}},\ldots,\mathbf{D}_{M} ^{\mathbf{x}}\) to be defined in such a way that the gathering equation (9) will give us
\[\mathbf{1}\quad\stackrel{{\text{(we want)}}}{{=}}\quad\mathbf{H}^{ \mathbf{x}}\mathbf{1}=\left(\sum_{m=1}^{M}\mathbf{D}_{m}^{\mathbf{x}}\mathbf{H }_{m}\right)\mathbf{1}=\sum_{m=1}^{M}\mathbf{D}_{m}^{\mathbf{x}}\mathbf{1}.\]
Therefore, as long as we can ensure that the \(M\) diagonal matrices \(\{\mathbf{D}_{m}^{\mathbf{x}}\,|\,m=1,\ldots,M\}\) sum to the identity matrix (equivalently, that \(\sum_{m=1}^{M}\mathbf{D}_{m}^{\mathbf{x}}\mathbf{1}=\mathbf{1}\)), we are guaranteed that the rows of \(\mathbf{H}^{\mathbf{x}}\) sum to one. Converting this into the basis representation, it is equivalent to asking
\[\sum_{m=1}^{M}a_{\mathbf{x},m}=1,\qquad\text{for all }\mathbf{x}, \tag{12}\]
which is reasonably easy to satisfy. For implementation, if the rows of \(\mathbf{H}^{\mathbf{x}}\) do not sum to unity, i.e., \(\mathbf{H}^{\mathbf{x}}\mathbf{1}\neq\mathbf{1}\), the simplest approach is to define a diagonal matrix \(\mathbf{D}\) such that \(\mathbf{D}^{-1}\mathbf{H}^{\mathbf{x}}\mathbf{1}=\mathbf{1}\). From the derivations above, it is clear that the diagonal matrix should have the elements
\[\mathbf{D}=\text{diag}\left\{\sum_{m=1}^{M}\mathbf{D}_{m}^{\mathbf{x}}\mathbf{ H}_{m}\mathbf{1}\right\}. \tag{13}\]
Therefore, the overall operation applied to an image is
\[\widehat{\mathbf{I}}=\text{diag}\left\{\sum_{m=1}^{M}\mathbf{D}_{m}^{\mathbf{ x}}\mathbf{H}_{m}\mathbf{1}\right\}^{-1}\left(\sum_{m=1}^{M}\mathbf{D}_{m}^{ \mathbf{x}}\mathbf{H}_{m}\mathbf{J}\right), \tag{14}\]
where \(\mathbf{J}\in\mathbb{R}^{N}\) is the vector representation of the input, and \(\widehat{\mathbf{I}}\in\mathbb{R}^{N}\) is the output.
The normalization of the scattering equation is more complicated because the diagonal matrices do not commute with the spatially invariant blurs, and so we are not able to simplify the equation:
\[\mathbf{1}\quad\stackrel{{\text{(we want)}}}{{=}}\quad\mathbf{H}^{ \mathbf{u}}\mathbf{1}=\sum_{m=1}^{M}\mathbf{H}_{m}(\mathbf{D}_{m}^{\mathbf{u} }\mathbf{1}).\]
The only workaround is to define a diagonal matrix \(\mathbf{D}\) such that \(\mathbf{D}^{-1}\mathbf{H}^{\mathbf{u}}\mathbf{1}=\mathbf{1}\). This would require that
\[\mathbf{D}=\text{diag}\left\{\sum_{m=1}^{M}\mathbf{H}_{m}\mathbf{D}_{m}^{ \mathbf{u}}\mathbf{1}\right\}. \tag{15}\]
In other words, when computing the scattering equation and if normalization is required, then we need to compute the set of convolutions _twice_: once for the input, and once for the normalization constant:
\[\widehat{\mathbf{I}}=\text{diag}\left\{\sum_{m=1}^{M}\mathbf{H}_{m}\mathbf{D}_{m}^{\mathbf{u}}\mathbf{1}\right\}^{-1}\left(\sum_{m=1}^{M}\mathbf{H}_{m}\mathbf{D}_{m}^{\mathbf{u}}\mathbf{J}\right). \tag{16}\]
To summarize the normalization:
If **normalization** is needed, then
* For gathering: Either ensure \(\sum_{m=1}^{M}a_{\mathbf{x},m}=1\) for all \(\mathbf{x}\), or perform the calculation in (14).
* For scattering: Perform the calculation in (16). The cost is twice the number of convolutions compared to gathering.
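A sketch of both normalizations is given below, assuming nonnegative kernels and weights so that the normalizer is strictly positive; it mirrors (14) and (16), with the matrix products replaced by convolutions, and the kernels and weights are illustrative inputs supplied by the caller.

```python
# Normalization by the operator's response to an all-ones image: one extra
# pass of convolutions is needed in the scattering case, as in (16).
import numpy as np
from scipy.signal import fftconvolve

def normalized_apply(J, kernels, weights, mode):
    """mode = 'gather' follows (14); mode = 'scatter' follows (16)."""
    ones = np.ones_like(J, dtype=float)
    out = np.zeros_like(J, dtype=float)
    norm = np.zeros_like(J, dtype=float)
    for phi_m, a_m in zip(kernels, weights):
        if mode == "gather":
            out += a_m * fftconvolve(J, phi_m, mode="same")
            norm += a_m * fftconvolve(ones, phi_m, mode="same")
        else:  # scatter: the normalizer needs its own set of convolutions
            out += fftconvolve(a_m * J, phi_m, mode="same")
            norm += fftconvolve(a_m * ones, phi_m, mode="same")
    return out / norm
```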
## III Origin of the Scattering Equation
In this section, we explain the origin of the scattering equation from a physics point of view. To make our discussions concrete, the running example we use is a point source propagating a random medium as shown in Figure 5.
When we discuss optics, the coordinate system can sometimes be confusing. For the purpose of our discussion, we follow the coordinate system defined in Figure 5. The object plane uses the coordinate \(\mathbf{u}\in\mathbb{R}^{2}\). We can think of it as the _input_ coordinate. As light propagates through the random medium, it reaches the aperture of a lens. The coordinate on the lens is denoted by \(\boldsymbol{\rho}\in\mathbb{R}^{2}\). When the light reaches the image plane, the coordinate becomes \(\mathbf{x}\in\mathbb{R}^{2}\), which is also the _output_ coordinate.
Deriving the PSF equation from the Rayleigh-Sommerfeld integral will be too lengthy for our paper. Therefore, we skip the derivation and refer the readers to [22, Ch. 4 & 5]. What we shall do here is to highlight the four components of Figure 5.
**Source to Aperture**: The propagation of a point source from the object plane to the aperture, in the absence of the random medium, is characterized by the free-space propagation. The electromagnetic field defined upon the aperture is given by [22, Eq 5-25]
\[U(\mathbf{u},\boldsymbol{\rho})=\frac{1}{j\lambda z_{1}}\exp\left\{j\frac{k}{2 z_{1}}|\mathbf{u}-\boldsymbol{\rho}|^{2}\right\}, \tag{17}\]
where \(k=2\pi/\lambda\) is the wave number, and \(z_{1}\) is the distance from the source to the aperture. The notation \(|\mathbf{u}-\boldsymbol{\rho}|\) denotes the Euclidean distance between the two coordinates \(\mathbf{u}\) and \(\boldsymbol{\rho}\). This equation describes a parabolic wave propagating outward from \(\mathbf{u}\). The farther apart \(\mathbf{u}\) and \(\boldsymbol{\rho}\) are, the weaker the field \(U(\mathbf{u},\boldsymbol{\rho})\) will become.
Fig. 5: The coordinate system of a typical optical system, in the presence of a random medium.

**Aperture and lens**: Right at the lens, the incident field will be imparted by the pupil function of the lens and its phase response. For a lens with a focal length of \(f\), the field at the exit of the aperture is [22, Eq 5-26]
\[U^{\prime}(\mathbf{u},\boldsymbol{\rho})=U(\mathbf{u},\boldsymbol{\rho})P( \boldsymbol{\rho})\exp\left\{j\frac{k}{2f}|\boldsymbol{\rho}|^{2}\right\}, \tag{18}\]
where \(P(\boldsymbol{\rho})\) is the pupil function, typically chosen to be a circular indicator function.
**Aperture to image**: When the incident field exits the lens, it propagates via Fresnel diffraction to the image plane. Referring to [22, Eq 5-27], we can show that
\[\underbrace{U^{\prime\prime}(\mathbf{x},\mathbf{u})}_{=h(\mathbf{x},\mathbf{u })}=\frac{1}{j\lambda z_{2}}\int_{-\infty}^{\infty}U^{\prime}(\mathbf{u}, \boldsymbol{\rho})\exp\left\{j\frac{k}{2z_{2}}|\mathbf{x}-\boldsymbol{\rho}|^ {2}\right\}\,d\boldsymbol{\rho}. \tag{19}\]
Notice that the final electromagnetic field \(U^{\prime\prime}(\mathbf{x},\mathbf{u})\) arriving at the image plane is originated from a point source. As such, \(U^{\prime\prime}(\mathbf{x},\mathbf{u})\) is the point spread function \(h(\mathbf{x},\mathbf{u})\).
The PSF \(h(\mathbf{x},\mathbf{u})\) can be expressed (with some approximation) as [22, Eq 5-36]:
\[h(\mathbf{x},\mathbf{u})=\underbrace{\frac{1}{\lambda^{2}z_{1}z_{2}}\int_{- \infty}^{\infty}P(\boldsymbol{\rho})\exp\left\{-j\frac{k}{z_{2}}(\mathbf{x}- S\mathbf{u})^{T}\boldsymbol{\rho}\right\}d\boldsymbol{\rho}}_{=h(\mathbf{x}- \mathbf{u})\text{ if }S=1}, \tag{20}\]
where \(S=-z_{2}/z_{1}\) is the magnification factor. If, for simplicity, we assume \(z_{1}=-z_{2}\) so that \(S=1\), then \(h(\mathbf{x},\mathbf{u})\) is completely characterized by the coordinate difference \(\mathbf{x}-\mathbf{u}\). This will give us \(h(\mathbf{x},\mathbf{u})=h(\mathbf{x}-\mathbf{u})\), and so \(h(\mathbf{x},\mathbf{u})\) represents a spatially _invariant_ kernel.
**Random medium**. The fourth element we need to discuss, which is also the source of the problem, is the random medium. The random medium introduces a random amplitude and phase distortion as
\[R_{\mathbf{u}}(\boldsymbol{\rho})=\underbrace{A_{\mathbf{u}}(\boldsymbol{\rho })}_{\text{amplitude}}\,\times\,\underbrace{\exp\left\{-j\phi_{\mathbf{u}}( \boldsymbol{\rho})\right\}}_{\text{phase}}. \tag{21}\]
Notice that in this definition, the distortion has a coordinate pair \((\mathbf{u},\boldsymbol{\rho})\). In the case of a random medium, the position of the source and aperture will parameterize the slice of the atmosphere the wave can be said to propagate through. Therefore, the distortion must be defined by these two coordinates. Taking the position of our imaging system as our reference, the distortions can be completely parameterized by \(\mathbf{u}\).
The impact of the random medium takes place at source-to-aperture, i.e., appending (21) to (17). Thus, the electromagnetic field incident upon the aperture is
\[U(\mathbf{u},\boldsymbol{\rho})=\frac{1}{j\lambda z_{1}}A_{ \mathbf{u}}(\boldsymbol{\rho})\exp\left\{-j\phi_{\mathbf{u}}(\boldsymbol{\rho })\right\}\] \[\times\exp\left\{j\frac{k}{2z_{1}}|\mathbf{u}-\boldsymbol{\rho}|^{ 2}\right\}.\]
Consequently, the point spread function is
\[h(\mathbf{x},\mathbf{u})=\frac{1}{\lambda^{2}z_{1}z_{2}}\int_{- \infty}^{\infty}\underbrace{A_{\mathbf{u}}(\boldsymbol{\rho})\exp\left\{-j \phi_{\mathbf{u}}(\boldsymbol{\rho})\right\}P(\boldsymbol{\rho})}_{=g_{ \mathbf{u}}(\boldsymbol{\rho})}\] \[\times\exp\left\{-j\frac{k}{z_{2}}(\mathbf{x}-\mathbf{u})^{T} \boldsymbol{\rho}\right\}d\boldsymbol{\rho} \tag{22}\]
where again we assumed \(S=1\). Defining the constant \(\kappa=1/(\lambda^{2}z_{1}z_{2})\), we see that (22) takes the form of
\[h(\mathbf{x},\mathbf{u})=\kappa\int_{-\infty}^{\infty}g_{\mathbf{u}}( \boldsymbol{\rho})e^{-j2\pi\xi^{T}\boldsymbol{\rho}}d\boldsymbol{\rho},\]
where we defined \(\boldsymbol{\xi}=(\mathbf{x}-\mathbf{u})/(\lambda z_{2})\) and
\[g_{\mathbf{u}}(\boldsymbol{\rho})\stackrel{{\text{def}}}{{=}}A_{ \mathbf{u}}(\boldsymbol{\rho})\exp\left\{-j\phi_{\mathbf{u}}(\boldsymbol{\rho })\right\}P(\boldsymbol{\rho}).\]
Therefore, (22) can be seen as the Fourier transform of the random medium and the pupil function via
\[h(\mathbf{x},\mathbf{u})=\text{Fourier}\Big{\{}g_{\mathbf{u}}(\boldsymbol{\rho})\Big{\}}\Big{|}_{\frac{\mathbf{x}-\mathbf{u}}{\lambda z_{2}}}, \tag{23}\]
where we specify that the transform is evaluated at the coordinate \((\mathbf{x}-\mathbf{u})/(\lambda z_{2})\).
At this point, it should become clear that the PSF \(h(\mathbf{x},\mathbf{u})\) must be a function of \(\mathbf{x}-\mathbf{u}\). However, since \(g_{\mathbf{u}}(\boldsymbol{\rho})\) is also indexed by \(\mathbf{u}\), the resulting PSF \(h(\mathbf{x},\mathbf{u})\) should inherit the index \(\mathbf{u}\). This will give us
\[h(\mathbf{x},\mathbf{u}) =\text{some function of }(\mathbf{x}-\mathbf{u})\text{, index by }\mathbf{u},\] \[=\sum_{m=1}^{M}a_{\mathbf{u},m}\varphi_{m}(\mathbf{x}-\mathbf{u}). \tag{24}\]
where in the last step we use the linear combination of basis functions as the model for such \(h(\mathbf{x},\mathbf{u})\). The basis function \(\varphi_{m}\) here captures the spatial invariance, whereas the coefficients \(a_{\mathbf{u},m}\) capture the spatially varying indices.
The implication of our derivation is important: it explains why optics simulations, such as imaging through a random medium, must follow the _scattering_ equation if we choose to represent the PSF using a set of spatially invariant basis functions. This is due to the fact that nature relies on the _source_ location to determine the response of the system. Though our derivation relied upon a random medium, this concept extends to cases such as spherical aberration, or defocus blur in a scene with objects beyond the depth of field, both of which may be formulated as a phase error. In both of these examples, the source location is what parameterizes the phase error.
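As a toy, hedged illustration of (23) on a discrete grid (not a calibrated turbulence simulation), one can form \(g_{\mathbf{u}}\) from a circular pupil and a stand-in phase screen for a single source position \(\mathbf{u}\) and take its 2D FFT; the grid size, aperture radius, and random screen below are arbitrary assumptions.

```python
# Toy illustration of (23): the field PSF for one source position u is, up to
# scale, the Fourier transform of g_u(rho) = A_u(rho) exp(-j phi_u(rho)) P(rho).
import numpy as np

N = 256
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
P = ((x**2 + y**2) <= (N // 8) ** 2).astype(float)   # circular pupil P(rho)

rng = np.random.default_rng(0)
phi_u = rng.standard_normal((N, N))                  # stand-in phase screen for this u
g_u = P * np.exp(-1j * phi_u)                        # A_u = 1 for simplicity

h_field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(g_u)))   # field PSF, per (23)
psf_intensity = np.abs(h_field) ** 2     # squared magnitude (an assumption beyond (23),
psf_intensity /= psf_intensity.sum()     # appropriate for incoherent/intensity imaging)
```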
## IV Do These Actually Matter?
The question to ask now is: Given the gathering equation and the scattering equation, does it really matter if we choose the "wrong" one? The goal of this section is to answer the question through a few examples.
### _Scattering Works for Optical Simulation_
In the first example, we consider the problem of simulating the resulting image given a specific light source. The light source \(J(\mathbf{u})\) we consider here consists of two delta functions:
\[J(\mathbf{u})=\delta(\mathbf{u}+\Delta)+\delta(\mathbf{u}-\Delta),\]
where \(\Delta\) is a small displacement from the origin. For convenience, we consider a plane with two halves. The separation is located at the origin; any pixel that is on the left is denoted as "\(\mathbf{u}<0\)" (this notation means that the _horizontal_ component of \(\mathbf{u}\) is less
than zero). Similarly, any pixel that is on the right is denoted as "\(\mathbf{u}\geq 0\)". Therefore, \(\delta(\mathbf{u}+\Delta)\) is on the left, and \(\delta(\mathbf{u}-\Delta)\) is on the right.
Imagine that in front of the light source, we put two transparent sheets with different phase profiles (which can be engineered using a meta-material.) This will give us a spatially varying blur kernel \(h(\mathbf{x},\mathbf{u})\), and for simplicity, if light is emitted on the left hand side, then the blur uses a smaller radius; if the light is emitted on the right hand side, then the blur uses a larger radius. Thus, we write
\[h(\mathbf{x},\mathbf{u})=\begin{cases}\frac{1}{2\pi\sigma_{1}^{2}}\exp\left\{-\frac{\left\|\mathbf{x}-\mathbf{u}\right\|^{2}}{2\sigma_{1}^{2}}\right\}\overset{\text{def}}{=}\varphi_{1}(\mathbf{x}-\mathbf{u}),&\mathbf{u}<0,\\ \frac{1}{2\pi\sigma_{2}^{2}}\exp\left\{-\frac{\left\|\mathbf{x}-\mathbf{u}\right\|^{2}}{2\sigma_{2}^{2}}\right\}\overset{\text{def}}{=}\varphi_{2}(\mathbf{x}-\mathbf{u}),&\mathbf{u}\geq 0,\end{cases} \tag{25}\]
where \(\sigma_{1}<\sigma_{2}\). Figure 6(b) illustrates these spatially varying blur kernels. For visualization purposes, we show only the PSFs at a grid of points. In reality, the PSF is defined continuously over \(\mathbf{u}\).
Before we do any calculus, we can perform a thought experiment. Figure 7 illustrates a hypothetical experimental setup. On the object plane there are two points emitting light through a meta surface with two different phase profiles. As the light propagates outward from the source through diffraction, the electromagnetic fields superimpose over each other. When the light reaches the aperture, the two diffraction patterns overlap. Therefore, the resulting image \(I(\mathbf{x})\), without any calculation, should be one big diffraction pattern. It is impossible to obtain a sharp cutoff and two diffraction patterns.
With this thought experiment in mind, we can now talk about the gathering and the scattering equations. For the gathering equation, since the PSF has a simple binary structure, we can define it as
\[h^{\text{gather}}(\mathbf{x},\mathbf{u})=\underbrace{\mathbb{I}\{\mathbf{x}\in\text{left}\}}_{=a_{\mathbf{x},1}}\times\varphi_{1}(\mathbf{x}-\mathbf{u})+\underbrace{\mathbb{I}\{\mathbf{x}\in\text{right}\}}_{=a_{\mathbf{x},2}}\times\varphi_{2}(\mathbf{x}-\mathbf{u}), \tag{26}\]
where \(\mathbb{I}\{\cdot\}\) is an indicator function. For scattering, the equation takes a similar form
\[h^{\text{scatter}}(\mathbf{x},\mathbf{u})=\underbrace{\mathbb{I}\{\mathbf{u}\in\text{left}\}}_{=a_{\mathbf{u},1}}\times\varphi_{1}(\mathbf{x}-\mathbf{u})+\underbrace{\mathbb{I}\{\mathbf{u}\in\text{right}\}}_{=a_{\mathbf{u},2}}\times\varphi_{2}(\mathbf{x}-\mathbf{u}), \tag{27}\]
where we replaced \(a_{\mathbf{x},m}\) by \(a_{\mathbf{u},m}\).
By comparing the gathering equation (26) and the scattering equation (27) with the original spatially varying \(h(\mathbf{x},\mathbf{u})\) in (25), it is clear that only the scattering equation will match with the original \(h(\mathbf{x},\mathbf{u})\) because they are both indexed by \(\mathbf{u}\). However, to confirm that this is indeed the case, it would be helpful to look at the resulting image, as illustrated in Figure 8. Let us explain how these figures are obtained.
The resulting image of the gathering equation can be shown as follows:
\[I^{\text{gather}}(\mathbf{x})=\int\left(a_{\mathbf{x},1}\varphi_{1}(\mathbf{x }-\mathbf{u})+a_{\mathbf{x},2}\varphi_{2}(\mathbf{x}-\mathbf{u})\right)J( \mathbf{u})\,d\mathbf{u}.\]
Since \(a_{\mathbf{x},1}=1\) if \(\mathbf{x}<0\) and \(a_{\mathbf{x},1}=0\) if \(\mathbf{x}\geq 0\) (similarly for \(a_{\mathbf{x},2}\)), the coefficients \(a_{\mathbf{x},1}\) and \(a_{\mathbf{x},2}\) will create two cases as
\[I^{\text{gather}}(\mathbf{x})\] \[=\begin{cases}\int_{-\infty}^{\infty}\varphi_{1}(\mathbf{x}- \mathbf{u})J(\mathbf{u})\,d\mathbf{u},&\mathbf{x}<0,\\ \int_{-\infty}^{\infty}\varphi_{2}(\mathbf{x}-\mathbf{u})J(\mathbf{u})\,d \mathbf{u},&\mathbf{x}\geq 0.\end{cases}\] \[=\begin{cases}\int_{-\infty}^{\infty}\varphi_{1}(\mathbf{x}- \mathbf{u})(\delta(\mathbf{u}+\Delta)+\delta(\mathbf{u}-\Delta))\,d\mathbf{u},& \mathbf{x}<0,\\ \int_{-\infty}^{\infty}\varphi_{2}(\mathbf{x}-\mathbf{u})(\delta(\mathbf{u}+ \Delta)+\delta(\mathbf{u}-\Delta))\,d\mathbf{u},&\mathbf{x}\geq 0.\end{cases}\] \[=\begin{cases}\varphi_{1}(\mathbf{x}+\Delta)+\varphi_{1}( \mathbf{x}-\Delta),&\mathbf{x}<0,\\ \varphi_{2}(\mathbf{x}+\Delta)+\varphi_{2}(\mathbf{x}-\Delta),&\mathbf{x}\geq 0.\end{cases} \tag{28}\]
If we draw \(I^{\text{gather}}(\mathbf{x})\), we will obtain the figure shown in Figure 8(a).
Fig. 8: Comparison between gathering and scattering for the setup in Figure 7. Notice that for this optical experiment, we should expect the resulting image to contain one big diffraction pattern. However, only the scattering equation demonstrates this.
Fig. 6: Visualization of an example with (a) \(J(\mathbf{u})\) and (b) a grid of spatially varying blur kernels.
Fig. 7: Thought experiment with two points on the object plane, diffracting through two different metasurfaces. The resulting image should, in principle, be one superimposed diffraction pattern.
For scattering, we can carry out the same derivation and show that
\[I^{\text{scatter}}(\mathbf{x})=\int_{-\infty}^{\infty}\left(a_{\mathbf{u},1}\varphi_{1}(\mathbf{x}-\mathbf{u})+a_{\mathbf{u},2}\varphi_{2}(\mathbf{x}-\mathbf{u})\right)J(\mathbf{u})\,d\mathbf{u}\] \[=\int_{-\infty}^{\infty}\varphi_{1}(\mathbf{x}-\mathbf{u})[a_{\mathbf{u},1}J(\mathbf{u})]\,d\mathbf{u}+\int_{-\infty}^{\infty}\varphi_{2}(\mathbf{x}-\mathbf{u})[a_{\mathbf{u},2}J(\mathbf{u})]\,d\mathbf{u}\] \[=\int_{-\infty}^{\infty}\varphi_{1}(\mathbf{x}-\mathbf{u})\delta(\mathbf{u}+\Delta)\,d\mathbf{u}+\int_{-\infty}^{\infty}\varphi_{2}(\mathbf{x}-\mathbf{u})\delta(\mathbf{u}-\Delta)\,d\mathbf{u}\] \[=\varphi_{1}(\mathbf{x}+\Delta)+\varphi_{2}(\mathbf{x}-\Delta), \tag{29}\]
where we used the fact that \(a_{\mathbf{u},1}=\mathbb{I}\{\mathbf{u}\in\text{left}\}\) and so \(a_{\mathbf{u},1}\delta(\mathbf{u}+\Delta)=\delta(\mathbf{u}+\Delta)\) whereas \(a_{\mathbf{u},2}\delta(\mathbf{u}+\Delta)=0\). Similarly, we have \(a_{\mathbf{u},1}\delta(\mathbf{u}-\Delta)=0\) and \(a_{\mathbf{u},2}\delta(\mathbf{u}-\Delta)=\delta(\mathbf{u}-\Delta)\).
If we draw the resulting image, we will obtain the figure shown in Figure 8(b). This is consistent with what we expect in Figure 7 and the theoretical derivation in Section III.
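To make the difference concrete, the following minimal 1-D Python sketch implements both pipelines for the two-delta-source setup; the grid, the displacement \(\Delta\), and the Gaussian widths are illustrative values, not the ones used to produce Figure 8.

```python
# Minimal 1-D numerical sketch of the two-delta-source example (Section IV-A).
# The grid, Delta, and the widths sigma1 < sigma2 are illustrative choices.
import numpy as np

n = 1001
x = np.linspace(-5.0, 5.0, n)          # spatial axis (object/image plane)
dx = x[1] - x[0]
Delta = 1.5
sigma1, sigma2 = 0.2, 0.6

# Light source: two (discretized) delta functions at -Delta and +Delta.
J = np.zeros(n)
J[np.argmin(np.abs(x + Delta))] = 1.0 / dx
J[np.argmin(np.abs(x - Delta))] = 1.0 / dx

def gauss(t, s):
    return np.exp(-t**2 / (2 * s**2)) / np.sqrt(2 * np.pi * s**2)

def conv(signal, s):
    """Convolve `signal` with a Gaussian of width s (a spatially invariant blur)."""
    kernel = gauss(x, s) * dx
    return np.convolve(signal, kernel, mode="same")

left = (x < 0).astype(float)           # left half-plane mask
right = 1.0 - left

# Gathering: filter the full source with each basis kernel, then mask by OUTPUT pixel x.
I_gather = left * conv(J, sigma1) + right * conv(J, sigma2)

# Scattering: mask the source by INPUT pixel u first, then filter each piece and sum.
I_scatter = conv(left * J, sigma1) + conv(right * J, sigma2)

# The scattering image varies smoothly across x = 0 (one superimposed pattern),
# while the gathering image shows an artificial cut at the boundary.
mid = slice(n // 2 - 5, n // 2 + 5)
print("gathering near x=0 :", np.round(I_gather[mid], 4))
print("scattering near x=0:", np.round(I_scatter[mid], 4))
```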
### _Gathering Works for Image Filtering_
In the second example, we consider the problem of _image filtering_. The scenario is that we are given a noisy image \(J(\mathbf{u})\) that contains two regions:
\[J(\mathbf{u}) =\begin{cases}\theta_{1}+\text{Gauss}(0,\sigma_{1}^{2}),& \mathbf{u}<0,\\ \theta_{2}+\text{Gauss}(0,\sigma_{2}^{2}),&\mathbf{u}\geq 0.\end{cases}\] \[=(\theta_{1}+W_{1}(\mathbf{u}))\times(1-\text{Step}(\mathbf{u}))\] \[\qquad\qquad\qquad+(\theta_{2}+W_{2}(\mathbf{u}))\times\text{Step }(\mathbf{u}) \tag{30}\]
with two signal levels \(\theta_{1}\) and \(\theta_{2}\), and two noise standard deviations \(\sigma_{1}\) and \(\sigma_{2}\) such that \(\sigma_{1}>\sigma_{2}\). In this equation, \(W_{1}(\mathbf{u})\sim\text{Gauss}(0,\sigma_{1}^{2})\) and \(W_{2}(\mathbf{u})\sim\text{Gauss}(0,\sigma_{2}^{2})\) denote the Gaussian noise. The function \(\text{Step}(\mathbf{u})\) denotes the horizontal step function where \(\text{Step}(\mathbf{u})=1\) for any \(\mathbf{u}\geq 0\), i.e., residing on the right, and \(\text{Step}(\mathbf{u})=0\) for any \(\mathbf{u}<0\). For illustration, we show in Figure 9(a) the case where \(\theta_{1}=0.8\), \(\theta_{2}=0.2\), \(\sigma_{1}=0.1\) and \(\sigma_{2}=0.02\).
To denoise this image, we consider the simplest approach assuming that we _knew_ the partition of the two regions. Suppose that we want to denoise the left side. Since we know that the noise is stronger, we shall apply a stronger filter. As illustrated in Figure 10, for this filter to be effective along the boundary, we should apply a mask _after_ the filtering is done.
The spatially varying filter we propose here takes the form
\[h(\mathbf{x},\mathbf{u})=\begin{cases}\frac{1}{\sqrt{2\pi s_{1}^{2}}}\exp \left\{-\frac{\|\mathbf{x}-\mathbf{u}\|^{2}}{2s_{1}^{2}}\right\}\overset{ \text{def}}{=}\varphi_{1}(\mathbf{x}-\mathbf{u}),&\mathbf{x}<0,\\ \frac{1}{\sqrt{2\pi s_{2}^{2}}}\exp\left\{-\frac{\|\mathbf{x}-\mathbf{u}\|^{2} }{2s_{2}^{2}}\right\}\overset{\text{def}}{=}\varphi_{2}(\mathbf{x}-\mathbf{u }),&\mathbf{x}\geq 0,\end{cases}\]
where we assume that \(s_{1}>s_{2}\). We are careful about the index in this equation, remarking that the conditions are applied to \(\mathbf{x}\) instead of \(\mathbf{u}\). We will illustrate what will happen if the conditions are applied to \(\mathbf{u}\).
The gathering and the scattering equation for this example are identical to those in (26) and (27). Most importantly, the coefficients \(a_{\mathbf{x},m}\) and \(a_{\mathbf{u},m}\) are binary masks indicating whether the pixel \(\mathbf{x}\) (or \(\mathbf{u}\)) is on the left / right hand side.
The resulting images \(I^{\text{gather}}(\mathbf{x})\) and \(I^{\text{scatter}}(\mathbf{x})\) follow a similar derivation as in (28) and (29). For the gathering equation, we recognize that
\[I^{\text{gather}}(\mathbf{x})=\int_{-\infty}^{\infty}\left(a_{ \mathbf{x},1}\varphi_{1}(\mathbf{x}-\mathbf{u})+a_{\mathbf{x},2}\varphi_{2}( \mathbf{x}-\mathbf{u})\right)J(\mathbf{u})\,d\mathbf{u}\] \[=a_{\mathbf{x},1}\int_{-\infty}^{\infty}\varphi_{1}(\mathbf{x}- \mathbf{u})\left\{\left[\theta_{1}+W_{1}(\mathbf{u})\right](1-\text{Step}( \mathbf{u}))\right.\] \[\qquad\qquad\qquad\qquad+\left[\theta_{2}+W_{2}(\mathbf{u}) \right]\text{Step}(\mathbf{u})\right\}d\mathbf{u}\] \[\quad+a_{\mathbf{x},2}\int_{-\infty}^{\infty}\varphi_{2}( \mathbf{x}-\mathbf{u})\left\{\left[\theta_{1}+W_{1}(\mathbf{u})\right](1-\text{ Step}(\mathbf{u}))\right.\] \[\qquad\qquad\qquad\qquad\qquad+\left[\theta_{2}+W_{2}(\mathbf{u })\right]\text{Step}(\mathbf{u})\right\}d\mathbf{u}\] \[=\begin{cases}[((\theta_{1}+W_{1}(\mathbf{u}))(1-\text{Step}( \mathbf{u}))\] \[\qquad\qquad\qquad+(\theta_{2}+W_{2}(\mathbf{u}))\text{Step}(\mathbf{u})) \otimes\varphi_{1}(\mathbf{u})](\mathbf{x}),&\mathbf{x}<0,\\ [((\theta_{1}+W_{1}(\mathbf{u}))(1-\text{Step}(\mathbf{u}))\] \[\qquad\qquad\qquad\qquad\left.+(\theta_{2}+W_{2}(\mathbf{u}))\text{Step}( \mathbf{u}))\otimes\varphi_{2}(\mathbf{u})](\mathbf{x}),&\mathbf{x}\geq 0,\end{cases} \tag{31}\]
where the last equality holds because \(a_{\mathbf{x},1}\) and \(a_{\mathbf{x},2}\) are indicator functions.
Fig. 10: Thought experiment with two noisy half-planes. As we perform the denoising step, we would hope that the sharp boundary is preserved.
Fig. 9: Thought experiment of two noisy regions in an image. To denoise this image, ideally we would want to apply two different filters with a sharp boundary at the transition.
For the scattering equation, we recognize that
\[I^{\text{scatter}}(\mathbf{x})=\int_{-\infty}^{\infty}\left(a_{\mathbf{u},1}\varphi_{1}(\mathbf{x}-\mathbf{u})+a_{\mathbf{u},2}\varphi_{2}(\mathbf{x}-\mathbf{u})\right)J(\mathbf{u})\,d\mathbf{u}\] \[\quad=\int_{-\infty}^{\infty}a_{\mathbf{u},1}\varphi_{1}(\mathbf{x}-\mathbf{u})[\theta_{1}+W_{1}(\mathbf{u})](1-\text{Step}(\mathbf{u}))\,d\mathbf{u}\] \[\quad\quad+\int_{-\infty}^{\infty}a_{\mathbf{u},2}\varphi_{2}(\mathbf{x}-\mathbf{u})[\theta_{2}+W_{2}(\mathbf{u})]\text{Step}(\mathbf{u})\,d\mathbf{u},\]
where we used the facts \(a_{\mathbf{u},1}=(1-\text{Step}(\mathbf{u}))\) and \(a_{\mathbf{u},2}=\text{Step}(\mathbf{u})\), and hence \(a_{\mathbf{u},1}\text{Step}(\mathbf{u})=0\) and \(a_{\mathbf{u},2}(1-\text{Step}(\mathbf{u}))=0\). As a result, we can simplify the above equations as
\[I^{\text{scatter}}(\mathbf{x})=[((\theta_{1}+W_{1}(\mathbf{u}))(1-\text{Step}(\mathbf{u})))\otimes\varphi_{1}(\mathbf{u})](\mathbf{x})\\ +[((\theta_{2}+W_{2}(\mathbf{u}))\text{Step}(\mathbf{u}))\otimes\varphi_{2}(\mathbf{u})](\mathbf{x}). \tag{32}\]
So, the result is the _sum_ of two convolutions applied to masked copies of the input image \(J(\mathbf{u})\).
We visualize the results in Figure 11. As we can see here, while both of them can offer denoising to some extent, the gathering approach handles the boundary much better because the masking is performed _after_ the filtering. If the masking is performed before the filtering, then (32) tells us that we are summing two filtered versions of the two masked halves of the image, so content from each side bleeds across the boundary. Therefore, the edge is blurred.
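The same comparison can be reproduced with a few lines of Python; the signal levels, noise levels, and box filters below are illustrative stand-ins for the values and Gaussians \(\varphi_{1},\varphi_{2}\) used for Figure 11.

```python
# Minimal 1-D sketch of the two-region denoising example (Section IV-B).
# Signal levels, noise levels, and filter widths are illustrative choices;
# box filters stand in for the Gaussians phi_1 and phi_2.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
step = (np.arange(n) >= n // 2).astype(float)   # Step(u): 0 on the left, 1 on the right

theta1, theta2 = 0.8, 0.2
sigma1, sigma2 = 0.10, 0.02                     # noise levels, sigma1 > sigma2
J = (theta1 + rng.normal(0, sigma1, n)) * (1 - step) \
  + (theta2 + rng.normal(0, sigma2, n)) * step

def box_blur(signal, width):
    """Moving-average filter used as a stand-in for a spatially invariant blur."""
    kernel = np.ones(width) / width
    return np.convolve(signal, kernel, mode="same")

w1, w2 = 51, 5                                  # stronger filter on the noisier left side

# Gathering (Eq. 31): filter the whole image with each kernel, then mask by OUTPUT pixel.
I_gather = (1 - step) * box_blur(J, w1) + step * box_blur(J, w2)

# Scattering (Eq. 32): mask by INPUT pixel first, then filter each piece and sum.
I_scatter = box_blur((1 - step) * J, w1) + box_blur(step * J, w2)

# Just to the right of the boundary, gathering stays near theta2 = 0.2, while the
# scattering result is contaminated by the bright left region leaking through the
# wide filter -- the blurred edge seen in Figure 11.
right_of_edge = slice(n // 2 + 3, n // 2 + 13)
print("gathering :", np.round(I_gather[right_of_edge], 2))
print("scattering:", np.round(I_scatter[right_of_edge], 2))
```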
## V Discussions
As we write this paper, two underlying questions are constantly asked: (1) What utility does this distinction bring? (2) How does it affect the way we solve an inverse problem? In this section, we briefly share our findings.
### _Utility: Simulating atmospheric turbulence_
One of the biggest utilities (for the scattering approach) is the simulation of atmospheric turbulence. The latest turbulence simulators, based on phase-over-aperture [23], the phase-to-space (P2S) transform [21], and dense-field P2S [24], all use the _gathering_ equation. The present paper explains why these prior simulators have a potential blind spot when decomposing the spatially varying blur into spatially invariant blurs: they should have used the scattering equation instead.
Table I shows a comparison between the dense-field P2S simulator [24] (which is based on the gathering equation), and a new simulator implemented using the scattering equation.1 Our testing dataset is based on the text recognition dataset released to the UG2+ challenge [25]. We tested three image reconstruction models: TSRWGAN [26], ESTRNN [27], and TMT [28]. For each model, we report the recognition accuracy in terms of CRNN / DAN / ASTER from the restored images.
Footnote 1: The new simulator contains a few other modifications including expanding the parameter space, and expanding the kernel supports. However, the biggest change is the adoption of the scattering equation.
As we can see in this table, the new simulator indeed offers a sizable amount of improvement over the previous simulator. Since the change from the gathering to the scattering equation accounts for a substantial portion of the simulator update, the utility of our study is evident.
### _We have an inverse problem, which model to use?_
Solving an inverse problem often requires an optimization. For a spatially varying blur problem, the typical formulation is
\[\widehat{\mathbf{J}}=\underset{\mathbf{J}}{\text{argmin}}\ \ \|\mathbf{I}- \mathbf{H}\mathbf{J}\|^{2}+\lambda R(\mathbf{J}), \tag{33}\]
for some regularization functions \(R(\mathbf{J})\). However, if we want to be exact about the forward model, we need to go back to the light propagation physics and use the scattering equation to represent \(\mathbf{H}\). This will give us
\[\widehat{\mathbf{J}}=\underset{\mathbf{J}}{\text{argmin}}\ \ \left\|\mathbf{I}- \left(\sum_{m=1}^{M}\mathbf{H}_{m}\mathbf{D}_{m}^{\mathbf{u}}\right)\mathbf{J} \right\|^{2}+\lambda R(\mathbf{J}). \tag{34}\]
This problem is extremely difficult to solve because even if \(\mathbf{D}_{m}^{\mathbf{u}}\) is binary so that it forms a partition of the image, we cannot solve for individual regions and combine them. Moreover, we cannot solve for individual blurs because the \(\mathbf{H}_{m}\)'s are summed. As a result, the decomposition offers little computational benefit although it accurately reflects the physics.
On the other hand, the gathering equation will give us
\[\widehat{\mathbf{J}}=\underset{\mathbf{J}}{\text{argmin}}\ \ \left\|\mathbf{I}- \left(\sum_{m=1}^{M}\mathbf{D}_{m}^{\mathbf{x}}\mathbf{H}_{m}\right)\mathbf{J} \right\|^{2}+\lambda R(\mathbf{J}). \tag{35}\]
If \(\mathbf{D}_{m}^{\mathbf{x}}\) is binary (as in Nagy and O'Leary [18]), we can partition the image into \(M\) smaller regions and solve them individually. The parallelism offered by the model is computationally appealing, and it has been proven useful to some extent. The caveat is that, of course, the gathering equation does not match the physics, and so we only solve a proxy to the original optimization. However, in the case of a binary \(\mathbf{D}_{m}^{\mathbf{x}}\), there will be regions in which the two forms are identical, provided that the support of the impulse response is contained within the region spanned by \(\mathbf{D}_{m}^{\mathbf{x}}\). In other words, if a region can be regarded as spatially invariant, the two solutions in this region will be identical. In a general sense, for arbitrary \(\mathbf{D}_{m}^{\mathbf{x}}\), any solution we obtain, regardless of whether we can prove convergence of the optimization algorithm, will not resemble the true solution to (33).
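A small matrix sketch makes the difference between the two forward operators in (34) and (35) concrete; the grid size, kernels, and two-region partition below are illustrative choices rather than any setup used in the experiments.

```python
# Toy 1-D construction of the scattering operator sum_m H_m D_m^u (Eq. 34) and
# the gathering operator sum_m D_m^x H_m (Eq. 35).  All sizes are illustrative.
import numpy as np

n = 12
x = np.arange(n)

def convolution_matrix(kernel):
    """Dense matrix implementing a zero-padded 1-D convolution with `kernel`."""
    H = np.zeros((n, n))
    half = len(kernel) // 2
    for i in range(n):
        for k, w in enumerate(kernel):
            j = i + k - half
            if 0 <= j < n:
                H[i, j] = w
    return H

H1 = convolution_matrix(np.ones(5) / 5)    # wide blur   (basis phi_1)
H2 = convolution_matrix(np.ones(3) / 3)    # narrow blur (basis phi_2)

D1 = np.diag((x < n // 2).astype(float))   # left-region mask
D2 = np.diag((x >= n // 2).astype(float))  # right-region mask

H_scatter = H1 @ D1 + H2 @ D2              # mask the INPUT, then blur   (Eq. 34)
H_gather  = D1 @ H1 + D2 @ H2              # blur, then mask the OUTPUT  (Eq. 35)

# The two operators agree in rows whose kernel support stays inside one region,
# and differ only near the region boundary.
diff = np.abs(H_scatter - H_gather)
print("max difference     :", diff.max())
print("rows that differ   :", np.where(diff.sum(axis=1) > 1e-12)[0])
```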
Fig. 11: Comparison between gathering and scattering for the setup in Figure 9. Notice that for this denoising experiment, the better method should produce a sharp transition along the boundary.
Our advice to practitioners, when solving an inverse problem related to spatially varying blur, is to be aware of this kind of mismatch. For deep neural networks, the mismatch is often less of an issue when the capacity of the network is large enough. However, the training data needs to capture enough of the physics in order to generalize well.
## VI Conclusion
The gathering and the scattering representation of a spatially varying blur are twins of the same blur kernel. They are mutually exclusive, in the sense that if one is the exact representation of the original blur, the other one can only be an approximation. They become identical if the underlying blur kernel is spatially invariant. In summary, we recognize the following key points:
**Gathering has its origin in image filtering.** It is an effective approach to speed up a spatially varying blur via a small set of spatially invariant blurs. The approach is to filter the image first, and then combine the filtered images through a pixelwise mask. Gathering offers better edge awareness in tasks such as image denoising.
**Scattering originates from light propagation physics.** It is an accurate description according to the scalar diffraction theory in Fourier optics. The approach is to weight the images and then perform filtering afterward. Scattering is _the_ model for describing how light propagates through a random medium.
Our hope in this paper is to raise awareness about the different meanings and implications of scattering and gathering. At the very least, gathering and scattering are not just the convolution and transposed convolution in deep neural networks.
|
2305.19120 | Comparing and combining some popular NER approaches on Biomedical tasks | We compare three simple and popular approaches for NER: 1) SEQ
(sequence-labeling with a linear token classifier) 2) SeqCRF (sequence-labeling
with Conditional Random Fields), and 3) SpanPred (span-prediction with boundary
token embeddings). We compare the approaches on 4 biomedical NER tasks: GENIA,
NCBI-Disease, LivingNER (Spanish), and SocialDisNER (Spanish). The SpanPred
model demonstrates state-of-the-art performance on LivingNER and SocialDisNER,
improving F1 by 1.3 and 0.6 F1 respectively. The SeqCRF model also demonstrates
state-of-the-art performance on LivingNER and SocialDisNER, improving F1 by 0.2
F1 and 0.7 respectively. The SEQ model is competitive with the state-of-the-art
on the LivingNER dataset. We explore some simple ways of combining the three
approaches. We find that majority voting consistently gives high precision and
high F1 across all 4 datasets. Lastly, we implement a system that learns to
combine the predictions of SEQ and SpanPred, generating systems that
consistently give high recall and high F1 across all 4 datasets. On the GENIA
dataset, we find that our learned combiner system significantly boosts F1(+1.2)
and recall(+2.1) over the systems being combined. We release all the
well-documented code necessary to reproduce all systems at
https://github.com/flyingmothman/bionlp. | Harsh Verma, Sabine Bergler, Narjesossadat Tahaei | 2023-05-30T15:29:30Z | http://arxiv.org/abs/2305.19120v1 | # Comparing and combining some popular NER approaches on Biomedical tasks
###### Abstract
We compare three simple and popular approaches for NER: 1) SEQ (sequence-labeling with a linear token classifier) 2) SeqCRF (sequence-labeling with Conditional Random Fields), and 3) SpanPred (span-prediction with boundary token embeddings). We compare the approaches on 4 biomedical NER tasks: GENIA, NCBI-Disease, LivingNER (Spanish), SocialDisNER (Spanish). The SpanPred model demonstrates state-of-the-art performance on LivingNER and SocialDisNER, improving F1 by 1.3 and 0.6 F1 respectively. The SeqCRF model also demonstrates state-of-the-art performance on LivingNER and SocialDisNER, improving F1 by 0.2 F1 and 0.7 respectively. The Seq model is competitive with the state-of-the-art on the LivingNER dataset. We explore some simple ways of combining the three approaches. We find that majority voting consistently gives high precision and high F1 across all 4 datasets. Lastly, we implement a system that learns to combine the predictions of SEQ and SpanPred, generating systems that consistently give high recall and high F1 across all 4 datasets. On the GENIA dataset, we find that our learned combiner system significantly boosts F1(+1.2) and recall(+2.1) over the systems being combined. We release all the well-documented code necessary to reproduce all systems at this Github repository.
## 1 Introduction
NER has frequently been formulated as a sequence-labeling problem (Chiu and Nichols, 2016; Ma and Hovy, 2016; Wang et al., 2022) in which a model learns to label each token using a labeling scheme such as BIO(_beginning_, _inside_, _outside_). However, in recent years people have also formulated the NER task as a span-prediction problem (Jiang et al., 2020; Li et al., 2020; Fu et al., 2021; Zhang et al., 2023) where spans of text are represented and labeled with entity types.
Let SEQ be the simplest sequence-labeling model, which represents each token using a language model and then classifies each token representation with a linear layer. Let SeqCRF be another popular sequence-labeling model, which is identical to the SEQ model except that the token representations from the language model are fed into a linear-chain conditional random field layer (Lafferty et al., 2001; Lample et al., 2016). Let SpanPred (Lee et al., 2017; Jiang et al., 2020) be a model that represents every possible span of text using two token embeddings located at its boundary, and then classifies every span representation using a linear layer. We describe all three models in detail in section 4. We evaluate the SEQ, SeqCRF, and SpanPred models on four biomedical NER tasks: GENIA (Kim et al., 2003), NCBI-Disease (Dogan et al., 2014), LivingNER (Spanish; Miranda-Escalada et al., 2022), and SocialDisNER (Spanish; Gasco Sanchez et al., 2022). Despite being simple, the SpanPred and SeqCRF models improve the state-of-the-art on the LivingNER and SocialDisNER tasks.
(Fu et al., 2021) show that sequence-labeling approaches (e.g., Seq and SeqCRF) and span-prediction approaches (e.g., SpanPred) have _different_ strengths and weaknesses _while_ having similar (F1) performance. This motivated us to try to combine the Seq, SeqCRF, and SpanPred models using two simple methods and study the results. We refer to the two simple methods as Union and MajVote. Union is inspired by the (mathematical) set union operation and simply involves "unioning" the sets of predictions made by the models. MajVote is the classic majority voting method. We find that MajVote can yield systems that have both high precision and high F1.
Inspired by the boost in recall(and the corresponding drop in precision) resulting from the Union method, we implemented a combiner system (which we refer to as Meta) that aims to _combat_ the
drop in precision as a result of the Union method. We find that Meta shows very promising signs of increasing precision while preserving high recall and high F1. Meta borrows ideas from work on generating span representations using "solid markers"(Baldini Soares et al., 2019; Xiao et al., 2020; Ye et al., 2022), work on using prompts (Li et al., 2020), and work by (Fu et al., 2021) to combine the span-prediction and sequence-labeling approaches using the span-prediction approach.
## 2 Preliminaries
Let every prediction \(p\) for NER be a tuple of the form
\[p=(\text{SampleId},\text{EntityType},\text{BeginOffset},\text{EndOffset})\]
which consists of the identifier of the sample/text in which the entity is found, the type of the entity, and the beginning and ending offsets for the entity.
## 3 Preprocessing
For GENIA and NCBI-Disease, each sample is an English sentence. For SocialDisNER, each sample is an entire Spanish tweet. For LivingNER, we use the FLERT(Schweter and Akbik, 2020) approach for document-level NER, in which each Spanish sentence is surrounded by a context of 100 characters to the left and 100 characters to the right.
## 4 Models
### Seq model
**Token Representation Step.** Given a sentence \(\mathbf{x}=[w_{1},w_{2},...,w_{n}]\) with \(n\) tokens, we generate for each token \(w_{i}\) a contextualized embedding \(\mathbf{u}_{i}\in\mathbb{R}^{d}\) that corresponds to the last-hidden-layer representation of the language model. Here, \(d\) represents the size of the token embedding. Importantly, special tokens like [CLS] and [SEP] are also represented. We find that the performance can drop significantly (especially for SEQ) if they are not incorporated in the learning process.
XLM-RoBERTa large(Conneau et al., 2020) is the multilingual language model that we use for the LivingNER and SocialDisNER spanish tasks. Inspired by its high performance on the BLURB(Gu et al., 2021) biomedical benchmark, we use BioLinkBert large(Yasunaga et al., 2022) for the NCBI-Disease and GENIA datasets.
**Token Classification Step.** In this layer, we classify every token representation into a set of named entity types corresponding to the BIO (_beginning_, _inside_, _outside_) tagging scheme. Assuming \(\mathbf{\Theta}\) is the set of all named entity types, then the set of all BIO tags \(\mathbf{B}\) is of size \((2\times|\mathbf{\Theta}|)+1\). In other words, a linear layer maps each token representation \(\mathbf{u}_{i}\in\mathbb{R}^{d}\) to a prediction \(\mathbf{p}_{i}\in\mathbb{R}^{|\mathbf{B}|}\), where \(d\) is the length of the token embedding. Finally, the predictions are used to calculate the loss of a given sentence \(\mathbf{x}\) with \(n\) tokens as follows:
\[\text{Loss}(\mathbf{x})=\frac{-1}{n}\sum_{i=1}^{n}\text{log}(\text{Softmax}( \mathbf{p}_{i})_{y_{i}}) \tag{1}\]
Here \(y_{i}\) represents the index of the gold BIO label of the \(i^{th}\) token.
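For concreteness, a minimal PyTorch sketch of this classification head is given below; it is not the released implementation, the encoder is abstracted away as a tensor of pre-computed token embeddings, and the single-type tag inventory is a made-up example rather than the tag sets of the four tasks.

```python
# Minimal sketch of the SEQ head: a linear layer over per-token embeddings
# followed by the averaged cross-entropy loss of Eq. (1).
import torch
import torch.nn as nn

BIO_TAGS = ["O", "B-Disease", "I-Disease"]        # |B| = 2*|Theta| + 1 with |Theta| = 1

class SeqHead(nn.Module):
    def __init__(self, embed_dim: int, num_tags: int):
        super().__init__()
        self.classifier = nn.Linear(embed_dim, num_tags)
        self.loss_fn = nn.CrossEntropyLoss()      # mean of -log softmax over tokens

    def forward(self, token_embeddings, gold_tag_ids=None):
        # token_embeddings: (num_tokens, embed_dim), including [CLS]/[SEP] positions
        logits = self.classifier(token_embeddings)        # (num_tokens, num_tags)
        loss = None
        if gold_tag_ids is not None:
            loss = self.loss_fn(logits, gold_tag_ids)     # Eq. (1)
        return logits, loss

# Toy usage with random "LM" embeddings for a 6-token sentence.
d = 16
embeddings = torch.randn(6, d)
gold = torch.tensor([0, 0, 1, 2, 0, 0])           # O O B-Disease I-Disease O O
head = SeqHead(d, len(BIO_TAGS))
logits, loss = head(embeddings, gold)
print(logits.shape, float(loss))
```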
### SeqCRF Model
This model is identical to the Seq model except that we pass the contextualized token representations \(\mathbf{U}\) through a linear-chain CRF (Lafferty et al., 2001) layer. The CRF layer computes the probabilities of labeling the sequence using the Viterbi algorithm (Forney, 1973). A loss suited to the CRF layer's predictions is then used to train the model. We directly use the CRF implementation available in the FLAIR (Akbik et al., 2019) framework. The BIO scheme is used for token classification.
### Span Model
**Token Representation Layer.** Same as the token representation layer of the Seq model.
**Span Representation Layer.** Let a span \(\mathbf{s}\) be a tuple \(\mathbf{s}=(b,e)\) where \(b\) and \(e\) are the beginning and ending token indices, and \(\mathbf{s}\) represents the text segment \([w_{b},w_{b+1},...,w_{e}]\) where \(w_{i}\) is the \(i^{th}\) token. In this layer, we enumerate **all possible** spans and then represent each span using two token embeddings located at its boundary. More precisely, given embeddings \([\mathbf{u}_{1},\mathbf{u}_{2},...,\mathbf{u}_{n}]\) of \(n\) tokens, there are approximately \(\frac{n^{2}}{2}\) possible spans, which can be enumerated and represented as the list \([(0,0),(0,1),...,(0,n),(1,1),(1,2)...(1,n),...(n,n)]\). We then remove all spans that are longer than 32 tokens - this was important to fit the model in GPU memory with a batch size of 4. Finally, as in (Lee et al., 2017), each span \(s_{i}\) will be represented by \(\mathbf{v}_{i}=[\mathbf{u}_{b_{i}};\mathbf{u}_{e_{i}}]\), a concatenation of the beginning and ending token embeddings. Hence, the output of this layer is \(\mathbf{V}\in\mathbb{R}^{k\times(2\times d)}\)
where \(k=\frac{n^{2}}{2}\) and \(d\) is length of the token embedding vector.
**Span Classification Layer.** In this layer, we classify each span representation with a named entity type. We introduce an additional label Neg_Span which represents the absence of a named entity. Precisely, a linear layer maps each span representation \(\mathbf{v}_{i}\in\mathbb{R}^{(2\times d)}\) to a prediction \(\mathbf{p}_{i}\in\mathbb{R}^{|\Omega|}\), where \(\Omega\) is the set of all named entity types (including Neg_Span) and \(d\) is the size of the token embedding. Finally, the predictions are used to calculate the loss of a given sentence \(\mathbf{x}\) with \(l\) possible spans as follows:
\[\text{Loss}(\mathbf{x})=\frac{-1}{l}\sum_{i=1}^{l}\text{log}(\text{Softmax}( \mathbf{p}_{i})_{y_{i}}) \tag{2}\]
Here \(y_{i}\) represents the index of the gold label of the \(i^{th}\) span.
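A minimal sketch of the span enumeration, boundary-concatenation representation, and linear classification is shown below; the encoder and the toy entity inventory are placeholders, while the width limit of 32 follows the description above.

```python
# Minimal sketch of the SpanPred head: enumerate spans up to a maximum width,
# represent each span as [u_b ; u_e], and classify with a linear layer over the
# entity types (Neg_Span included).  Cross-entropy over gold span labels
# corresponds to Eq. (2).
import torch
import torch.nn as nn

ENTITY_TYPES = ["Disease", "Neg_Span"]            # made-up single-type task

def enumerate_spans(num_tokens: int, max_width: int = 32):
    """All (begin, end) token-index pairs with end >= begin and width <= max_width."""
    return [(b, e) for b in range(num_tokens)
                   for e in range(b, min(b + max_width, num_tokens))]

class SpanPredHead(nn.Module):
    def __init__(self, embed_dim: int, num_types: int):
        super().__init__()
        self.classifier = nn.Linear(2 * embed_dim, num_types)

    def forward(self, token_embeddings, spans):
        begins = torch.tensor([b for b, _ in spans])
        ends = torch.tensor([e for _, e in spans])
        # Concatenate the boundary token embeddings: v_i = [u_b ; u_e].
        reps = torch.cat([token_embeddings[begins], token_embeddings[ends]], dim=-1)
        return self.classifier(reps)              # (num_spans, num_types)

# Toy usage: score every span of a 6-token sentence.
d = 16
embeddings = torch.randn(6, d)
spans = enumerate_spans(6)
head = SpanPredHead(d, len(ENTITY_TYPES))
logits = head(embeddings, spans)
print(len(spans), logits.shape)
```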
### Union combiner model
This model doesn't learn weights. For a given list \(P_{1},P_{2},...,P_{n}\) where \(P_{i}\) is the set of predictions (as defined in section 2) made by the \(i^{th}\) NER model and \(n\) is the total number of models, it returns the set \(P_{1}\cup P_{2}\cup...\cup P_{n}\).
### MajVote combiner model
This model doesn't learn weights. It is the classic majority voting combiner model. Precisely, when given a list \(P_{1},P_{2},...,P_{n}\) where \(P_{i}\) is the set of predictions (as defined in section 2) made by the \(i^{th}\) NER model and \(n\) is the total number of models, it returns a set which only includes predictions in \(P_{1}\cup P_{2}\cup...\cup P_{n}\) that have been predicted by more than \(\lfloor\frac{n}{2}\rfloor\) models.
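Both combiners can be written in a few lines over the prediction tuples of Section 2; the snippet below is a sketch, with made-up predictions for illustration.

```python
# Sketch of the two weight-free combiners over (sample_id, type, begin, end) tuples.
from collections import Counter

def union_combiner(prediction_sets):
    """Set union of the predictions of all systems."""
    combined = set()
    for preds in prediction_sets:
        combined |= set(preds)
    return combined

def majority_vote_combiner(prediction_sets):
    """Keep a prediction only if more than floor(n/2) of the n systems made it."""
    n = len(prediction_sets)
    votes = Counter()
    for preds in prediction_sets:
        votes.update(set(preds))                  # one vote per system
    return {p for p, count in votes.items() if count > n // 2}

# Toy example with three systems and one disagreement.
seq      = {("doc1", "disease", 8, 11)}
seq_crf  = {("doc1", "disease", 8, 11), ("doc1", "disease", 16, 19)}
spanpred = {("doc1", "disease", 8, 11)}
print(union_combiner([seq, seq_crf, spanpred]))
print(majority_vote_combiner([seq, seq_crf, spanpred]))
```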
### Meta combiner model
The job of Meta is simple: "Learn to tell if a prediction made by SEQ or SpanPred is a mistake or not". In other words, Meta looks at a prediction made by SEQ or SpanPred on the _validation set_ and learns to classify the prediction as being either "correct" or "incorrect". "correct" means that the prediction is a good prediction, and that it should not be removed. "incorrect" means that the prediction should be removed. In other words, if \(P_{\text{SEQ}}\) is the set of all predictions of SEQ and \(P_{\text{Span}}\) is the set of all predictions of SpanPred, then Meta acts as (and learns to be) a filter for \(P_{\text{Span}}\cup P_{\text{SEQ}}\). During evaluation, Meta filters \(P_{\text{Span}}\cup P_{\text{SEQ}}\), generating a final set of predictions.
We borrow the idea of using markers made with special tokens (Baldini Soares et al., 2019; Xiao et al., 2020; Ye et al., 2022) which, intuitively, help models "focus their attention on the span-of-interest". In other words, by introducing special tokens (which act as markers) like [e] and [/e] in the language model's vocabulary, and then surrounding the span-of-interest with them, one can help the model "focus" on the span of interest while making some prediction. In Meta's case, the markers are supposed to help locate/identify the entities predicted by SEQ or SpanPred in raw text. See subsection 4.7 for an example input prediction with markers highlighting the entity.
We also borrow the idea of prompting (Li et al., 2020), which involves pre-pending some text (a prompt) to the original input text with the goal of priming (or aiding) a model's decision making with a useful bias. In particular, every input to Meta includes the type of the predicted entity as a prompt. Intuitively, this helps Meta recognize the type of
\begin{table}
\begin{tabular}{l|l|l|l|l} \hline Dataset & SocialDisNER & LivingNER & Genia & NCBI-Disease \\ \hline SOTA & Fu et al. (2022) & Zotova et al. (2022) & Shen et al. (2022) & Tian et al. (2020) \\ \hline & 89.1, 90.6, 87.6 & 95.1, 95.8, 94.3 & 81.7, -, - & 90.08, -, - \\ \hline SpanPred & 90.4, 90.5, 90.4 & 95.7, 95.4, 96.0 & 77.1, 77.0, 77.1 & 89.0, 88.1, 89.9 \\ \hline SEQ & 88.7, 88.3, 89.1 & 95.0, 94.7, 95.3 & 76.1, 79.8, 72.7 & 88.7, 87.8, 89.5 \\ \hline SeqCRF & 89.8, 89.6, 90.0 & 95.3, 95.6, 95.0 & 75.7, 79.7, 72.1 & 87.9, 86.2, 89.6 \\ \hline SpanPred \(\cup\) SEQ & 89.0, 86.0, 92.2 & 95.2, 93.4, 97.1 & 77.2, 73.5, 81.4 & 88.2, 84.6, 92.2 \\ \hline SpanPred x SEQ & 90.2, 93.3, 87.3 & 95.5, 96.9, 94.2 & 75.8, 85.0, 68.5 & **89.6**, 91.9, 87.4 \\ \hline SpanPred \(\cup\) SEQ \(\cup\) SeqCRF & 88.3, 84.1, 93.0 & 94.9, 92.5, 97.4 & 76.4, 71.3, 82.3 & 87.1, 81.4, 93.8 \\ \hline SpanPred x SEQ x SeqCRF & **90.8**, 91.2, 90.4 & 95.7, 96.1, 95.4 & 77.1, 81.9, 72.9 & 89.5, 88.8, 90.1 \\ \hline Meta(SpanPred \(\cup\) SEQ) & 90.5, 89.7, 91.3 & **95.7**, 94.6, 96.9 & **78.3**, 77.4, 79.2 & 89.1, 86.3, 92.2 \\ \hline \end{tabular}
\end{table}
Table 1: Performance of all systems on test set on all 4 biomedical datasets. \(\cup\) represents the Union combiner and x represents the MajVote combiner.
the entity it is dealing with. See subsection 4.7 for an example of prompting with the entity type "disease".
Note that prompting and special markers are _only_ used to prepare the training data for Meta using the predictions of SEQ and SpanPred on the validation set. Meta itself is a simple binary classification neural model. Just like SEQ, SeqCRF and SpanPred, it first creates contextualized token representations from raw input using the appropriate language model(XLM-RoBERTa or BioLinkBERT) and then classifies the pooler token([CLS] or [s]) representation using a linear layer. As in SpanPred and SEQ, cross-entropy loss is used to train the model.
Because META acts as a "filter"(it allows certain predictions and disallows others), it _cannot_ improve recall - it can only improve precision. Ideally, Meta will learn the true nature of the mistakes that SEQ and SpanPred make and remove all false positives, resulting in a perfect precision score of 100 and no drop in recall.
**Preparing the training data for Meta.** _All_ predictions (with "correct" and "incorrect" labels) on the validation set for _all_ 20 epochs by _both_ SEQ and SpanPred, and _all_ gold predictions (which only have "correct" labels) from the _original_ training data make up the training set for Meta. We hold out 15 percent of Meta's training set for validation. Note that we incorporate the predictions of SpanPred and SEQ from earlier epochs because the fully trained high-performing models don't make that many mistakes (which Meta needs for its learning). As expected, the test set is not touched while training Meta. During evaluation, Meta filters the predictions made by SEQ and SpanPred on the test set.
### Meta input example
Assume the example sentence "Bob has HIV and flu." and the task of identifying diseases. Now assume that SEQ predicted
(id, **disease**, 8, 11) (see section 2 for the definition of a prediction) and correctly identified the disease "HIV" in the input. Then, the input to Meta will be the text "**disease** Bob has [e] HIV [/e] and flu." and the associated gold label of correct. Prompting with **disease** informs Meta that it is dealing with a prediction representing a disease. Meta has to make a judgement on whether the prediction is correct or not.
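A sketch of how such a Meta instance could be assembled is shown below; the helper function and gold-span lookup are illustrative, not the released implementation.

```python
# Sketch of building a Meta input: prepend the entity type as a prompt and
# surround the predicted span with the [e] ... [/e] markers.  The function and
# its gold-span argument are illustrative conveniences.
def build_meta_input(text: str, prediction, gold_spans=None):
    sample_id, entity_type, begin, end = prediction
    marked = text[:begin] + "[e] " + text[begin:end] + " [/e]" + text[end:]
    meta_input = f"{entity_type} {marked}"
    label = None
    if gold_spans is not None:                    # "correct" iff the span is in the gold set
        label = "correct" if (entity_type, begin, end) in gold_spans else "incorrect"
    return meta_input, label

text = "Bob has HIV and flu."
pred = ("id", "disease", 8, 11)                   # SEQ's prediction of "HIV"
gold = {("disease", 8, 11), ("disease", 16, 19)}
print(build_meta_input(text, pred, gold))
# ('disease Bob has [e] HIV [/e] and flu.', 'correct')
```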
### Training and Optimization
Both XLM-RoBERTa large (Conneau et al., 2020) and BioLinkBERT large (Yasunaga et al., 2022) are fine-tuned on the training data using the Adafactor (Shazeer and Stern, 2018) optimizer with a learning rate of 1e-5 (see code) and a batch size of 4 for _all 4 datasets_. Specifically, we used the implementation of Adafactor available on HuggingFace (Wolf et al., 2019). It was not possible for us to use the same learning rate and batch size for every dataset with Adam (Kingma and Ba, 2015) because we noticed it was prone to over-fitting (and then collapsing) mid-training on LivingNER, NCBI-Disease, and GENIA - the batch size had to be increased to avoid over-fitting. Moreover, we found that SEQ, SeqCRF, and SpanPred converged to better solutions with Adafactor on all datasets. However, we found that Meta consistently converged to better solutions on the NCBI-Disease dataset using Adam.
The best model is selected using early stopping with a patience(in terms of epochs) of 5.
## 5 Evaluation Methodology
All tasks evaluate systems using the strict (no partial matching) Micro F1, Precision and Recall. For SocialDisNER, _all_ systems were submitted to the corresponding CodaLab (Pavao et al., 2022) competition website for evaluation. For LivingNER, _all_ our systems have been evaluated using the official evaluation script that the organizers made available. For Genia and NCBI-Disease, we unfortunately couldn't find official CodaLab websites, so we had to use our own script, which can be inspected here.
## 6 Analysis of Results
Note that among the 3 models, SpanPred consistently outperforms the other two on all datasets. This is anticipated on tasks with overlapping entities like LivingNER and GENIA(because SEQ and SeqCRF cannot represent them), but not on "flat" NER tasks like SocialDisNER and NCBI-Disease.
Note that any system resulting from a Union combination should have higher recall than any of the involved systems because a set union operation is incapable of removing a correct prediction (the set of false negatives can only shrink with more systems). Also, the resulting system's precision cannot be higher than the highest precision observed in any sub-system. Table 1 adheres to both of these expectations. On the other hand, a system resulting from a MajVote combiner is _inclined_ to have
higher precision when the systems being combined are diverse and comparable because - intuitively - MajVote can be a more "picky" system (only allowing a prediction if it has been voted on by several). In Table 1, note that both SpanPredxSEQ and SpanPredxSEQxCRF consistently boost precision across all datasets. Also note that the best MajVote systems significantly outperform all other systems on precision while maintaining the highest F1 on all datasets except Genia, where Meta outperforms all other systems on F1 for the first(and last) time. Also on Genia is the only time when a Union model (SpanPred\(\cup\)SEQ) outperforms the MajVote models due to a significant boost in recall. Finally, note how Meta, across all datasets, outperforms SpanPred, SEQ, and SeqCRF models on Recall and delivers an F1 that is at least as high as any of the three models.
## 7 Conclusion
Our implementation(code available) of CRF and SpanPred, two simple models, improves the state of the art on LivingNER and SocialDisNER datasets. We used two simple approaches called Union and MajVote to combine the NER models' predictions and studied the results. MajVote on the three NER models seems to be effective at generating systems with high precision and high F1. While Union can generate systems with higher recall, it is only at the cost of F1 due to a significant drop in precision. Meta seems to be effective at alleviating Union's issue, generating systems with both high recall and high F1.
|
2303.00927 | QuickCent: a fast and frugal heuristic for harmonic centrality
estimation on scale-free networks | We present a simple and quick method to approximate network centrality
indexes. Our approach, called QuickCent, is inspired by so-called fast and
frugal heuristics, which are heuristics initially proposed to model some human
decision and inference processes. The centrality index that we estimate is the
harmonic centrality, which is a measure based on shortest-path distances, so
infeasible to compute on large networks. We compare QuickCent with known
machine learning algorithms on synthetic data generated with preferential
attachment, and some empirical networks. Our experiments show that QuickCent is
able to make estimates that are competitive in accuracy with the best
alternative methods tested, either on synthetic scale-free networks or
empirical networks. QuickCent has the feature of achieving low error variance
estimates, even with a small training set. Moreover, QuickCent is comparable in
efficiency -- accuracy and time cost -- to those produced by more complex
methods. We discuss and provide some insight into how QuickCent exploits the
fact that in some networks, such as those generated by preferential attachment,
local density measures such as the in-degree, can be a proxy for the size of
the network region to which a node has access, opening up the possibility of
approximating centrality indices based on size such as the harmonic centrality.
Our initial results show that simple heuristics and biologically inspired
computational methods are a promising line of research in the context of
network measure estimations. | Francisco Plana, Andrés Abeliuk, Jorge Pérez | 2023-03-02T03:04:55Z | http://arxiv.org/abs/2303.00927v2 | # QuickCent: a fast and frugal heuristic for harmonic centrality estimation on scale-free networks
###### Abstract
We present a simple and quick method to approximate network centrality indexes. Our approach, called _QuickCent_, is inspired by so-called _fast and frugal_ heuristics, which are heuristics initially proposed to model some human decision and inference processes. The centrality index that we estimate is the _harmonic_ centrality, which is a measure based on shortest-path distances, so infeasible to compute on large networks. We compare _QuickCent_ with known machine learning algorithms on synthetic data generated with preferential attachment, and some empirical networks. Our experiments show that _QuickCent_ is able to make estimates that are competitive in accuracy with the best alternative methods tested, either on synthetic scale-free networks or empirical networks. QuickCent has the feature of achieving low error variance estimates, even with a small training set. Moreover, _QuickCent_ is comparable in efficiency -accuracy and time cost- to more complex methods. We discuss and provide some insight into
how QuickCent exploits the fact that in some networks, such as those generated by preferential attachment, local density measures such as the in-degree, can be a proxy for the size of the network region to which a node has access, opening up the possibility of approximating centrality indices based on size such as the harmonic centrality. Our initial results show that simple heuristics and biologically inspired computational methods are a promising line of research in the context of network measure estimations.
**Keywords:** Centrality measure, Complex networks, Power-law distribution, Degree.
## 1 Introduction
**Heuristics are proposed as a model of cognitive processes.** Some models based on heuristics have been proposed to account for cognitive mechanisms [58]; these models assume that, although heuristics operate at a lower computational cost, they sacrifice accuracy and lead to systematic errors. This viewpoint has been challenged by the so-called _Fast and frugal_ heuristics [21], which are simple heuristics initially proposed to model some human decision and inference processes. They have shown that very simple human-inspired methods, by relying on statistical patterns of the data, can reach accurate results, in some cases even better than methods based on more information or complex computations [31, 21]. Due to these features, fast and frugal heuristics have been applied to problems different from their original motivation, including medical decision-making [2], predicting the outcomes of sport matches [54], and geographic profiling [55].
**The problem of centrality computation.** In this paper, we provide an example of the usefulness of one of these simple heuristics for estimating the centrality index in a network. Roughly speaking, the centrality index is a measure of the importance of a node in a network. We chose to estimate the _harmonic centrality_ index [40] since it satisfies a set of necessary axioms that any centrality should meet [6], namely that nodes belonging to large groups are important (_size_ axiom); that nodes with a denser neighborhood, i.e. with more connections, are more important (_density_ axiom); and that the importance increases with the addition of an arc (_score-monotonicity_ axiom). Consider a directed graph \(G=(V,A)\), with \(V\) the set of nodes and \(A\) the
set of arcs or edges. Formally, let \(d_{G}(y,x)\) be the length of the shortest path from node \(y\) to \(x\) in the digraph \(G\). The harmonic centrality of \(x\) is computed as
\[H_{G}(x)=\sum_{y\in V,y\neq x}\frac{1}{d_{G}(y,x)},\]
which has the nice property of managing unreachable nodes in a clean way.
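A direct BFS-based computation of this definition for an unweighted digraph can be sketched as follows (the adjacency-list representation and the toy graph are illustrative); it makes explicit that one shortest-path search per source node is required.

```python
# BFS-based sketch of harmonic centrality for an unweighted digraph given as an
# adjacency-list dict {node: list of successors}.  The example graph is made up.
from collections import deque

def harmonic_centrality(graph):
    nodes = list(graph)
    centrality = {x: 0.0 for x in nodes}
    for y in nodes:                               # one BFS per source node y
        dist = {y: 0}
        queue = deque([y])
        while queue:
            v = queue.popleft()
            for w in graph.get(v, []):
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        for x, d in dist.items():                 # y contributes 1/d_G(y, x) to each reached x
            if x != y:
                centrality[x] += 1.0 / d
    return centrality

toy = {1: [2, 3], 2: [3], 3: [], 4: [3]}
print(harmonic_centrality(toy))                   # node 3 is reachable from every other node
```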
Besides its good properties, to compute the harmonic centrality for all nodes in a network we need first to solve the all-pairs shortest-path problem. Notice that by the total number of pairs of nodes, there is an intrinsic lower bound of \(|V|^{2}\) for computing this centrality, and \(O(|V|^{2})\) is already a huge constraint for modern networks. There has been a lot of work on optimizing the computation of all-pairs shortest-paths for weighted networks [46, 45, 49] but even under strict constraints on the structure of the networks [49] this computation is unfeasible for networks with a large number of nodes, usually needing time \(O(|A|\cdot|V|)\). Thus, in order to use harmonic centrality in practice we need ways of estimating or approximating it.
Though there are few centrality indexes satisfying the three axioms [6], some simple measures can be built that do satisfy them. One way of doing this is by taking the simple product of a density measure, such as the in-degree, with a size measure, such as the number of weakly reachable nodes [6]. While the in-degree is cheap to compute, often stored as an attribute and thus accessible in constant time, size measures have a higher time complexity. For example, the number of reachable nodes, for each node, can be computed from the condensation digraph of strongly connected components, which may give, in the worst case, a total time complexity of \(O(|A|\cdot|V|+|V|^{2})\). In this paper, we explore whether expensive indexes that are sensitive to both density and size, such as the harmonic centrality, may be approximated by cheap local density measures such as the in-degree.
**Our proposal.** Our proposed method, called _QuickCent_, is a modification of QuickEst [24], a heuristic proposed to represent the processes underlying human quantitative estimation. QuickCent can be considered as a generalization of QuickEst, in the sense that, albeit in this work we focus on centrality approximation, it proposes a general procedure to regress a variable on a predictor when some assumptions are met. QuickCent is a very simple heuristic based on sequences of _binary clues_ associated with nodes in a network; the value of a clue is an indicator of the presence or absence of
an attribute signal of greater centrality for a node. The method simply finds the first clue with value 0 (absence), and it outputs an estimate according to this clue. All the clues used in QuickCent are based on the in-degree of the node, thus QuickCent can be seen as a method to regress a variable (harmonic centrality) that correlates with a predictor variable (in-degree) that is cheaper to compute. Another key characteristic of QuickCent is that it is designed to estimate magnitudes distributed according to a power-law [44], which can model a wide range of natural and human-made phenomena. This paper extends previous work by some of the authors, mainly by adding the study of networks defying the heuristic's assumptions and the performance over empirical networks [48].
**Results and future work.** Our method is able to generate accurate estimates even if trained with a small proportion -10%- of the dataset. We compare QuickCent with three standard machine learning algorithms trained with the same predictor variable over synthetic data and some empirical networks. Our results show that QuickCent is comparable in accuracy to the best-competing methods tested, and has the lowest error variance. Moreover, the time cost of QuickCent is in the middle range compared to the other methods, even though we developed a naive version of QuickCent. We also discuss how QuickCent exploits the fact that in some networks, where higher degree nodes are more likely to be found because more paths lead to them, local density measures such as in-degree can be a good proxy for the size of the network region to which a node has access, opening up the possibility of approximating centrality indices based on size, such as harmonic centrality. This insight supports the conjecture that QuickCent may be better suited to information networks, such as the Internet, citation, or scientific collaboration, which can be well approximated by the preferential attachment growth mechanism [4, 29, 59], than to more purely social networks [28, 11], which is an interesting question for future work. Also, working in the future with more general notions of local density [62, 16] may serve to extend the validity of the heuristics for more general networks. The results of this paper are a proof of concept to illustrate the potential of using methods based on simple heuristics to estimate some network measures. Whether or not these heuristics provide a realistic model of human cognition is a broad problem [10] that is out of the scope of this work.
**Structure of the paper.** The rest of this paper is structured as follows. We begin in Section 2 by introducing the general mechanism of QuickCent, while Section 3 presents our concrete implementation. In Section 4, we present the results of our proposal, including the comparison with other machine learning methods on either synthetic or empirical networks. Section 5 gives a final discussion of the results including directions for possible future work.
## 2 The QuickCent Heuristic
In this section, we give a general abstract overview of our proposal, which we call QuickCent. The setting for QuickCent is as follows: the input is a network \(G=(V,A)\) and we want to get an accurate estimate of the value of a centrality function \(f_{C}:V\longrightarrow\mathbb{R}\). That is, for every \(v\in V\), we want to compute a value \(\bar{f}_{v}\) that is an estimation of \(f_{C}(v)\). In our abstract formulation, it does not matter which particular centrality function we are estimating, and the details of the implementation of the heuristic for the particular case of harmonic centrality are given in the next section. We next explain the general abstract idea of the components of QuickCent.
Analogously to QuickEst [24], our QuickCent method relies on vectors of \(n\)_binary clues._ We associate to every node \(v\in V\) a vector \(\vec{x}_{v}=(x_{v}^{1},x_{v}^{2},\ldots,x_{v}^{n})\in\{0,1\}^{n}\). The intuition is that the value of the \(i-\)th component (clue) \(x_{v}^{i}\) is an indicator of the presence (\(x_{v}^{i}=1\)) or absence (\(x_{v}^{i}=0\)) of an attribute signal of greater centrality for node \(v\). Our method also considers the following \(n+1\) sets of nodes:
\[S_{1} = \{v\in V\mid x_{v}^{1}=0\}\] \[S_{i} = \{v\in V\mid x_{v}^{i}=0\text{ and }x_{v}^{i-1}=1\}\ \ (2\leq i\leq n)\] \[S_{n+1} = \{v\in V\mid x_{v}^{n}=1\}\]
That is, \(S_{i}\) corresponds to nodes that do not have the \(i-\)th attribute while having the previous one. For each one of the sets \(S_{i}\), with \(1\leq i\leq n+1\), our method needs a quantity \(\bar{f}_{i}\) which is a summary statistic of the centrality distribution of the nodes in set \(S_{i}\). QuickCent must ensure that successive clues are associated with higher centrality values, thus we will have that
\[\bar{f}_{1}<\bar{f}_{2}<\cdots<\bar{f}_{n}<\bar{f}_{n+1}. \tag{1}\]
With the previous ingredients, the general estimation procedure corresponds to the following simple rule.
**General QuickCent heuristic:**_For node \(v\), we iterate over the \(n\) clues, considering every value \(x_{v}^{i}\). When we find the first \(i\) verifying that \(x_{v}^{i}=0\), we stop and output the value \(\bar{f}_{i}\). If node \(v\) is such that \(x_{v}^{i}=1\) for every \(i\in\{1,\ldots,n\}\), we output \(\bar{f}_{n+1}\)._
**Example 2.1**.: This is only a very simple example to exhibit the working of QuickCent, where we assume complete knowledge of the centrality values of all nodes. Let us consider the following network in Figure 1 of size 25 obtained as a random instance of linear preferential attachment, defined in B. Table 1 displays only the non-zero values of in-degree and harmonic centrality in this network. A reasonable way to aggregate these values is to consider four sets \(S_{i},i=1,2,3,4\), with the following binary clues, \(x_{v}^{i}=1\) (\(i=1,2,3\)) if and only if \(\deg^{\rm in}(v)>d_{i}\), with \(d_{1}=0\), \(d_{2}=3\) and \(d_{3}=4\). With this choice, for simple centrality approximation it is natural to take, for example, the median of harmonic centrality on every set \(S_{i}\) as summary statistics, \(\bar{f}_{1}=0\), \(\bar{f}_{2}=1\), \(\bar{f}_{3}=4.666\) and \(\bar{f}_{4}=15.75\).
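The estimation rule itself amounts to a scan for the first absent attribute; the following sketch applies it with the summary statistics of Example 2.1 (the function and its inputs are illustrative, not the paper's implementation).

```python
# Sketch of the general QuickCent estimation rule: scan a node's binary clues
# and return the summary statistic of the first absent attribute.
def quickcent_estimate(clues, summaries):
    """clues: (x_v^1, ..., x_v^n) in {0,1}; summaries: (f_1, ..., f_{n+1})."""
    assert len(summaries) == len(clues) + 1
    for i, clue in enumerate(clues):
        if clue == 0:                   # first absent attribute -> stop and estimate
            return summaries[i]
    return summaries[-1]                # all attributes present -> largest estimate

# Medians of the four sets S_1..S_4 from Example 2.1.
summaries = [0.0, 1.0, 4.666, 15.75]
print(quickcent_estimate((1, 0, 0), summaries))   # stops at the 2nd clue -> 1.0
print(quickcent_estimate((1, 1, 1), summaries))   # all clues present -> 15.75
```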
QuickCent provides a simple stopping rule: for each node, the search is finalized when the first clue with value 0 is found. Therefore, if our input is a network in which the vast majority of nodes have similar and small centrality values -as would be the case if the centrality were distributed according to a power law- the procedure is likely to stop the search early and give an estimate quickly. In this sense, the heuristic is _frugal_, given that in
Figure 1: **A network randomly generated with linear preferential attachment.**
many cases it can output an estimate without passing over all the clues, or without using all the available information.
Up to this point, QuickCent remains similar to QuickEst. The reader can review the details of QuickEst in the book chapter by Hertwig et al (1999) [24]. The most critical aspects that distinguish QuickCent from QuickEst, as well as a specification of each part of the heuristic, are presented in the next section.
## 3 A QuickCent implementation
In this section, we propose an instantiation of our general QuickCent method, including a way to compute the clues \(x_{v}^{i}\) for every node \(v\) based on its in-degree in Section 3.1, and an efficient way to compute the summary statistic \(\bar{f}_{i}\) of the centrality for every set \(S_{i}\) in Section 3.3. Section 3.2 makes explicit the assumptions on the structure of graphs that QuickCent requires to be a _ecologically rational heuristic_[24], i.e. the proper problem conditions that ensure a successful application of the heuristic, including that the centrality has a power-law distribution. Necessary concepts of the power-law distribution are introduced in A.
### Using the in-degree for the clues
Our approach to compute the binary clues is to employ a proxy variable related to the centrality by means of a monotonic function which ensures that
\begin{table}
\begin{tabular}{c|c c c c c c c c}
**Node** & 1 & 4 & 8 & 10 & 14 & 17 & 19 & 23 \\ \hline
**In-degree** & 9 & 4 & 4 & 1 & 3 & 1 & 1 & 1 \\
**Harmonic** & 15.750 & 4.833 & 4.500 & 1.000 & 3.500 & 1.500 & 1.000 & 1.000 \\
**QC100** & 13.429 & 2.973 & 2.973 & 1.309 & 1.309 & 1.309 & 1.309 & 1.309 \\
**QC70** & 6.531 & 2.197 & 2.197 & 1.214 & 1.214 & 1.214 & 1.214 & 1.214 \\ \end{tabular}
\end{table}
Table 1: **In-degree and harmonic centrality values for each node of the network from Figure 1.** Nodes that do not appear here have a zero in-degree and centrality. The last two rows correspond to QuickCent models described in Example 3.2. The number of decimal places is truncated to three with respect to the source.
Equation (1) holds. The idea is to use a proxy which should be far cheaper to compute than computing the actual centrality value. The proxy variable we chose is the in-degree of the node, that is, the number of neighbors of the node given by incoming arcs of the network. The intuition for this proxy is that greater in-degree will likely be associated with shorter distances, which likely increases the harmonic centrality. The in-degree is one of the most elementary properties of a node, and in many data structures it is stored as an attribute of the node (thus accessible in \(O(1)\) time). The in-degree can itself be considered as a centrality measure [6]. For a node \(v\) we denote by \(\deg^{\rm in}(v)\) its in-degree.
Now, starting from a set of proportions \(\{p_{i}\}_{i=1}^{n}\), where \(0\leq p_{1}\leq\cdots\leq p_{i}\leq p_{i+1}\leq\cdots\leq p_{n}\leq 1\), we can get the respective _quantile degree values_ \(\{d_{i}\}_{i=1}^{n}\). That is, if \(F\) is the cumulative distribution function (CDF) for the in-degree, then \(d_{i}=F^{-1}(p_{i})\) for each \(i=1,\ldots,n\). Then, we define the \(i\)-th clue for node \(v\) as
\[x_{v}^{i}=1\ \ \mbox{if and only if}\ \ \deg^{\rm in}(v)>d_{i}. \tag{2}\]
With this definition, the sets \(S_{i}\) are
\[S_{1} = \{v\in V\mid\deg^{\rm in}(v)\leq d_{1}\},\] \[S_{i} = \{v\in V\mid d_{i-1}<\deg^{\rm in}(v)\leq d_{i}\}\ \ (2\leq i\leq n),\] \[S_{n+1} = \{v\in V\mid d_{n}<\deg^{\rm in}(v)\}.\]
**Example 3.1**.: This type of clues was already used in Example 2.1. In fact, the quantile degree values \(\{0,3,4\}\) used there can be obtained via the inverse of the in-degree CDF applied to the set of proportions \(\{0.68,0.84,0.96\}\).
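In code, the clue construction of Equation (2) reduces to comparing the in-degree against the quantile degree values; the sketch below reuses the thresholds \(\{0,3,4\}\) of Example 3.1 directly, since the exact empirical-quantile convention used to derive them from the proportions is an implementation choice.

```python
# Sketch of the in-degree clues of Eq. (2): x_v^i = 1 iff deg_in(v) > d_i.
# The thresholds are the quantile degree values {0, 3, 4} of Example 3.1.
def clues(deg_in, thresholds):
    """Binary clue vector (x_v^1, ..., x_v^n) for a node with this in-degree."""
    return tuple(int(deg_in > d) for d in thresholds)

thresholds = (0, 3, 4)                  # d_1, d_2, d_3 from Example 3.1

# In-degrees occurring in the network of Figure 1 (see Table 1).
for deg in (0, 1, 3, 4, 9):
    x = clues(deg, thresholds)
    first_zero = next((i for i, c in enumerate(x) if c == 0), len(x))
    print(f"deg_in={deg}  clues={x}  falls in S_{first_zero + 1}")
```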
The final piece to apply QuickCent is to show how to compute the summary statistic \(\bar{f}_{i}\) for every set \(S_{i}\). We propose to compute \(\bar{f}_{i}\) analytically as the median of each \(S_{i}\) based on estimating the parameters of a power-law distribution. This idea is developed in the next subsections and the required background on this distribution is on A.
### Computing the summary statistic via a power-law distribution assumption
Our first assumption is the existence of a non-decreasing function \(g\) relating the in-degree and the centrality1. If there exists a function \(g\) satisfying this
condition, then the quantiles in the centrality side are equivalent to the application of \(g\) on the same degree quantiles [27]. With this result, the quantile proportions can be specified according to characteristics of the centrality distribution, as it is explained in Section 3.3. In practice, and even more so considering that the in-degree is a discrete variable while the centrality is continuous, the object \(g\) is a relation rather than a function. More formally, let \(\{C_{i}\}_{i=1}^{n}\) be the set of quantile centrality values associated to the proportions \(\{p_{i}\}_{i=1}^{n}\) that were used to compute the quantile degree values \(\{d_{i}\}_{i=1}^{n}\) (see Equation (2)). Given the above assumption about \(g\), we can rewrite the sets \(S_{i}\) as follows:
\[\begin{array}{rcl}S_{1}&=&\{v\in V\mid g(\deg^{\rm in}(v))\leq C_{1}\}\\ S_{i}&=&\{v\in V\mid C_{i-1}<g(\deg^{\rm in}(v))\leq C_{i}\}\;\;\;(2\leq i\leq n )\\ S_{n+1}&=&\{v\in V\mid C_{n}<g(\deg^{\rm in}(v))\}\end{array}\]
Our second assumption is that the centrality index that we want to estimate follows a power-law distribution. We add this assumption motivated by the argument that QuickEst would have a _negative bias_[24], in the sense that it is a negative clue (or absent attribute) that stops this heuristic. Thus, a distribution such as the power law where most values are small (with mostly negative clues) and only a few high values exist (with mostly positive clues), would provide an optimal context for the performance of QuickEst, which is consistent with the finding that this heuristic predicts well the estimation behavior by some people on this kind of data [61]. Moreover, power-laws have a pervasive presence in many natural phenomena and magnitudes produced by humans too [44], although there has been some recent controversy on this topic [11, 3]. As we next show, our assumption of power-law distribution will allow us to use some particular properties to approximate the values \(\{C_{i}\}_{i=1}^{n}\) used in the rewriting above, and then use them to efficiently compute the statistics \(\{\bar{f}_{i}\}_{i=1}^{n+1}\) for every set \(S_{i}\). In Section 4.3, we show some experiments to argue that these two assumptions of the heuristic are key to ensure its competitive accuracy.
### Putting all the pieces together
Let \(D=(V,A)\) be our input network, and recall that we are assuming that the centrality that we want to estimate for \(D\) follows a power-law distribution. Let \(\hat{\alpha}\) be the estimate of the exponent parameter of the distribution (given by Equation (8)), and \(\hat{x}_{\rm min}\) be the estimate of the lower limit of the
distribution (given by the minimization of the functional of (9)), which have been computed by considering a set of \(m\) nodes in \(V\) and their (real) centrality values. With all these pieces, we can compute the values \(\{C_{i}\}_{i=1}^{n}\) associated to the proportions \(\{p_{i}\}_{i=1}^{n}\) easily by using the equation
\[\int_{\hat{x}_{\min}}^{C_{i}}Kx^{-\hat{\alpha}}dx=p_{i}\]
from which we get that
\[C_{i}=\hat{x}_{\min}\cdot(1-p_{i})^{\frac{1}{1-\hat{\alpha}}}.\]
Now, in order to compute the summary statistics \(\{\bar{f}_{i}\}_{i=1}^{n+1}\), we will use the median of every set \(S_{i}\). This median can be computed as follows. Given that we rewrote \(S_{i}\) as the set of centrality values \(x\) such that \(C_{i-1}\leq x\leq C_{i}\), then the median \(\mathit{md}_{i}\) of \(S_{i}\) must verify
\[\int_{\mathit{md}_{i}}^{C_{i}}Kx^{-\hat{\alpha}}dx=\frac{1}{2}\int_{C_{i-1}}^{ C_{i}}Kx^{-\hat{\alpha}}dx\]
from which we obtain that
\[\mathit{md}_{i}=\left(\frac{(C_{i-1})^{1-\hat{\alpha}}+(C_{i})^{1-\hat{\alpha }}}{2}\right)^{\frac{1}{1-\hat{\alpha}}}=\bar{f}_{i}\ \ \ (2\leq i\leq n) \tag{3}\]
Moreover, since the extreme points of the distribution are \(x_{\min}\) (estimated as \(\hat{x}_{\min}\)) and \(\infty\), the two remaining statistics \(\bar{f}_{1}\) and \(\bar{f}_{n+1}\) are computed as
\[\bar{f}_{1}=\left(\frac{(C_{1})^{1-\hat{\alpha}}+(\hat{x}_{\min})^{1-\hat{ \alpha}}}{2}\right)^{\frac{1}{1-\hat{\alpha}}} \tag{4}\]
and
\[\bar{f}_{n+1}=2^{\frac{1}{\hat{\alpha}-1}}\cdot C_{n} \tag{5}\]
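These closed-form expressions translate directly into code. The sketch below (again our own Python illustration rather than the released R code) takes the estimates \(\hat{\alpha}\) and \(\hat{x}_{\min}\) together with the proportions vector and returns the quantile centrality values \(C_{i}\) and the medians \(\bar{f}_{1},\dots,\bar{f}_{n+1}\) of Equations (3)-(5); the optional empirical-median bucket below \(\hat{x}_{\min}\), discussed later in this section, is omitted here.

```python
import numpy as np

def quickcent_medians(alpha_hat, xmin_hat, proportions):
    """Quantile centralities C_i and per-interval power-law medians (Eqs. (3)-(5))."""
    p = np.asarray(proportions, dtype=float)
    e = 1.0 - alpha_hat
    C = xmin_hat * (1.0 - p) ** (1.0 / e)                 # C_i = x_min (1 - p_i)^{1/(1-alpha)}
    lower = np.concatenate(([xmin_hat], C[:-1]))          # interval lower endpoints: x_min, C_1, ..., C_{n-1}
    f_inner = ((lower ** e + C ** e) / 2.0) ** (1.0 / e)  # f_1, ..., f_n  (Eqs. (3) and (4))
    f_last = 2.0 ** (1.0 / (alpha_hat - 1.0)) * C[-1]     # f_{n+1} for the unbounded interval (Eq. (5))
    return C, np.append(f_inner, f_last)
```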
We stress that with these formulas we compute the summary statistic \(\bar{f}_{i}\) for each set \(S_{i}\) just by knowing the values \(\{C_{i}\}_{i=1}^{n}\), which are computed by using only the values \(\hat{\alpha}\), \(\hat{x}_{\min}\), and the underlying vector of proportions \(\{p_{i}\}_{i=1}^{n}\). This last element was chosen as the quantile probability values that produced equidistant points on the range of \(\{\log(h(v))|v\in V,h(v)\geq\hat{x}_{\min}\}\), that is, the set of vertices where the power-law is well defined for the harmonic centrality.
Logarithmic binning is chosen so that the tail of the power-law distribution is sampled more finely. The length \(n\) of the vector of proportions required to construct the clues (see Equation (2)) was chosen after pilot testing on each type of distribution; see Appendix D and Sections 4.3.2 and 4.4 for more details. The choice of this vector is a way of adapting QuickCent to distinct centrality distributions. Possible improvements achievable by tuning this vector may be addressed in future work.
The last element we introduce in our procedure is the use of an additional quantile centrality value \(C_{0}=\hat{x}_{\min}\), with the goal of covering the centrality values \(h(v)<\hat{x}_{\min}\) with greater accuracy. Since the power-law distribution is no longer valid on this range of the vertex set, the representative statistic \(\bar{f}_{0}\) we use is simply the empirical median of the harmonic centrality over the set of nodes \(v\) such that \(\deg^{\text{in}}(v)\leq g^{-1}(\hat{x}_{\min})\). With this element, if we use a proportions vector \(\{p_{i}\}_{i=1}^{n}\) of length \(n\), the total number of medians \(\{\bar{f}_{i}\}_{i=0}^{n+1}\) is \(n+2\). In the code provided to produce the analyses of this paper [47], this element is optional (it is activated by setting **rm**=True or **rms**=True). All the results in this paper were obtained with this additional centrality quantile and median.
**Example 3.2**.: We continue revisiting Example 2.1. If we fix \(x_{\min}=1\), the exponent \(\hat{\alpha}(1)\) that fits the complete distribution of centrality values, by using Equation (8), is \(2.067\). The set of proportions shown in Example 3.1 comes from evaluating the centrality CDF on the set of points \(\{1,2.506,6.283\}\), which correspond to \(x_{\min}\) and two points (\(n=2\)) that in logarithmic scale turn out to be equidistant to the minimum and maximum of the set \(\{\log(h(v))|v\in V,h(v)\geq\hat{x}_{\min}\}\), the (log) centrality domain of the given network where the power law is valid. From these parameters and the expressions shown in this section, one can get the medians required by QuickCent to make estimates. These can be examined in Table 1, corresponding to the model **QC100**, which has a MAE (mean absolute error) over the whole digraph of \(3.606e-01\). A more interesting case may be computed when \(\hat{\alpha}(1)\) is derived from a random sample of the centrality distribution. For example, by taking a sample without replacement of size \(70\%\) one may get an exponent estimate of \(\hat{\alpha}(1)=2.477\), which has a MAE of \(6.948e-01\) and QuickCent estimates that can be examined in the model **QC70** in Table 1.
This completes all the ingredients for our instantiation of QuickCent, as we have the values for the clues \((x_{v}^{1},x_{v}^{2},\ldots,x_{v}^{n})\) computed from the in-degree of the node \(v\), plus the values \(\{d_{i}\}_{i=1}^{n}\) as shown in Equation (2), and also the
summary statistics \(\{\bar{f_{i}}\}_{i=0}^{n+1}\) for each set \(S_{i}\), which are the two pieces needed to apply the heuristic.
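A compact end-to-end sketch, reusing the helper functions quantile_degree_values, bucket and quickcent_medians sketched above, might then look as follows. It is a Python illustration, not a substitute for the released R code [47]; in particular, we assume that Equation (8) is the standard maximum-likelihood estimator \(\hat{\alpha}=1+m\left(\sum_{j}\ln(x_{j}/\hat{x}_{\min})\right)^{-1}\) for a continuous power law, and we fix \(\hat{x}_{\min}\) by hand (as done in the synthetic experiments below) instead of minimizing the functional of (9).

```python
import numpy as np

def alpha_mle(centrality_sample, xmin_hat):
    """Continuous power-law exponent estimate (assumed to match Equation (8))."""
    x = np.asarray([v for v in centrality_sample if v >= xmin_hat], dtype=float)
    return 1.0 + len(x) / np.sum(np.log(x / xmin_hat))

class QuickCentSketch:
    def __init__(self, in_degrees, centrality_sample, proportions, xmin_hat=1.0):
        # clues from the in-degree quantiles (Equation (2))
        self.d = quantile_degree_values(in_degrees, proportions)
        # summary statistics from the fitted power law (Equations (3)-(5))
        alpha_hat = alpha_mle(centrality_sample, xmin_hat)
        _, self.medians = quickcent_medians(alpha_hat, xmin_hat, proportions)

    def estimate(self, deg_in):
        """Return the median of the set S_i selected by the node's in-degree."""
        return self.medians[bucket(deg_in, self.d)]
```

Here centrality_sample stands for the real harmonic centralities of the \(m\) training nodes, while in_degrees may be taken over the whole digraph, mirroring Section 3.1, where the quantile degree values come from the in-degree CDF.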
## 4 Results
In the present section, we show the results of applying _QuickCent_ either on synthetic data or on some empirical networks, and we compare it with alternative procedures for centrality estimation. We first show the comparison of QuickCent with other methods applied on synthetic networks, considering accuracy and time measurements in Sections 4.1 and 4.2. The synthetic network model corresponds to the preferential attachment (PA) growth model introduced in Appendix B. Section 4.3 reviews the output of QuickCent on null network models where its accuracy is not as good relative to other methods, with the aim of showing that the two assumptions of QuickCent (Section 3.2) are jointly required as a necessary condition for the competitive performance of this heuristic. The same benchmark presented for the synthetic case was applied to the empirical datasets, and the results are shown in Section 4.4. The experiments checking the fulfillment of the QuickCent assumptions by the different networks are shown in Appendices C, E, F and G. In all our experiments we consider harmonic centrality as the target to estimate. The number of nodes chosen for the synthetic network experiments is \(10,000\), and \(1000\) for the null models, with the aim of accelerating the bootstrap computations used to check the assumptions of QuickCent on each network. Similar sizes were sought when choosing the tested empirical networks. These are modest sizes compared with modern networks; we select them because we need to compute the exact value of the harmonic centrality for all nodes in the graph, in order to compare our estimates with the real values, and we regard them as sufficient for a first assessment of the heuristic.
#### 4.0.1 Experiment specifications
The norm that we employ to summarize the error committed on each node is the mean absolute error (MAE). This measure is preferable to other error norms, such as the root mean squared error, because the MAE is expressed in the same units as the quantity under consideration, in this case the harmonic centrality. Moreover, the MAE can be understood as the _Minkowski_ loss with \(\mathcal{L}_{1}\) norm for the regression of the variable of interest, whose minimizer is known to be the conditional median [5]; it is therefore natural to use the MAE when the summary statistic chosen is the median of each centrality interval. Finally, all the experiments were performed in the R language [51], with the igraph library [15] for graph manipulation and the ggplot2 library [63] to produce the plots.
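The connection between the \(\mathcal{L}_{1}\) loss and the median mentioned above can be checked numerically; the short sketch below (our own illustration) verifies on a heavy-tailed sample that the mean absolute deviation around the median is no larger than around the mean.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.pareto(1.5, size=10_000) + 1.0   # heavy-tailed sample with lower bound 1
mae_median = np.mean(np.abs(x - np.median(x)))
mae_mean = np.mean(np.abs(x - np.mean(x)))
assert mae_median <= mae_mean            # the median minimizes the mean absolute deviation
```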
### Comparison with other methods
In this section, we compare the performance of known regression methods with QuickCent. This exercise allows us to evaluate the potential uses and applications of our proposal, specifically whether it can deliver reasonable estimates relative to alternative solutions for the same task. This is not a trivial matter, considering that QuickCent is designed to do little computational work for parameter estimation and output production, possibly with limited training data, while common alternative machine learning (ML) methods generally perform more complex computations. For a fair comparison, all other methods use only the in-degree as an explanatory variable. Strictly speaking, QuickCent produces its estimates from the binary clues alone, that is, it uses the in-degree only through the threshold comparisons of Equation (2).
The competing methods considered are linear regression (denoted by L in plots), a regression tree (T) [50, 64] and a neural network (NN) [53], which are representative of some of the best-known machine learning algorithms. We used _Weka_[64] and the _RWeka_ R interface [26] to implement T and NN with default parameters. In the literature there is previous work specifically tailored to centrality estimation using ML methods, but for centrality indices other than harmonic centrality. In particular, Brandes and Pich study estimators for _closeness_ and _betweenness_ centrality [8]. It would be interesting to compare our method with theirs, but this would amount to adapting their method to harmonic centrality. We leave this adaptation and further comparison as future work.
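For readers who want to reproduce the flavor of this benchmark outside of R, the following rough Python sketch uses networkx and scikit-learn stand-ins (default scikit-learn models rather than the Weka defaults of the paper, and a generic scale-free digraph generator instead of the PA model of Appendix B), regressing the harmonic centrality on the in-degree alone and reporting the MAE over the whole digraph.

```python
import networkx as nx
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor

G = nx.DiGraph(nx.scale_free_graph(2000, seed=1))      # stand-in for a PA digraph
nodes = list(G)
hc = nx.harmonic_centrality(G)                          # exact target values
X = np.array([[G.in_degree(v)] for v in nodes], dtype=float)
y = np.array([hc[v] for v in nodes])

rng = np.random.default_rng(1)
train = rng.choice(len(nodes), size=len(nodes) // 10, replace=False)   # 10 % training size
models = {"L": LinearRegression(),
          "T": DecisionTreeRegressor(random_state=0),
          "NN": MLPRegressor(max_iter=2000, random_state=0)}
for name, model in models.items():
    model.fit(X[train], y[train])
    print(name, mean_absolute_error(y, model.predict(X)))              # test set = whole digraph
```

The QuickCent sketch from Section 3 could be added to the same loop by training it on the centralities of the sampled nodes.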
The results of this experiment are shown in Figure 2 with a training size of 10 % and Figure 3 with a training size of 100 %, where the test set is always the entire digraph. The two training sizes are studied with the goal of assessing the impact of scarce data on the distinct estimation methods, by contrasting a full versus a scarce data scenario. In the figures, it can be seen that QuickCent (QC) produces the lowest MAE errors of all the methods,
either in terms of the IQR length, or the mean and outliers, for PA exponents 1 and 0.5. As noted in Appendix C, these are the cases where the centrality distribution is better fitted by a power-law model, but it is nonetheless a remarkable result considering the error committed by fixing \(x_{\min}=1\) (see Appendix D). Examining the MAE medians of these simulations in Table 2, one sees that the QC median is at the level of the most competitive ML methods in the simulations, NN and T, sometimes being the best of the three depending on the experiment. However, for exponents 1 and 0.5, where the power law is present, the advantage of QC is that the upper quantiles and even the outliers remain low compared to the other methods (see Figure 2 and Figure 3).
Thus, the main takeaway is that QC, when its assumptions are fulfilled, is able to produce estimates at the same level as much more complex ML methods, with likely lower variance. This fact is consistent with the argument given by Brighton and Gigerenzer [9] claiming that the benefits of simple heuristics are largely due to their low variance. The argument relies on the decomposition of the (mean squared) error into _bias_, the difference between the average prediction over all data sets and the desired regression function, and _variance_, the extent to which the estimates for individual datasets vary around their average [5]. Thus, along the range of the bias-variance trade-off of models, simple heuristics are relatively rigid models with high bias and low variance, avoiding the potential overfitting of more complex models.
By examining the contrast of the outliers between Figure 2 and Figure 3, it can be noticed that QC suffers the least impact from scarce data. In the case of L and NN, they show a similar pattern for power-law centralities (PA exponents 1 and 0.5). They have medians that are lower for the 10 %
\begin{table}
\begin{tabular}{r|c c c c c c c c}
**PA** \(\beta\) & **L10** & **NN10** & **QC10** & **T10** & **L100** & **NN100** & **QC100** & **T100** \\ \hline
1 & 2.341 & 1.194 & 1.040 & 1.422 & 5.711 & 1.242 & 1.009 & 1.560 \\
0.5 & 3.249 & 1.699 & 1.576 & 1.561 & 4.704 & 3.300 & 1.578 & 1.571 \\
1.5 & 0.079 & 0.991 & 0.996 & 1.009 & 0.006 & 0.018 & 0.997 & 1.986 \\ \end{tabular}
\end{table}
Table 2: **Medians of the MAE distribution across 1000 digraphs.** These estimates are computed from the same simulations displayed in Figure 2 and Figure 3. The suffix of each method abbreviation corresponds to the training size used (as a percentage). The exponent \(\beta\) is the exponent of the preferential attachment growth model (Appendix B). The number of decimal places is truncated to three with respect to the source.
Figure 2: **Benchmark with other ML methods for different exponents of PA digraph instances and 10 \(\%\) of training size.** For each regression method, there is a boxplot showing the MAE distribution. Each boxplot goes from the \(25-\)th percentile to the \(75-\)th percentile, with a length known as the _inter-quartile range_ (IQR). The line inside the box indicates the median, and the rhombus indicates the mean. The whiskers start from the edge of the box and cover until the furthest point within 1.5 times the IQR. Any data point beyond the whisker ends is considered an outlier, and it is drawn as a dot. For display purposes, the vertical limit of the plots has been set to 10, since the highest MAE outliers of NN or L, depending on the PA exponent, blur the details of the model performance.
Figure 3: **Benchmark with other ML methods for different exponents of PA digraph instances and 100 \(\%\) of training size.** For each regression method there is a boxplot showing the MAE distribution. Each boxplot goes from the \(25-\)th percentile to the \(75-\)th percentile, with a length known as the _inter-quartile range_ (IQR). The line inside the box indicates the median, and the rhombus indicates the mean. The whiskers start from the edge of the box and cover until the furthest point within 1.5 times the IQR. Any data point beyond the whisker ends is considered an outlier, and it is drawn as a dot. For display reasons, the vertical limit of the two first plots was set to 10, since the highest MAE outliers of NN or L, depending on the PA exponent, blur the details of the model performance.
training size than those obtained with the whole network. Since there are only a few large values in the entire graph, when the training sample gets smaller, the sample values have a better linear fit, in comparison to larger samples. Therefore, a linear model adjusted to some small sample provides a good fit to the small-to-moderate centrality size nodes, which is the case for most of the nodes. This also explains the presence of higher outliers in the 10 % training size. On the other hand, the behavior of the regression tree is more similar to that of QuickCent.
In terms of elapsed time, QuickCent ran at an intermediate cost relative to the other methods, and significantly faster than the neural network, confirming that it is indeed a quick procedure.
Based on these results, we conjecture that QuickCent has the lowest time complexity among the tested methods. Among the computations that QuickCent performs, the most expensive ones correspond to the selection problem of finding the median of the lowest centrality values (Section 3.3), plus the quantile degree values (Section 3.1). The procedure used to compute the proportions vector (Section 3.3), which is not considered in the elapsed-time measurements, also relies on solving a selection problem (finding the maximum of the set of centrality values) and on sorting the set of centrality values (to find the proportions). These problems may be solved in linear time [14] in the input size, that is, linear in the network size \(\mathcal{O}(|V|)\). In contrast to the highly optimized R implementations of L, T, and NN, we considered only a naive implementation of QuickCent without, for example, architectural optimizations. With such improvements, for instance the use of more appropriate data structures, these times could be further reduced. We leave the construction of an optimized implementation of QuickCent as future work.
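For instance, the median-selection step can be carried out without fully sorting; a small sketch (our own illustration) using numpy's introselect-based partition, which runs in average linear time, is:

```python
import numpy as np

def selection_median(values):
    """Median via the selection problem rather than a full sort."""
    v = np.asarray(values, dtype=float)
    k = len(v) // 2
    if len(v) % 2:                        # odd length: middle order statistic
        return np.partition(v, k)[k]
    part = np.partition(v, [k - 1, k])    # both required order statistics in one call
    return 0.5 * (part[k - 1] + part[k])
```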
### Networks defying QuickCent assumptions
Up to this point, we have mainly seen examples of networks where QuickCent exhibits quite good performance compared to competing regression methods. In order to give a full account of QuickCent's capabilities and its _ecological rationality_[24], one should also have an idea of the networks where its accuracy deteriorates. To accomplish this, we look at the two assumptions of QuickCent, namely the power-law distribution of the centrality and its monotonic map with the in-degree, and show that they are jointly required as a necessary condition for the competitive performance of the heuristic. Our approach is to work with two null network models, each negating one of the two assumptions while approximately preserving the other, which together provide strong evidence for this claim.
#### 4.3.1 Response to the loss of the monotonic map
Our first null model is a scale-free network built by preferential attachment, just as in the previous experiments, but after a _degree-preserving randomization_[41] of the initial network, which is simply a random reshuffling of arcs that keeps the in- and out-degree of each node constant. The aim is, on the one hand, to break the structure of degree correlations found in preferential
attachment networks [38, 65], which may be a factor favoring a monotonic relationship between in-degree and harmonic centrality, and on the other hand, to maintain a power-law distribution for the harmonic centrality by preserving the degree sequence of nodes in the network. This last feature does not by itself ensure that the centrality distribution is a power law, since randomization also affects this property. Appendix E displays the results of the experiments performed to check the assumptions of QuickCent on the randomized networks, showing that these networks satisfy them. Finally, Figure 4 shows the impact of randomization on each regression method. This is an experiment where 1000 PA networks (exponent 1) of size 1000 were created, and the four ML methods used in Section 4.1 were trained on each network with samples of size 30 % of the total node set, using only the in-degree as the predictor variable for the harmonic centrality. The same procedure was run on each network after applying degree-preserving randomization to 10000 pairs of arcs. The plot shows that the randomization has a similar impact on the performance loss of each method, which is an expected result: the only source of information used by each method, the in-degree, becomes less reliable owing to its weaker association with harmonic centrality after the arc randomization. Since QuickCent was the most accurate of the methods tested on the initial PA networks, it is also one of the methods most affected by the randomization.
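A minimal sketch of such a degree-preserving randomization for digraphs (our own illustration on a networkx DiGraph; the acceptance rules and stopping condition are simplified) repeatedly exchanges the heads of two randomly chosen arcs, which leaves every in- and out-degree unchanged:

```python
import random
import networkx as nx

def degree_preserving_randomization(G: nx.DiGraph, nswap: int, seed: int = 0) -> nx.DiGraph:
    """Rewire (a->b), (c->d) into (a->d), (c->b) while preserving all in/out-degrees."""
    rng = random.Random(seed)
    done, attempts = 0, 0
    while done < nswap and attempts < 100 * nswap:
        attempts += 1
        (a, b), (c, d) = rng.sample(list(G.edges()), 2)
        # require four distinct endpoints and no pre-existing arcs, to avoid
        # self-loops and parallel arcs after the swap
        if len({a, b, c, d}) < 4 or G.has_edge(a, d) or G.has_edge(c, b):
            continue
        G.remove_edge(a, b); G.remove_edge(c, d)
        G.add_edge(a, d); G.add_edge(c, b)
        done += 1
    return G
```

The experiment above corresponds roughly to calling degree_preserving_randomization(G, 10_000) on each size-1000 PA digraph.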
#### 4.3.2 Response to the loss of the power-law distribution of centrality
Our second null model is the directed Erdos-Renyi (ER) graph model [20, 7, 30], chosen with the aim of gauging the impact of losing the power-law distribution of the centrality while maintaining the monotonic map from in-degree to centrality. This model is known to have a Poisson degree distribution [7], a behavior very different from a heavy-tailed distribution, and according to our simulations (see Appendix F) it turns out to be ideal for our purposes. We choose connection probabilities that ensure a unimodal distribution for the centrality and a strong correlation with the in-degree, i.e., a mean in-degree greater than 1 [30]. In order to obtain a fair control for the performance of QuickCent, we have taken two empirical digraphs that satisfy the given condition on the mean in-degree, with node sets of size near 1000, to accelerate the bootstrap p-value computations. The networks are extracted from the
Figure 4: **Effect of randomization on different ML methods using 30 \(\%\) of the training size.** Each boxplot group is labeled with the name of the ML method, a dot, and the type of network on which the estimates are made (‘PL’ for the initial PA network, ‘RPL’ for the network after randomization). QC8 corresponds to QuickCent with a proportion vector of length 8, and analogously for QC1. For each regression method, there is a boxplot representing the MAE distribution. Each boxplot goes from the \(25-\)th percentile to the \(75-\)th percentile, with a length known as the _inter-quartile range_ (IQR). The line inside the box indicates the median, and the rhombus indicates the mean. The whiskers start from the edge of the box and extend to the furthest point within 1.5 times the IQR. Any data point beyond the whisker ends is considered an outlier, and it is drawn as a dot. For display reasons, the vertical limit of the plots was set at 10, since the highest MAE outliers of NN blur the details of the model performance.
KONECT database [34]2, and their meta-data is shown in Table 4. The fields \(N\) and \(\mathrm{deg}^{\mathrm{in}}\) given in this table are used to determine the network size and the connection probability used to instantiate the respective ER digraphs from the identity \(\mathrm{deg}^{\mathrm{in}}=p\cdot(N-1)\).
Footnote 2: [http://konect.cc/](http://konect.cc/)
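Instantiating the ER controls from the Table 4 descriptors is essentially a one-liner per network; a quick Python sketch (our own illustration, with the values copied from Table 4) is:

```python
import networkx as nx

controls = {"moreno_blogs": (990, 19.21), "subelj_jung-j": (2208, 62.81)}  # (N, mean in-degree)
for name, (N, mean_in_deg) in controls.items():
    p = mean_in_deg / (N - 1)                            # from deg_in = p * (N - 1)
    er = nx.gnp_random_graph(N, p, directed=True, seed=7)
    realized = sum(d for _, d in er.in_degree()) / N     # should be close to mean_in_deg
    print(name, round(p, 4), round(realized, 2))
```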
Finally, in Figure 5 we can see the results of an experiment analogous to the one with the first null model: there are 1000 iterations where the same four ML methods were trained on each network (two ER graphs with the two connection probabilities and sizes given by the two empirical/control networks), with random samples of size 30 % of the total node set, using only the in-degree. Since the unimodal distribution of ER digraphs is very different from a power law, in this experiment we have used the parameter \(\hat{x}_{\mathrm{min}}\) estimated by the method reviewed in Appendix A, as well as the search-space restriction explained there, instead of a fixed lower limit as in the previous experiments. By comparing the two plots in Figure 5, one can observe a noticeable difference in the behavior of QuickCent in the two cases. While QuickCent achieves average accuracy relative to the other regression methods on the control networks, whose centrality distributions are more or less close to heavy-tailed, on ER digraphs with similar characteristics to the controls it consistently performs worse than the other methods. The performance of QuickCent in these plots corresponds to the best possible for each network as a function of the length of the proportions vector, denoted by the number after 'QC'. This output is also consistent with
\begin{table}
\begin{tabular}{c|c c c c c}
**Name** & **N** & \(\mathrm{deg}^{\mathrm{in}}\) & **Corr** & **Arc meaning** & **Ref.** \\ \hline
**moreno\_blogs** & 990 & 19.21 & 0.872 & Blog hyperlink & [1] \\
**subelj\_jung-j** & 2208 & 62.81 & 0.808 & Software Class dependency & [56] \\ \end{tabular}
\end{table}
Table 4: **General description of the two empirical control networks.** The fields in the table are the dataset name, the number of nodes with positive in-degree (N), the mean in-degree of nodes with positive in-degree (\(\mathrm{deg}^{\mathrm{in}}\)), the Spearman correlation between the positive values of in-degree and harmonic centrality (Corr), the meaning of the arcs, and the original reference. The name corresponds to the _Internal name_ field in the KONECT database. To access the site to download a dataset, append the internal name to the link http://konect.cc/networks/.
the difference in p-values of the power-law fit between the control networks and 1000 instances of the ER models, reported in Table 10. These results reveal the critical importance of the centrality distribution of the data set for the proper functioning of QuickCent.
On the other hand, all of the methods exhibit better performance on the ER digraphs than on the corresponding control network, probably due to less heterogeneity in the values to be predicted on the former. Finally, as a side note for working on empirical network datasets, for general networks it should be more accurate to use the fitted value of \(\hat{x}_{\min}\) than a fixed value, although this depends on the variability range existing on the values less than \(\hat{x}_{\min}\), which may introduce potentially large contributions to the estimation error. Observe that there is an additional computational overhead due to the calculation of \(\hat{x}_{\min}\).
### Experiments with empirical networks
In this section, we present the performance of QuickCent on some real network datasets of similar size to the synthetic networks already tested, again in comparison to other machine learning methods. These results are only a first glimpse of the challenges this heuristic may encounter when dealing with real datasets, and they should be regarded as a proof of concept.
We selected five datasets, all of them extracted from the KONECT network database [34]3, a public online database of more than one thousand network datasets. The criteria for selecting the networks were that, besides being similar in size to our synthetic networks (10,000 nodes), each network had a distinct meaning, i.e., the networks represent distinct systems from different contexts. General descriptors of these datasets are displayed in Table 5. There, we can see that we have selected: a social network of friendships among students created from a survey [42], a co-authorship network from the _astrophysics_ section of arXiv from 1995 to 1999 [43], a citation network of publications from DBLP [37], a network of connected Gnutella hosts from 2002 [52], and the communication network of messages among users of the Galician Wikipedia [57]. See Appendix G for the experiments verifying the QuickCent assumptions on these empirical datasets.
Footnote 3: [http://konect.cc/](http://konect.cc/)
We end this section with several plots in Figure 6 showing the performance of QuickCent compared to the same ML algorithms from Section 4.1,
Figure 5: **Effect of centrality distribution on different ML methods using 30 \(\%\) of training size.** Each boxplot group is labeled with the name of the ML method, a dot, and the type of network on which the estimates are made (‘mb’ for moreno_blogs, ‘sj’ for subelj_jung-j, ‘ERmb’ for the ER digraph created with the parameters of moreno_blogs, and analogously for ‘ERsj’). The number after ‘QC’ is the length of the vector of proportions used by that method, corresponding to the best accuracy for the respective network. For each regression method, there is a boxplot representing the MAE distribution. Each boxplot goes from the \(25-\)th percentile to the \(75-\)th percentile, with a length known as the _inter-quartile range_ (IQR). The line inside the box indicates the median, and the rhombus indicates the mean. The whiskers start from the edge of the box and extend to the furthest point within 1.5 times the IQR. Any data point beyond the whisker ends is considered an outlier, and it is drawn as a dot. For display reasons, the vertical limit of the control network plot has been set at 150, as the highest MAE outliers of NN blur the details of the model performance.
Figure 6: **Performance of QuickCent against known ML algorithms on each dataset.** The competing algorithms are the same as in Section 4.1, that is, a linear regression (L), a neural network (NN), and a regression tree (T), all with default parameters. Each point from each boxplot is the MAE of the respective model trained with a random sample of nodes of size 10 % of the total, and all the samples come from the same respective network. The white rhombus in each boxplot is the mean of the distribution.
all of them trained with samples of size equal to 10 % of each dataset. The smallest error dispersion exhibited by QuickCent on the synthetic datasets is also observed in this case. The QC performance, although not as good as on the synthetic datasets, is competitive with the other ML methods, and even better than a substantial number of instances of the neural network for the dimacs10-astro-ph and wiki_talk_gl datasets. These results are obtained with a proportions vector of length 2, which delivers the best performance found among several vector lengths tested, in contrast to the larger length of 8 used in the synthetic case. These two differences with respect to the synthetic case support the hypothesis that the overall goodness of the power-law fit found by QC is better for the synthetic distributions than for the empirical ones. Finally, it is noteworthy that either QC or T on these two datasets, and the regression tree in general across all the datasets, obtain the best accuracy, beating more flexible methods such as NN, even though these methods can only produce a limited number of distinct output values.
## 5 Discussion and future work
In this section, we analyze the results presented in the last section. We start with a summary of the results, and then the discussion is mainly centered on the type of network patterns on which the performance of QuickCent is
\begin{table}
\begin{tabular}{c|c c c c}
**Name** & **Dir.** & **N** & **m** & **Edge meaning** & **Reference** \\ \hline
**moreno\_health** & Y & 2539 & 12969 & Friendship & [42] \\
**dimacs10-astro-ph** & N & 16046 & 121251 & Co-authorship & [43] \\
**dblp-cite** & Y & 12590 & 49759 & Citation & [37] \\
**p2p-Gnutella04** & Y & 10876 & 39994 & Host Connection & [52] \\
**wiki\_talk\_gl** & Y & 8097 & 63809 & Message & [57] \\ \end{tabular}
\end{table}
Table 5: **General description of the five empirical network datasets.** The fields in the table are the dataset name, whether the network is directed, the number of nodes (N), the number of edges (m), the meaning of the edges, and the original reference. The name corresponds to the _Internal name_ field in the KONECT database. To access the site to download a dataset, append the internal name to the link http://konect.cc/networks/.
based. We end up with a series of ideas for future work and concluding remarks.
**Summary of results.** The results presented in Section 4 show that QuickCent can be a competent alternative to perform a regression on a power-law centrality variable. The method generates accurate and low-variance estimates even when trained on a small (10%) proportion of the dataset, comparable in precision to some more advanced machine learning algorithms. Its accuracy is available at a time cost that is significantly better than that of one of the machine learning methods tested, namely, the neural network. In this sense, QuickCent is an example of a simple heuristic based on exploiting regularities present in the data, which can be a competitive alternative to more computationally intensive methods.
**The patterns on which QuickCent relies.** An interesting question is why our initial attempt to approximate an expensive centrality, sensitive to size and density, by a cheap density measure is successful, at least for the network cases tested. The same question framed in terms of the QuickCent method would be why the two method assumptions, the power-law distribution of harmonic centrality and the strong correlation between harmonic centrality and in-degree, hold for power-law networks, or more specifically for some preferential attachment networks. It was already mentioned in the text that, while the in-degree and PageRank centrality of a digraph obey a power-law with the same exponent [39], we have no knowledge of results describing the distribution of harmonic centrality on digraphs. A possible intuition for the scale-free behavior of harmonic centrality observed on PA networks (Figure 8) may come from the motivation for the harmonic centrality given by Marchiori and Latora [40, 35]. The reciprocal shortest-path distances are used to informally define the efficiency of communication between nodes in a network. Therefore, it is reasonable to think that the scale-free degree distribution induces an analogous distribution in the efficiency with which each node receives the information sent across the network.
The correlation and the monotonic relationship between harmonic centrality and in-degree are strongly favored by the network generation mechanism. There is converging evidence showing that preferential attachment, which in its usual formulation requires global information about the current degree distribution, can be the outcome of link-creation processes guided by the local network structure, such as a random walk adding new links to
neighbors of connected nodes, or in simple words, meeting friends of friends [28, 59]. The reason is that the mechanism of choosing a neighbor of a connected node makes higher-degree nodes more likely to be chosen by the random walk, which in turn makes more paths lead to them. That is, the local density could indeed reflect the access to larger parts of the network. Of course, preferential attachment is not the only mechanism capable of producing scale-free networks [33, 66, 18], and the distinct generative mechanisms may or may not engender a stronger relationship between density and size in the resulting network. This insight may be the reason why the monotonic relationship between harmonic centrality and in-degree is more apparent in the preferential attachment model than in some of the empirical networks, as Figure 7 shows.
Explaining why and when fast and frugal heuristics work is an active research problem [9, 25]. This question is not addressed in this paper, but some related results are mentioned next. It has been claimed, and tested by simulations, that the QuickEst estimation mechanism works well on power-law distributed data, but poorly on uniform data [61]. In the case of QuickCent, this dependence on the power-law distribution is reinforced by the parameter construction of the method (see Section 3). Another feature, given by the definition of our clues, that has been pointed out as a factor favoring simple strategies is correlated information [36], that is,
Figure 7: **Scatterplot of in-degree versus harmonic centrality for a synthetic and an empirical network.** The plot on the left is obtained with a PA exponent of 1, and the plot on the right is that of the dimacs10-astro-ph dataset. The axes are in logarithmic (base 10) scale.
information found early in the search is predictive of information found later. In the case of QuickCent, this simply means that since all clues are based on the in-degree, and this number can only belong to one of several disjoint real intervals, information from additional clues will not provide contradictory evidence once the heuristic has terminated its search. Other tasks where it is necessary to weigh the contribution of many possibly contradicting variables may present a more challenging context for these simple heuristics.
**Future work.** The insight described above about the monotonic map between the in-degree and the harmonic centrality on networks generated by preferential attachment nurtures the conjecture that QuickCent may be better suited to, for example, networks with an information component such as the Internet or citation networks, which can be well approximated by this growth mechanism [4, 29, 59], than to more purely social networks such as friendship networks [28, 11]. There is evidence that, if one assumes that some link targets are found uniformly at random while others are found by searching locally through the current structure of the network, then the more purely social networks appear to be governed largely by random meetings, while others, like the World Wide Web and citation networks, involve much more network-based link formation [28]. Testing this hypothesis on a large corpus of diverse empirical network datasets is an interesting question for future work.
Applying QuickCent to other types of networks or centrality measures is not straightforward, since, depending on the type of network considered, degree and centrality may be strongly or weakly related. We plan to address these extensions in future work, where one possible line of research is to formulate the problem of finding the proportion quantiles as that of obtaining an optimal quantizer [23]. There is some resemblance between our problem of finding the quantiles minimizing the error with respect to some distribution and that of finding the optimal thresholds of a piecewise constant function minimizing the distortion error of reproducing a continuous signal by a discrete set of points. On the other hand, QuickCent requires an explanatory variable that is correlated with the fitted variable in order to construct the clues. Future work should deal with extensions of, and more flexibility in, the clues employed, trying other clues or new ways to integrate different clues. The idea raised in our work of using a local density measure to approximate expensive size-based centrality indices could be generalized in order to be valid on more general
networks, for example, by using a more general notion of local density than the in-degree, such as combined indicators of the spreading capability [62] or random-walk based indices of community density [16].
**Concluding remarks.** The results of this paper are a proof of concept to illustrate the potential of using methods based on very simple heuristics to estimate some network centrality measures. Our results show that QuickCent is comparable in accuracy to the best competing methods tested, with the lowest error variance, even when trained on a small proportion of the dataset, and all this at an intermediate time cost relative to the other methods using a naive implementation. We give some insight into how QuickCent exploits the fact that in some networks, such as those generated by preferential attachment, local density measures such as the in-degree can be a good proxy for the size of the network region to which a node has access, opening up the possibility of approximating centrality indices based on size, such as the harmonic centrality.
|
2307.13512 | A motivic analogue of the K(1)-local sphere spectrum | We identify the motivic $KGL/2$-local sphere as the fiber of $\psi^3-1$ on
$(2,\eta)$-completed Hermitian $K$-theory, over any base scheme containing
$1/2$. This is a motivic analogue of the classical resolution of the
$K(1)$-local sphere, and extends to a description of the $KGL/2$-localization
of an arbitrary motivic spectrum. Our proof relies on a novel conservativity
argument that should be of broad utility in stable motivic homotopy theory. | William Balderrama, Kyle Ormsby, J. D. Quigley | 2023-07-25T14:04:52Z | http://arxiv.org/abs/2307.13512v3 | # A motivic analogue of the K(1)-local sphere spectrum
###### Abstract.
We identify the motivic \(KGL/2\)-local sphere as the fiber of \(\psi^{3}-1\) on \((2,\eta)\)-completed Hermitian \(K\)-theory, over any base scheme containing \(1/2\). This is a motivic analogue of the classical resolution of the \(K(1)\)-local sphere, and extends to a description of the \(KGL/2\)-localization of an arbitrary motivic spectrum. Our proof relies on a novel conservativity argument that should be of broad utility in stable motivic homotopy theory.
###### Contents
* 1 Introduction
* 2 First reductions
* 3 Reduction to \(S=\operatorname{Spec}(\mathbb{C})\)
* 4 The case \(S=\operatorname{Spec}(\mathbb{C})\)
## 1. Introduction
The motivic stable homotopy category \(\operatorname{SH}(k)\) over a field \(k\) was introduced by Morel and Voevodsky [13] in the 1990's to apply powerful tools from algebraic topology to problems in algebraic geometry. Its successes include the resolution of the Milnor and Bloch-Kato conjectures [12, 13, 14, 15], new computations in algebraic and Hermitian \(K\)-theory [1, 17], detailed analyses of algebraic cobordism [1], contributions to the developing field of quadratic enumerative geometry [16, 17], and computational input to the Asok-Fasel-Hopkins program on algebraic vector bundles [1, 2, 3].
There has been considerable interest in studying localizations of the motivic stable homotopy category. Morel showed in [15] that \(\operatorname{SH}(k)[\frac{1}{2}]\) splits as a product of its "positive" and "negative" parts, and Cisinski-Deglise [18], Garkusha [19], and Deglise-Fasel-Khan-Jin [12] have analyzed the rationalization \(\operatorname{SH}(k)_{\mathbb{Q}}\). In particular, the positive part is related to the theory of rational motives, while the negative part is related to Witt theory and \(\eta\)-periodic phenomena.
Here \(\eta\in\pi_{1,1}(\mathbf{1}_{k})\) is represented by the canonical \(\mathbb{G}_{m}\)-torsor \(\mathbb{A}^{2}\smallsetminus 0\to\mathbb{P}^{1}\) and may be viewed as an endomorphism of the unit object \(\mathbf{1}_{k}\) of \(\operatorname{SH}(k)\). The coefficients of \(\mathbf{1}_{k}[\eta^{-1}]\) have been studied over various base fields ([1, 1, 18, 19, 20, 21]), and Bachmann-Hopkins [1, Thm. 1.1] recently showed that
\[\mathbf{1}_{k}[\eta^{-1}]_{(2)}\simeq\operatorname{fib}\left(\varphi\colon kw _{(2)}\to\Sigma^{4}kw_{(2)}\right).\]
Here, \(kw=KQ[\eta^{-1}]_{\geq 0}\) is the spectrum of connective Balmer-Witt K-theory and \(\varphi\) is a lift of \(\psi^{3}-1\).
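Recall that, classically, the \(K(1)\)-local sphere at the prime \(2\) admits the analogous resolution
\[L_{K(1)}\mathbb{S}\simeq\operatorname{fib}\left(\psi^{3}-1\colon KO^{\wedge}_{2}\to KO^{\wedge}_{2}\right),\]
and our main theorem, stated next, is a motivic \(KGL/2\)-local analogue of this description, with \(KO^{\wedge}_{2}\) replaced by \((2,\eta)\)-completed Hermitian \(K\)-theory.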
**Theorem A**.: Let \(S\) be a base scheme containing \(1/2\) and let \(X\) be an \(S\)-motivic spectrum. Then the natural map \(X\to(X\otimes JQ)^{\wedge}_{(2,\eta)}\) realizes \((X\otimes JQ)^{\wedge}_{(2,\eta)}\) as the \(KGL/2\)-localization of \(X\). In particular, taking \(X=\mathbf{1}_{S}\),
\[L_{KGL/2}\mathbf{1}_{S}\simeq JQ=\operatorname{fib}\left(\psi^{3}-1\colon KQ^{\wedge}_{(2,\eta)}\to KQ^{\wedge}_{(2,\eta)}\right).\]
**Remark 1.1**.: Belmont-Isaksen-Kong [1] and Kong and the third author [13] studied a "connective" analogue of \(JQ\) over a variety of base fields \(k\), there denoted
\[L=\operatorname{fib}\left(\psi^{3}-1\colon kq^{\wedge}_{(2,\eta)}\to kq^{ \wedge}_{(2,\eta)}\right).\]
Here, \(kq=\tilde{f}_{0}KQ\) is the very effective cover of Hermitian K-theory [1]. The very effective cover functor is not triangulated, so the computations in [1, 13] do not immediately yield the homotopy groups of \(\tilde{f}_{0}L_{KGL/2}\mathbf{1}_{k}\); the precise relationship between these groups will be described in a forthcoming revision of [13].
**Remark 1.2**.: The \(C_{2}\)-equivariant Betti realization functor \(\operatorname{Be}\colon\operatorname{SH}(\mathbb{R})\to\operatorname{Sp}^{C_ {2}}\) sends \(KGL\) and \(KQ\) to the \(C_{2}\)-equivariant spectra \(K\mathbb{R}\) and \(KO_{C_{2}}\) respectively. Thus the \(\mathbb{R}\)-motivic \(KGL/2\)-local sphere may be viewed as a lift of the \(C_{2}\)-equivariant \(K\mathbb{R}/2\)-local sphere \(L_{K\mathbb{R}/2}S_{C_{2}}\simeq\operatorname{fib}\left(\psi^{3}-1\colon(KO_ {C_{2}})^{\wedge}_{2}\to(KO_{C_{2}})^{\wedge}_{2}\right)\) analyzed by the first author in [1].
After fixing some notation in Subsection 1.1, we outline the proof of Theorem A in Subsection 1.2 below.
### Conventions
We maintain the following conventions throughout the paper.
1. We work over a base scheme \(S\) containing \(1/2\).
2. We write \(\operatorname{SH}(S)\) for the category of \(S\)-motivic spectra.
3. We write \(\mathbf{1}_{S}\in\operatorname{SH}(S)\) for the \(S\)-motivic sphere spectrum.
4. We abbreviate \(\operatorname{SH}(\operatorname{Spec}(k))=\operatorname{SH}(k)\), \(\mathbf{1}_{k}=\mathbf{1}_{\operatorname{Spec}(k)}\), and so forth.
5. We write \(KGL\) and \(KQ\) for the \(S\)-motivic algebraic and Hermitian \(K\)-theory spectra respectively (see [1, §3.2], where \(KQ\) is denoted \(KO\)).
6. For a field \(k\) containing \(1/2\), we write \(k_{n}^{M}(k)=K_{n}^{M}(k)/2\) for the \(n\)th mod \(2\) Milnor \(K\)-groups of \(k\).
7. We abbreviate \(\rho:=[-1]\in\pi_{-1,-1}\mathbf{1}_{S}\) for the class represented by the unstable map \(\operatorname{Spec}(S)_{+}\to\mathbb{A}^{1}\smallsetminus 0\) induced by the map \(S[x]\to S\) evaluating at \(-1\).
8. We abbreviate \(JQ=\operatorname{fib}\left(\psi^{3}-1\colon KQ^{\wedge}_{(2,\eta)}\to KQ^{ \wedge}_{(2,\eta)}\right)\), so that in general \((X\otimes JQ)^{\wedge}_{(2,\eta)}=\operatorname{fib}\left(\psi^{3}-1\colon( X\otimes KQ)^{\wedge}_{(2,\eta)}\to(X\otimes KQ)^{\wedge}_{(2,\eta)}\right)\).
9. We write \(\operatorname{map}(X,Y)\) for the spectrum of maps between objects \(X\) and \(Y\).
### Outline of proof
We now outline the proof of Theorem A. Using the above notation, the theorem asserts that if \(X\) is an \(S\)-motivic spectrum then the natural map \(X\to(X\otimes JQ)^{\wedge}_{(2,\eta)}\) realizes \((X\otimes JQ)^{\wedge}_{(2,\eta)}\) as the \(KGL/2\)-localization of \(X\). By definition, to prove this we must verify that \((X\otimes JQ)^{\wedge}_{(2,\eta)}\) is \(KGL/2\)-local, and that \(X\to(X\otimes JQ)^{\wedge}_{(2,\eta)}\) is a \(KGL/2\)-equivalence.
The fact that \((X\otimes JQ)^{\wedge}_{(2,\eta)}\) is \(KGL/2\)-local follows from a formal argument using the fact that \(KGL\simeq KQ/\eta\). The same line of reasoning reduces checking that \(X\to(X\otimes JQ)^{\wedge}_{(2,\eta)}\) is a \(KGL/2\)-equivalence for any \(X\) to just the case \(X=\mathbf{1}_{S}\). We give these simple reductions in Section 2. The bulk of our work is then to verify that the map \(\mathbf{1}_{S}\to JQ\) is a \(KGL/2\)-equivalence. The proof of this splits naturally into two parts: the reduction to \(S=\operatorname{Spec}(\mathbb{C})\), carried out in Section 3, and the case \(S=\operatorname{Spec}(\mathbb{C})\), carried out in Section 4.
These proceed as follows. First, the assertion that \(\mathbf{1}_{S}\to JQ\) is a \(KGL/2\)-equivalence is stable under base change, and this allows us to reduce to considering just \(S=\operatorname{Spec}(\mathbb{Z}[\frac{1}{2}])\). To then reduce to \(S=\operatorname{Spec}(\mathbb{C})\), we prove the following
general conservativity result: base change \(\operatorname{SH}(\mathbb{Z}[\frac{1}{2}])\to\operatorname{SH}(\mathbb{C})\) is conservative when restricted to the full subcategory of \(2\)-torsion cellular motivic spectra with vanishing real Betti realization and convergent effective slice tower (Theorem 3.1). Once we have reduced to \(S=\operatorname{Spec}(\mathbb{C})\), we make use of the close relation between \(\mathbb{C}\)-motivic homotopy theory and the classical Adams-Novikov spectral sequence to further reduce to the classical description of the non-motivic \(K(1)\)-local sphere.
**Remark 1.3**.: Pelaez and Weibel show in [13] that the \(KGL\)-cooperation algebra \(KGL_{*}KGL\) admits a decomposition mimicking the classical \(KU\)-cooperation algebra \(KU_{*}KU\). We do not use this, but do use the simpler fact that \(KGL\otimes KGL\) is a free \(KGL\)-module. It could be interesting to pursue an alternate approach which uses their theorem to give a more computational proof of Theorem A. The authors hope that the reduction-to-\(\mathbb{C}\) method pioneered here will prove useful in other applications where explicit computations may be inaccessible over general base schemes but feasible over \(\mathbb{C}\).
### Acknowledgements
The authors thank Jeremiah Heller for explaining the extension of Theorem A from fields to Dedekind domains, and thank Tom Bachmann for pointing out that cellularity is not necessary. The first author was supported by NSF RTG grant DMS-1839968; the second author was partially supported by NSF grant DMS-2204365; the third author was supported by NSF grants DMS-2039316 and DMS-2314082.
## 2. First reductions
By definition, to show that \(L_{KGL/2}X\simeq(X\otimes JQ)^{\wedge}_{(2,\eta)}\), we must show that
1. \((X\otimes JQ)^{\wedge}_{(2,\eta)}\) is \(KGL/2\)-local, in the sense that if \(C\) is \(KGL/2\)-acyclic then \(\operatorname{map}(C,(X\otimes JQ)^{\wedge}_{(2,\eta)})=0\);
2. \(X\to(X\otimes JQ)^{\wedge}_{(2,\eta)}\) is a \(KGL/2\)-equivalence.
In this section, we prove (1), and show that (2) holds for all \(X\) as soon as it holds for \(X=\mathbf{1}_{S}\). We will make frequent use of the following fact, due to Rondigs and Ostvaer.
**Lemma 2.1** ([12, Theorem 3.4]).: There is an equivalence \(KGL\simeq KQ/\eta\).
**Proposition 2.2**.: Let \(X\) be an \(S\)-motivic spectrum. Then \((X\otimes JQ)^{\wedge}_{(2,\eta)}\) is \(KGL/2\)-local.
Proof.: As the subcategory of \(KGL/2\)-local spectra is closed under fibers, it suffices to show that \((X\otimes KQ)^{\wedge}_{(2,\eta)}\) is \(KGL/2\)-local. To simplify the notation, let us show more generally that if \(M\) is any \((2,\eta)\)-complete \(KQ\)-module, then the underlying motivic spectrum of \(M\) is \(KGL/2\)-local.
By Lemma 2.1, we may identify \(KGL/2\simeq KQ/(2,\eta)\). It follows that if \(C\) is \(KGL/2\)-acyclic, then \((KQ\otimes C)^{\wedge}_{(2,\eta)}\simeq 0\). From here we compute
\[\operatorname{map}(C,M)\simeq\operatorname{Mod}_{KQ}(KQ\otimes C,M)\simeq \operatorname{Mod}_{KQ}((KQ\otimes C)^{\wedge}_{(2,\eta)},M)\simeq 0,\]
so that \(M\) is \(KGL/2\)-local as claimed.
**Proposition 2.3**.: Suppose that \(\mathbf{1}_{S}\to JQ\) is a \(KGL/2\)-equivalence. Then \(X\to(X\otimes JQ)^{\wedge}_{(2,\eta)}\) is a \(KGL/2\)-equivalence for any \(S\)-motivic spectrum \(X\).
Proof.: As \(KGL/2\simeq KQ/(2,\eta)\), we have
\[(X\otimes KQ)^{\wedge}_{(2,\eta)}\otimes KGL/2\simeq X\otimes KQ\otimes KGL/2\]
for any \(X\). Thus \(X\to(X\otimes JQ)^{\wedge}_{(2,\eta)}\) is a \(KGL/2\)-equivalence if and only if the natural map
\[X\otimes KGL/2\to X\otimes JQ\otimes KGL/2\]
is an equivalence. This map is obtained from the case \(X=\mathbf{1}_{S}\) by smashing with \(X\), so if it is an equivalence for \(X=\mathbf{1}_{S}\) then it is an equivalence for any \(X\).
## 3. Reduction to \(S=\operatorname{Spec}(\mathbb{C})\)
It remains to show that \(\mathbf{1}_{S}\to JQ\) is a \(KGL/2\)-equivalence. In this section, we show how one may reduce this statement over an arbitrary base scheme \(S\) containing \(1/2\) to just the case \(S=\operatorname{Spec}(\mathbb{C})\), which will then be carried out in Section 4.
### A conservativity result
The heart of our argument is the following.
**Theorem 3.1**.: Base change \(\operatorname{SH}(\mathbb{Z}[\frac{1}{2}])\to\operatorname{SH}(\mathbb{C})\) is conservative when restricted to the full subcategory of \(2\)-torsion cellular motivic spectra with vanishing real Betti realization and convergent effective slice tower.
Here, we refer to the effective slice tower in the sense of Voevodsky [20, Section 2]. The rest of this subsection is dedicated to the proof of Theorem 3.1. In Subsection 3.2, we verify that this implies the desired reduction. The starting point for the proof is the following conservativity result of Bachmann and Hoyois.
**Lemma 3.2**.: For \(X\in\operatorname{SH}(\mathbb{Z}[\frac{1}{2}])\) to vanish, it suffices that the base change of \(X\) to \(\operatorname{Spec}(k)\) vanishes for \(k\) a prime field other than \(\mathbb{F}_{2}\).
Proof.: This is a special case of [1, Proposition B.3].
This would reduce us to considering fields, except that if \(k\) has positive characteristic then there is no obvious comparison between \(\operatorname{SH}(k)\) and \(\operatorname{SH}(\mathbb{C})\). To handle this we make use of the following Lefschetz principle.
**Lemma 3.3**.: Let \(K\) be an algebraically closed field containing \(1/2\). Then there is a zigzag of base change functors between \(\operatorname{SH}(K)\) and \(\operatorname{SH}(\mathbb{C})\) which is an equivalence on full subcategories of \(2\)-torsion cellular objects.
Proof.: This follows from [1, Proposition 5.2.1] and its proof upon restricting to \(2\)-torsion objects.
Now fix a prime field \(k\) other than \(\mathbb{F}_{2}\), and fix an algebraic closure \(p\colon\operatorname{Spec}(K)\to\operatorname{Spec}(k)\). The core of our argument is to show that \(p^{*}\colon\operatorname{SH}(k)\to\operatorname{SH}(K)\) is conservative when restricted to the full subcategory of \(2\)-torsion cellular motivic spectra with convergent slice tower, and with vanishing real Betti realization when \(k=\mathbb{Q}\). As \(p^{*}\) is exact, it is equivalent to show that if \(X\in\operatorname{SH}(k)\) is such an object and \(p^{*}X=0\), then \(X=0\). First let us reformulate the condition on the Betti realization of \(X\) when \(k=\mathbb{Q}\) to one which is uniform in \(k\). Recall that \(\rho=[-1]\in\pi_{-1,-1}\mathbf{1}_{k}\) (see Subsection 1.1), and write \(C(\rho)\) for its cofiber.
**Lemma 3.4**.: Let \(X\in\operatorname{SH}(k)\), and when \(k=\mathbb{Q}\) suppose that the real Betti realization of \(X\) vanishes. If \(C(\rho)\otimes X=0\) then \(X=0\).
Proof.: We claim that \(X\) is \(\rho\)-torsion, that is that \(X[\rho^{-1}]=0\). The claim follows as smashing with \(C(\rho)\) is conservative on \(\rho\)-torsion objects, as can be seen using the cofiber sequence \(\mathbf{1}_{S}\to\mathbf{1}_{S}[\rho^{-1}]\to\mathbf{1}_{S}/(\rho^{\infty})\), where \(\mathbf{1}_{S}/(\rho^{\infty})=\operatorname{colim}_{n}\Sigma^{n,n}C(\rho^{n})\) is built out of copies of \(C(\rho)\).
First consider \(k=\mathbb{F}_{p}\) (\(p\neq 2\)). Here \(\rho\) is already nilpotent in \(\pi_{*,*}\mathbf{1}_{S}\)[11, Example 1.5], and so \(X[\rho^{-1}]\simeq 0\) for any \(X\).
Next consider \(k=\mathbb{Q}\). As \(\mathbb{Q}\) has a unique ordering, work of Bachmann [1, Theorem 35, Proposition 36] shows that \(\operatorname{SH}(\mathbb{Q})[\rho^{-1}]\simeq\operatorname{SH}\) via real Betti realization. Thus the condition that the real Betti realization of \(X\) vanishes exactly says that \(X[\rho^{-1}]=0\).
We now turn to considering slices. For a motivic spectrum \(X\), write \(f_{q}X\) for the \(q\)-effective cover of \(X\), and write \(s_{q}X=f_{q}X/f_{q+1}X\) for the \(q\)th slice of \(X\).
**Lemma 3.5**.: Let \(k\) be any field containing \(1/2\). Then the assignment \(X\mapsto s_{*}(X/2)\) lifts to an exact functor \(\operatorname{SH}(k)\to\operatorname{Mod}_{H\mathbb{F}_{2}^{k}}\). Moreover, if \(X\in\operatorname{SH}(k)\) is cellular then \(s_{*}(X/2)\) is cellular as an \(H\mathbb{F}_{2}^{k}\)-module.
Proof.: Exactness follows from the definitions, see [10, Theorem 2.2]. The assignment \(X\mapsto s_{*}(X)\) defines a lax monoidal functor from \(\operatorname{SH}(k)\) to \(\mathbb{Z}\)-graded objects in \(\operatorname{SH}(k)\), see [10]. In particular, \(s_{*}(X)\) is a module over \(s_{0}(\mathbf{1}_{k})\). Theorems of Voevodsky [10, Thm. 6.6] and Levine [11, Theorem 10.5.1] identify \(s_{0}(\mathbf{1}_{k})\simeq H\mathbb{Z}^{k}\), and so exactness of \(s_{*}\) implies that
\[s_{*}(X/2)\simeq s_{*}(X)/2\simeq H\mathbb{F}_{2}^{k}\otimes_{H\mathbb{Z}^{k}} s_{*}(X)\]
is naturally an \(H\mathbb{F}_{2}^{k}\)-module as claimed. Finally, to verify that if \(X\) is cellular then \(s_{*}(X/2)\) is cellular as an \(H\mathbb{F}_{2}^{k}\)-module, it suffices to verify that \(s_{*}(\mathbf{1}_{k})\) is cellular as an \(H\mathbb{Z}^{k}\)-module, which follows from Rondigs-Spitzweck-Ostvaer's computation of the slices \(s_{*}(\mathbf{1}_{k})\) in [10, Theorem 2.12].
Our last main ingredient in the proof of Theorem 3.1 is to show that base change \(\operatorname{Mod}_{H\mathbb{F}_{2}^{k}}\to\operatorname{Mod}_{H\mathbb{F}_{2}^{K}}\) is conservative upon restriction to cellular \(\rho\)-torsion objects. The proof of this is a variant of [11, Example 3.5]. Write Cell for the cellularization functor, and set
\[E=\operatorname{Cell}(\operatorname{Spec}(K)_{+}\otimes H\mathbb{F}_{2}^{k}) \in\operatorname{Mod}_{H\mathbb{F}_{2}^{k}}.\]
Recall that \(\pi_{*,*}H\mathbb{F}_{2}^{k}=k_{*}^{M}(k)[\tau]\), where \(\tau\in\pi_{0,-1}H\mathbb{F}_{2}^{k}\) and \(x\in k_{n}^{M}(k)\) lives in \(\pi_{-n,-n}H\mathbb{F}_{2}^{k}\), and likewise \(\pi_{*,*}E=\mathbb{F}_{2}[\tau]\). Say that an \(H\mathbb{F}_{2}^{k}\)-module \(M\) is \(\tau\)_-free_ if \(\pi_{*,*}M\) is free as an \(\mathbb{F}_{2}[\tau]\)-module.
**Lemma 3.6**.: Let \(M\) be a \(\tau\)-free \(H\mathbb{F}_{2}^{k}\)-module. Then there exists a \(\tau\)-free \(H\mathbb{F}_{2}^{k}\)-module \(M_{\geq n}\) receiving a map \(M\to M_{\geq n}\) with the following properties:
1. \(\pi_{i,*}M_{\geq n}=0\) for \(i<n\);
2. \(\pi_{i,*}M\to\pi_{i,*}M_{\geq n}\) is an isomorphism for \(i\geq n\).
Proof.: The proof of [1, Proposition 3.3] applies. For \(m\in\mathbb{Z}\), choose a basis \(\{x_{i}:i\in I_{m}\}\) for \(\pi_{m,*}M\) as an \(\mathbb{F}_{2}[\tau]\)-module, and define
\[V_{m}M=\bigoplus_{i\in I_{m}}\Sigma^{|x_{i}|}H\mathbb{F}_{2}^{k}.\]
Then there is a map \(V_{m}M\to M\) inducing an isomorphism on \(\pi_{m,*}\), and if we set
\[C(M)=\operatorname{cof}\left(\bigoplus_{m<n}V_{m}(M)\to M\right),\]
then coconnectivity of \(H\mathbb{F}_{2}^{k}\) ensures that \(M\to C(M)\) induces an isomorphism on \(\pi_{i,*}\) for \(i\geq n\) and is zero on \(\pi_{i,*}\) for \(i<n\). Thus \(M_{\geq n}=\operatorname{colim}_{j}C^{j}(M)\) does the job.
**Lemma 3.7**.: Let \(M\) be a \(\tau\)-free cellular \(H\mathbb{F}_{2}^{k}\)-module. Fix \(n\in\mathbb{Z}\) and suppose \(\pi_{i,*}M=0\) for \(i\neq n\). Then \(M\) is equivalent to a sum of copies of \(\Sigma^{n,*}E\).
Proof.: Without loss of generality we may suppose \(n=0\). First we claim that if \(x\in\pi_{0,w}M\), then there is a map \(g\colon\Sigma^{0,w}E\to M\) satisfying \(g(1)=x\). Without loss of generality we may suppose \(w=0\). Note that \(\pi_{*,*}E\cong\mathbb{F}_{2}[\tau]\), and consider the universal coefficient spectral sequence
\[E_{2}^{s,f,w}=\operatorname{Ext}_{\pi_{*,*}H\mathbb{F}_{2}^{k}}^{f}(\mathbb{F }_{2}[\tau],\pi_{*-s,*-w}M)\Rightarrow[E,\Sigma^{s+f,w}M],\quad d_{r}\colon E _{r}^{s,f,w}\to E_{r}^{s+1,f+r,w}.\]
The proposed map \(g\) defines a class in \(E_{2}^{0,0,0}\). The only way this could fail to survive the spectral sequence is if there is some nontrivial differential \(d_{r}(g)\neq 0\). This lives in a subquotient of \(\operatorname{Ext}_{\pi_{*,*}H\mathbb{F}_{2}^{k}}^{r}(\mathbb{F}_{2}[\tau], \pi_{*-1,*}M)\), which vanishes because \(M\) is concentrated in nonnegative stems and \(\pi_{*,*}H\mathbb{F}_{2}^{k}\) and \(\mathbb{F}_{2}[\tau]\) are concentrated in nonpositive stems. Thus \(g\) survives to a map \(g\colon\Sigma^{0,w}E\to M\) satisfying \(g(1)=x\) as claimed.
Now choose a basis \(\{x_{i}:i\in I\}\) for \(\pi_{0,*}M\) as an \(\mathbb{F}_{2}[\tau]\)-module. Then the above argument provides a map
\[(x_{i})_{i\in I}\colon\bigoplus_{i\in I}E\to M\]
inducing an isomorphism in \(\pi_{*,*}\), which is then an equivalence as \(E\) and \(M\) are cellular.
The above lemmas hold in general over any base field \(k\) containing \(1/2\). However, the following relies on \(k\) being a prime field other than \(\mathbb{F}_{2}\) in order to ensure that \(\pi_{*,*}\bigl(C(\rho)\otimes H\mathbb{F}_{2}^{k}\bigr)\) is concentrated in finitely many degrees.
**Proposition 3.8**.: Base change \(p^{*}\colon\operatorname{Mod}_{H\mathbb{F}_{2}^{k}}\to\operatorname{Mod}_{H\mathbb{F}_{2}^{K}}\) is conservative when restricted to the full subcategory of cellular \(\rho\)-torsion \(H\mathbb{F}_{2}^{k}\)-modules.
Proof.: Write \(p\colon\operatorname{Spec}(K)\to\operatorname{Spec}(k)\). Then the composite
\[\operatorname{Mod}_{H\mathbb{F}_{2}^{k}}^{\operatorname{cell}}\xrightarrow{\subset}\operatorname{Mod}_{H\mathbb{F}_{2}^{k}}\xrightarrow{p^{*}}\operatorname{Mod}_{H\mathbb{F}_{2}^{K}}\xrightarrow{p_{*}}\operatorname{Mod}_{H\mathbb{F}_{2}^{k}}\xrightarrow{\operatorname{Cell}}\operatorname{Mod}_{H\mathbb{F}_{2}^{k}}^{\operatorname{cell}}\]
can be identified as smashing with \(E\). It therefore suffices to show that smashing with \(E\) is conservative on the full subcategory of \(\rho\)-torsion \(H\mathbb{F}_{2}^{k}\)-modules.
By the structure of \(\pi_{*,*}H\mathbb{F}_{2}^{k}\) when \(k\) is a prime field other than \(\mathbb{F}_{2}\) (convenient references are [10, Section 2.1] and [13, Section 5]), \(C(\rho)\) is \(\tau\)-free and \(\pi_{i,*}C(\rho)\neq 0\) for only finitely many \(i\). Combining Lemma 3.6 and Lemma 3.7, we find that \(C(\rho)\) admits a finite filtration with associated graded equivalent to a direct sum of copies of \(\Sigma^{*,*}E\). In particular \(C(\rho)\) lies in the thick \(\otimes\)-ideal of \(\operatorname{Mod}_{H\mathbb{F}_{2}^{k}}\) generated by \(E\).
Smashing with \(C(\rho)\) is conservative on the full subcategory of \(\rho\)-torsion \(H\mathbb{F}_{2}^{k}\)-modules. As \(C(\rho)\) lies in the thick \(\otimes\)-ideal generated by \(E\), it follows that smashing with \(E\) is conservative on the full subcategory of \(\rho\)-torsion \(H\mathbb{F}_{2}^{k}\)-modules, proving the proposition.
We can now give the following.
Proof of Theorem 3.1.: Combining Lemma 3.2, Lemma 3.3, and Lemma 3.4, we reduce to verifying the following assertion:
Let \(k\) be a prime field other than \(\mathbb{F}_{2}\) and \(p\colon\operatorname{Spec}(K)\to\operatorname{Spec}(k)\) be an algebraic closure. Let \(X\in\operatorname{SH}(k)\) be a cellular motivic spectrum with convergent slice tower, and suppose \(p^{*}(C(\rho)\otimes X/2)=0\). Then \(C(\rho)\otimes X/2=0\).
Indeed, as \(X\) is slice complete, so is \(C(\rho)\otimes X/2\). It therefore suffices to verify that \(s_{*}(C(\rho)\otimes X/2)=0\). By exactness \(s_{*}(C(\rho)\otimes X/2)\simeq C(\rho)\otimes s_{*}(X/2)\), and so by Lemma 3.5 and Proposition 3.8 it suffices to verify that \(p^{*}(s_{*}(C(\rho)\otimes X/2))=0\). As slices of cellular spectra are preserved by base change [14, Corollary 2.17], we have \(p^{*}(s_{*}(C(\rho)\otimes X/2))\simeq s_{*}(p^{*}(C(\rho)\otimes X/2))\), and this vanishes by assumption.
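Schematically, and restating the steps just given without claiming anything new, the argument runs

\[p^{*}(C(\rho)\otimes X/2)=0\;\Longrightarrow\;p^{*}\bigl(s_{*}(C(\rho)\otimes X/2)\bigr)=0\;\Longrightarrow\;s_{*}(C(\rho)\otimes X/2)=0\;\Longrightarrow\;C(\rho)\otimes X/2=0,\]

where the first implication uses exactness of \(s_{*}\) and that slices of cellular spectra commute with base change, the second uses Lemma 3.5 together with Proposition 3.8, and the third uses slice completeness of \(C(\rho)\otimes X/2\).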
### Proof of the reduction to \(S=\operatorname{Spec}(\mathbb{C})\)
We now show that Theorem 3.1 allows us to reduce from verifying that \(\mathbf{1}_{S}\to JQ\) is a \(KGL/2\)-equivalence to just the case \(S=\operatorname{Spec}(\mathbb{C})\).
**Lemma 3.9**.: The following \(S\)-motivic spectra are cellular and compatible with base change between schemes containing \(1/2\):
\[KQ,\qquad KGL,\qquad KGL/2,\qquad KGL/2\otimes JQ.\]
Proof.: Recall that we have taken \(KQ\) and \(KGL\) to be defined as in [1, SS3.2]. They are cellular and compatible with base change as discussed there. As \(KGL/2\simeq KQ/(2,\eta)\), we may identify
\[KGL/2\otimes JQ\simeq KGL/2\otimes\operatorname{fib}\left(\psi^{3}-1\colon KQ [\tfrac{1}{3}]\to KQ[\tfrac{1}{3}]\right).\]
This is then cellular, and is compatible with base change between schemes containing \(1/2\) by [1, Theorem 3.1(1)].
**Lemma 3.10**.: The motivic spectra \(KGL/2\) and \(KGL/2\otimes JQ\) are slice complete.
Proof.: As the functors \(f_{q}\) are exact, slice complete motivic spectra form a thick subcategory of \(\operatorname{SH}(S)\). As \(KGL/2\simeq KQ/(2,\eta)\), there is an equivalence \(KGL/2\otimes(KQ)^{\wedge}_{(2,\eta)}\simeq KGL/2\otimes KQ\), and \(KGL\otimes KQ\) is a retract of \(KGL\otimes KGL\). Thus we reduce to verifying that \(KGL\) and \(KGL\otimes KGL\) are slice complete. Following the proof of [1, Lemma 2.6], \(f_{q}KGL\) is \(q\)-connected in the sense of [14, Definition 3.16]. As \(\infty\)-connected objects are contractible, it follows that \(KGL\) is slice complete. By [13, Corollary 7.5], \(KGL\otimes KGL\) is a free \(KGL\)-module. As \(f_{q}\) commutes with direct sums [13, Proposition 6.1], it follows that \(f_{q}(KGL\otimes KGL)\) is also \(q\)-connected, and thus \(KGL\otimes KGL\) is slice complete.
**Lemma 3.11**.: The real Betti realizations of \(KGL/2\) and \(KGL/2\otimes JQ\) vanish.
Proof.: As real Betti realization is exact and monoidal, it suffices to verify that the real Betti realization of \(KGL\) vanishes. This is proved by Bachmann and Hopkins in [1, Lemma 3.9].
As far as the proof of Theorem A is concerned, the following is the main result of this section.
**Proposition 3.12**.: To show that \(\mathbf{1}_{S}\to JQ\) is a \(KGL/2\)-equivalence over any scheme \(S\) containing \(1/2\), it suffices to consider just \(S=\operatorname{Spec}(\mathbb{C})\).
Proof.: Lemma 3.9 reduces the case of an arbitrary scheme \(S\) containing \(1/2\) to just \(S=\operatorname{Spec}(\mathbb{Z}[\tfrac{1}{2}])\). By Lemma 3.10 and Lemma 3.11, the hypotheses of Theorem 3.1 apply, and we reduce further to \(S=\operatorname{Spec}(\mathbb{C})\).
## 4. The case \(S=\operatorname{Spec}(\mathbb{C})\)
It remains to show that \(\mathbf{1}_{\mathbb{C}}\to JQ\) is a \(KGL/2\)-equivalence. To do this, we will make use of the close relationship between \(\mathbb{C}\)-motivic homotopy theory and the classical Adams-Novikov spectral sequence. There are (at least) two closely related approaches to formalizing this relation: the synthetic spectra of Pstragowski [13], and the \(\Gamma_{\star}\)-construction of Gheorghe-Isaksen-Krause-Ricka [1]. For our argument it is convenient to use the latter.
In [1], properties of the classical and motivic Adams-Novikov spectral sequences are used to construct:
1. A lax symmetric monoidal functor \(\Gamma_{\star}\colon\operatorname{Sp}\to\operatorname{Fun}((\mathbb{Z},<)^{ \operatorname{op}},\operatorname{Sp})\), and
2. A lax symmetric monoidal functor \(\Omega^{0,\star}\colon\operatorname{SH}(\mathbb{C})\to\operatorname{Mod}_{ \Gamma_{\star}(S)}\),
and to prove that \(\Omega^{0,\star}\) induces an equivalence between the category of \(2\)-complete cellular \(\mathbb{C}\)-motivic spectra and the category of \(2\)-complete \(\Gamma_{\star}(S)\)-modules. Moreover, under this correspondence one has
\[KGL\simeq\Gamma_{\star}(KU),\qquad KQ\simeq\Gamma_{\star}(KO), \tag{4.1}\]
up to \(2\)-completion. We are now ready to prove the following.
**Proposition 4.1**.: The map \(\mathbf{1}_{\mathbb{C}}\to JQ\) is a \(KGL/2\)-equivalence.
Proof.: It is equivalent to show that
\[KGL/2\xrightarrow{}KGL/2\otimes KQ\xrightarrow{\psi^{3}-1}KGL/2\otimes KQ \tag{4.2}\]
is a fiber sequence of \(\mathbb{C}\)-motivic spectra. Consider the classical fiber sequence
\[KU/2\xrightarrow{}KU/2\otimes KO\xrightarrow{\psi^{3}-1}KU/2\otimes KO\.\]
As these spectra have \(MU\)-homology concentrated in even degrees, [1, Proposition 3.17] implies that \(\Gamma_{\star}\) sends this to a fiber sequence
\[\Gamma_{\star}(KU/2)\xrightarrow{}\Gamma_{\star}(KU/2\otimes KO)\xrightarrow {\psi^{3}-1}\Gamma_{\star}(KU/2\otimes KO) \tag{4.3}\]
of \(\Gamma_{\star}(S)\)-modules. We claim that \(\Gamma_{\star}(KU/2)\simeq\Gamma_{\star}(KU)/2\) and \(\Gamma_{\star}(KU/2\otimes KO)\simeq\Gamma_{\star}(KU/2)\otimes_{\Gamma_{ \star}(S)}\Gamma_{\star}(KO)\). In light of the equivalence between \(2\)-complete cellular motivic spectra and \(2\)-complete \(\Gamma_{\star}(S)\)-modules, this would give an equivalence between the two sequences Eq. (4.2) and Eq. (4.3), and thus the former would be a fiber sequence as claimed.
That \(\Gamma_{\star}(KU/2)\simeq\Gamma_{\star}(KU)/2\) follows by again applying [1, Proposition 3.17], this time to the cofiber sequence \(KU\xrightarrow{2}KU\to KU/2\); likewise one has \(\Gamma_{\star}(KU/2\otimes KO)\simeq\Gamma_{\star}(KU\otimes KO)/2\). As \(KU\) is a filtered colimit of finite complexes with only even cells [1, Proposition 2.12], and \(KO\) has \(MU\)-homology concentrated in even degrees, [1, Proposition 3.25] implies that \(\Gamma_{\star}(KU\otimes KO)\simeq\Gamma_{\star}(KU)\otimes_{\Gamma_{\star}(S)}\Gamma_{\star}(KO)\). Combining these observations provides the required equivalences.
Combining Proposition 2.2, Proposition 2.3, Proposition 3.12, and Proposition 4.1 now proves Theorem A. |
2306.04347 | World Models for Math Story Problems | Solving math story problems is a complex task for students and NLP models
alike, requiring them to understand the world as described in the story and
reason over it to compute an answer. Recent years have seen impressive
performance on automatically solving these problems with large pre-trained
language models and innovative techniques to prompt them. However, it remains
unclear if these models possess accurate representations of mathematical
concepts. This leads to lack of interpretability and trustworthiness which
impedes their usefulness in various applications. In this paper, we consolidate
previous work on categorizing and representing math story problems and develop
MathWorld, which is a graph-based semantic formalism specific for the domain of
math story problems. With MathWorld, we can assign world models to math story
problems which represent the situations and actions introduced in the text and
their mathematical relationships. We combine math story problems from several
existing datasets and annotate a corpus of 1,019 problems and 3,204 logical
forms with MathWorld. Using this data, we demonstrate the following use cases
of MathWorld: (1) prompting language models with synthetically generated
question-answer pairs to probe their reasoning and world modeling abilities,
and (2) generating new problems by using the world models as a design space. | Andreas Opedal, Niklas Stoehr, Abulhair Saparov, Mrinmaya Sachan | 2023-06-07T11:25:20Z | http://arxiv.org/abs/2306.04347v1 | # World Models for Math Story Problems
###### Abstract
Solving math story problems is a complex task for students and NLP models alike, requiring them to understand the world as described in the story and reason over it to compute an answer. Recent years have seen impressive performance on automatically solving these problems with large pre-trained language models and innovative techniques to prompt them. However, it remains unclear if these models possess accurate representations of mathematical concepts. This leads to lack of interpretability and trustworthiness which impedes their usefulness in various applications. In this paper, we consolidate previous work on categorizing and representing math story problems and develop MathWorld, which is a graph-based semantic formalism specific for the domain of math story problems. With MathWorld, we can assign world models to math story problems which represent the situations and actions introduced in the text and their mathematical relationships. We combine math story problems from several existing datasets and annotate a corpus of \(1,019\) problems and \(3,204\) logical forms with MathWorld. Using this data, we demonstrate the following use cases of MathWorld: (1) prompting language models with synthetically generated question-answer pairs to probe their reasoning and world modeling abilities, and (2) generating new problems by using the world models as a design space.
## 1 Introduction
Math story problems (MSPs) are short narrative texts that describe a dynamic situation in the world consisting of entities, actions and states, followed by a quantitative question about the world, as displayed in Fig. 1. The task of automatically solving MSPs has received much research attention in NLP. While earlier models for solving MSPs [11, 12, 13] focused on extracting various features from text to learn probabilistic models, recent efforts have used pre-trained large language models (LLMs) [16, 17, 18, 2]. Although they display high performance on benchmarks, it has been shown that such neural models tend to rely heavily on shallow heuristics, raising questions about whether the models can indeed "understand" MSPs and robustly solve them [23, 24].
From the human side, solving MSPs requires a wide set of skills. A student must not only perform a set of given computations, but first be able to process the text and map it into a corresponding world model that represents the situation described in text [12, 13]. Inspired by this, we take a step towards developing more interpretable solvers and introduce MathWorld, a semantic world model framework for MSPs.
MathWorld can be viewed as a formalism for reasoning in dynamical problem settings [12, 13], specific to the domain of MSPs. It represents each problem as a directed graph called a _world model_ (SS 3).
Figure 1: An example of a world model in MathWorld. MathWorld can be used to develop interpretable MSP solvers, to study the reasoning of LLMs and as a design space for generation of new MSPs.
The nodes in a world model are containers (SS 3.1) representing entities' possession of some quantity Hosseini et al. (2014), and the edges represent various types of mathematical relations between the quantities (SS 3.2). The relations correspond to mathematical concepts that have been previously shown to cover a vast majority of MSPs Mitra and Baral (2016); Roy and Roth (2018). We annotate a MathWorld dataset consisting of \(1,019\) English MSPs from various widely-used datasets Koncel-Kedziorski et al. (2016); Miao et al. (2020); Patel et al. (2021), which we make publicly available.
There are several potential use cases of MathWorld, of which we discuss three. First, one natural application is that of developing interpretable MSP solvers. A solver using MathWorld follows two steps: (i) semantic parsing and (ii) reasoning. The semantic parser takes an MSP text and outputs a world model based on the explicit information in the text. The reasoner then takes the world model and solves the problem based on the quantities and their relations. Our experiments show that LLMs struggle to build accurate and well-formed world models; we encourage future work to develop stronger semantic parsers for MathWorld.
Another use case of MathWorld is as a tool to study the reasoning capabilities of existing solvers. For instance, we can use the world model annotations to automatically generate synthetic subquestions for the MSPs. Using such subquestions, we give empirical evidence that GPT-3 Brown et al. (2020) benefits from the structured knowledge derived by world models in its ability to solve MSPs. We further use our synthetic questions to understand if GPT-3 can indeed answer these intermediate questions about the world described in the MSPs, and not just the final question. We find that for problems where GPT-3 answers the final question correctly, it can only answer 64% of the intermediate questions. This suggests that GPT-3 is not accurately building world models for these problems but might be relying on reasoning shortcuts.
Finally, MathWorld can be considered as a design space for generating interesting new MSPs. We illustrate the usefulness of MathWorld for the task of generating MSPs by prompting an LLM using the world model annotations.
## 2 Related Work
**Math story problems in NLP** Although the problem of automatically solving MSPs has gathered substantial interest in NLP Roy and Roth (2015); Kushman et al. (2014); Huang et al. (2017); Amini et al. (2019); Xie and Sun (2019); Drori et al. (2022), the focus has traditionally been on improving answer accuracy rather than providing didactic human-interpretable solutions Shridhar et al. (2022). Some approaches map the text to expression trees Koncel-Kedziorski et al. (2015); Yang et al. (2022); Roy and Roth (2017) or explicitly model arithmetic concepts Mitra and Baral (2016); Roy and Roth (2018). However, few if any computational works have attempted to solve MSPs by using mental models Johnson-Laird (1983), which are a common framework for analyzing how humans solve MSPs Kintsch and Greeno (1985). Taking inspiration from mental models of MSPs, we offer MathWorld as a computational model (fully expressible in first-order logic, App. D) which represents reasoning steps, arithmetic concepts and fictional elements in a human-readable graph format. We hope that such an approach can support intelligent tutoring systems Anderson et al. (1995), e.g., by delivering feedback and hints Zhou et al. (1999); Fossati (2008) or generating new MSPs Polozov et al. (2015); Koncel-Kedziorski et al. (2016); Srivastava and Goodman (2021).
In particular, we draw inspiration from Hosseini et al. (2014), who propose a symbolic approach that maps the text to container-based states. However, their symbolic representation is purely extracted from syntactic rules without human annotation. Further, their approach only covers problems that involve a transfer of some quantity between some actors (although they do not use that terminology), requiring addition and/or subtraction. In contrast, MathWorld is more closely tied to the MSP's semantics. It covers a strictly larger set of problem types, involving more concepts and all four basic arithmetic operators \((+,-,\times,\div)\). See Table 1 for a comparison between MathWorld and Hosseini et al. (2014), as well as Mitra and Baral (2016) and Roy and Roth (2018) from which we adopt the taxonomy over arithmetic concepts.
**Reasoning with large language models** LLMs have displayed impressive performance on numerical reasoning tasks Brown et al. (2020); Chowdhery et al. (2022), particularly with the help of careful prompt engineering Wei et al. (2022); Shridhar et al. (2023); Zhou et al. (2023). While language models have been argued to be intrinsically limited in their ability to perform human-like reasoning
(Bender and Koller, 2020), the mechanism by which they find answers in complex reasoning tasks is currently an active area of research (Tafjord et al., 2021; Saparov and He, 2023). MathWorld provides ground truth world model annotations, which is valuable in such studies (as demonstrated in SS 5.2). One other aspect of LLMs that may limit them when applied to reasoning is that they produce natural language text, which may be ambiguous and diverse. These considerations motivate us to study MSPs as structured representations of meaning, which can in turn be used to generate natural language (Saparov and Mitchell, 2022).
**Semantic parsing** MathWorld can be viewed as a domain-specific semantic formalism. Our work thus also relates closely to semantic parsing, particularly of graph-based structures (Banarescu et al., 2013; Cai and Lam, 2019; Zhang et al., 2019; Bai et al., 2022). However, while most other formalisms consider meaning only at the sentence level, our world model graphs span the meaning across multiple sentences.
## 3 MathWorld
In this section, we present our world model formalism MathWorld. We formalize an MSP as a sequence of \(n\) sentences \(\mathbf{s}=s_{1}\circ\cdots\circ s_{n}\). It can be separated into a **body**\(\mathbf{b}\) and a **question**\(q\), such that \(\mathbf{s}=\mathbf{b}\circ q\). The body is further partitioned into a sequence of \(n-1\) declarative sentences \(\mathbf{b}=s_{1}\circ\cdots\circ s_{n-1}\) and the question consists of a single interrogative sentence \(q=s_{n}\).
World models in MathWorld are directed and labelled graphs, denoted \(g\).1 We refer to the nodes of the graph as **containers** (SS 3.1) and the edges of the graph as **relations** (SS 3.2). Each container and relation is labelled with a set of properties. One such property is the **quantity**, which may be either an explicit number mentioned in text or a variable representing an unknown number. The containers and relations along with their properties specify the equations induced by the MSP. In addition, each \(g\) is associated with a **reference variable**\(r\), which points to the variable in \(g\) that holds the correct answer to the question as stated in \(q\). We consider each \(\mathbf{s}\) to be associated with some structure \((g,r)\).
Footnote 1: The graphs may be cyclic, although in practice they tend to be acyclic.
We say that \(g\) is **faithful** if it represents the semantics of the problem text according to the framework of MathWorld. Further, \(g\) is **complete** if \(r\) can be solved with the equations induced by \(g\). A complete world model is **correct** if, when evaluated, \(r\) gives the correct answer to the problem. See Fig. 1 for an example of a world model.
In order to allow for incremental parsing, we segment the world models into sentence-level logical forms \(m_{i}\), \(i=1,\ldots,n\). The logical form is a sequence that represents the containers and/or relations associated with the corresponding sentence.2 We can convert \((m_{1},\ldots,m_{n})\) to a world model graph, and vice versa. The two representations are nearly equivalent, with the exception of a few caveats (see App. F for details). There is no bound on the problem length and, by extension, the number of logical forms. MathWorld is thus able to represent problems with an arbitrary number of reasoning steps.

| | Arithmetic coverage | Conceptual coverage | Semantic granularity | Annotations? | Mapping to formal logic? |
| --- | --- | --- | --- | --- | --- |
| MathWorld | \((+,-,\times,\div)\) | Transfer, Rate, Comparison, Part-whole | World model | Yes | Yes |
| Hosseini et al. (2014) | \((+,-)\) | Transfer | World model | No | No |
| Mitra and Baral (2016) | \((+,-)\) | Transfer, Comparison (add), Part-whole | | Yes | No |
| Roy and Roth (2018) | \((+,-,\times,\div)\) | Transfer, Rate, Comparison, Part-whole | | No | Yes |

Table 1: Comparison between MathWorld and other MSP works that use a more fine-grained symbolics than equations alone. “Annotations” refers to whether those symbolics are explicitly provided in the dataset.
The assignment of logical forms may be ambiguous in the sense that there may be multiple faithful logical forms for a given sentence (discussed in App. B).
We consider subgraphs \(g_{i}\), for sentence \(i\), of the final graph \(g\). A subgraph \(g_{i}\) corresponds to the logical forms up to sentence \(i\), i.e., \((m_{1},\ldots,m_{i})\mapsto g_{i}\). We refer to the subgraph for some sentence index \(i\) as the **state** of \(i\). As an example of how world models are built incrementally with states, consider Fig. 1. The first sentence maps to the container for label _Will_ holding the entity _money_ of quantity \(83\) with unit _dollar_. The second sentence provides information on an update to Will's possessed money, a Transfer relation (SS 3.2.1). Finally, the question sentence introduces rate information, a Rate relation (SS 3.2.2), between money and toys.
In the next sections, we describe the details of containers and relations in depth.
### Containers
We adopt and modify the containers described in the model of Hosseini et al. (2014). Semantically, containers represent containment/possession. We refer to the possessor in the text as the **label** of the container.3 In Fig. 1, the container label is _Will_ for all containers (although in general the label can vary across containers). The label must be a noun plus any associated noun adjuncts (like _elementary school_). In addition to label, a container may have the following four properties:
Footnote 3: There may not always be an explicit possessor expressed in text. In such cases, we use the label _World_.
**Entity:** The entity is _what_ is possessed in the container. It is a noun, for which there may be an associated count. When expressed in a problem text, it must be the head of a noun phrase. In Fig. 1, _money_ and _toy_ are entities.4
Footnote 4: Note how the term _money_ is not actually expressed in the problem text. Similarly, the word _time_ will seldom be expressed in MSPs involving reasoning about time.
**Quantity:** The quantity is the number associated with the entity. It may be known, in which case it will be a positive real number, or unknown, in which case it will be a variable.
**Attribute:** The attribute is a modifier for the entity. It is often an adjective, but may take other forms as well. The attribute is an optional property.
**Unit:** The unit is the unit of measurement for the entity. A unit property must exist if the entity is a mass noun, but may exist in other cases as well. For example, "liter of water" and "kg of apples" will both be assigned to containers with units. The unit is an optional property.
Entity, attribute and unit are written in their lemmatized forms. The label is not, in order to be able to distinguish between a set (plural: _friends_) and an element of some set (singular: _friend_).
Note that the containers take a variable number of properties, having arity \(3\), \(4\) or \(5\). Two containers are **equal** if they have the same arity and the same properties. We refer to a container's **structure** as its container label, entity, attribute (if it exists) and unit (if it exists). Two containers are **structurally equal** if they have the same structure.
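To make the container structure concrete, here is a minimal sketch of how a container could be represented in code; the field names mirror the properties above, but the class itself is our own illustration and not part of any code released with MathWorld.

```python
from dataclasses import dataclass
from typing import Optional, Union

# A quantity is either a known positive number or the name of an unknown variable.
Quantity = Union[float, str]

@dataclass(frozen=True)
class Container:
    label: str                       # possessor, e.g. "Will" (not lemmatized)
    entity: str                      # what is possessed, lemmatized, e.g. "toy"
    quantity: Quantity               # e.g. 83 or "x1"
    attribute: Optional[str] = None  # optional modifier, e.g. "red"
    unit: Optional[str] = None       # optional unit of measurement, e.g. "dollar"

    def structure(self):
        """A container's structure: label, entity, attribute and unit (no quantity)."""
        return (self.label, self.entity, self.attribute, self.unit)

def structurally_equal(a: Container, b: Container) -> bool:
    # Two containers are structurally equal if their structures coincide.
    return a.structure() == b.structure()
```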
### Relations
Relations are the edges in \(g\). They represent the interactions between the various parts of the world model, from which the equations of the MSP are induced. The relations are directed, and the direction encodes semantics of the relation depending on the type of relation. Like containers, relations have properties. The properties and their arity also depend on the type of relation.
There are four types of relations: Transfer, Rate, Comparison and PartWhole. Together they span all four basic arithmetic operators \((+,-,\times,\div)\). Next, we give a detailed description of each of these relation types. Examples of world models with each relation type are provided in App. A.
#### 3.2.1 Transfer
Transfer relations model that a transfer of some quantity of an entity has occurred. A given container structure will either gain or lose quantity from a Transfer relation. For example, a sentence stating that Alice gave away 3 apples will correspond to a Transfer with a loss of 3 apples for the container labeled Alice. A Transfer is always between two containers of the same structure. The direction of the edge describes order: The source container will hold the quantity _before_ the transfer event occurred, and the target container will hold the quantity _after_ the transfer event occurred.
In addition to quantity, Transfer takes the following two properties:
**Recipient:** The label of the container structure where the quantity of the given entity is _gained_.
**Sender:** The label of the container structure where the quantity of the given entity is _lost_.
A recipient, a sender or both must exist. Transfer thus has arity \(2\) or \(3\). The Transfer relation either adds or subtracts the relation quantity to/from the source container quantity, depending on whether the relation connects the recipient containers or sender containers.
#### 3.2.2 Rate
The Rate relation models mathematical rate between two quantities. These two quantities are held in two separate containers with the same label, and the ratio quantity of the rate is given as a property to the relation. Rate has this one single property. The direction of the edge determines the relationship: The source container holds the numerator of the rate, and the target container holds the denominator of the rate. In the example in Fig. 1, the source container holds the entity _money_ and the target container holds the entity _toy_, indicating that the rate quantity concerns _money per toy_. Mathematically, Rate implies that the source quantity divided by the relation quantity equals the target quantity.
#### 3.2.3 Comparison
Comparison is invoked when there is an explicit relationship between two quantities in the MSP. For example, "Alice is twice as old as Bob". The Comparison relation may be either between containers with different labels, such as "Alice has 3 more apples than Bob", or between containers with the same label, such as "Alice has 3 more red apples than she has green apples". It takes two properties, quantity and type:
**Type:** The arithmetic operation type of the Comparison. It can take one of two values: _add_ (indicating addition) or _mul_ (indicating multiplication).
The quantity held in the source container is the one that is combined with the quantity of the Comparison relation under the arithmetic operator, the output of which will be the quantity held in the target container.
#### 3.2.4 PartWhole
PartWhole relations model set partitions. The set represented by some container is partitioned into subsets, each of which is represented by another container. For each of the subset containers (the parts), there is an outgoing edge to the container with the superset (the whole). Thus, PartWhole implies that for a given container that has ingoing PartWhole edges, the sum over the quantities in the source containers of those edges equals the quantity in the target container. Note that PartWhole differs from the other relations in that it requires multiple edges to induce an equation.5 In most cases, all containers involved in a PartWhole relation will have the same label. The relation can then be viewed as a relation between entities possessed by a specific label. For instance, "Alice has 3 red apples and 6 green apples, how many apples does she have in total?" would be represented by PartWhole. PartWhole relations have no properties.
Footnote 5: Note that a PartWhole relation can be equivalently represented as a hyperedge.
PartWhole relations may represent meaning that is not explicit in text. Parsing the text of a problem that requires PartWhole might thus lead to an incomplete (SS 3) world model, which may require additional assumptions. In addition, orienting PartWhole relations might require common-sense knowledge. For instance, a problem might introduce a quantity for tables and a quantity for chairs, and ask for the total number of pieces of furniture.
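To illustrate how the four relation types induce equations, the following sketch builds symbolic equations with SymPy. The flat function signatures are a simplification for exposition (they are not the annotation format), and the numbers 47 and 4 in the example are made up; only the quantity 83 comes from Fig. 1.

```python
import sympy as sp

def transfer_eq(src_qty, rel_qty, tgt_qty, label_is_sender: bool):
    """Transfer: target = source - q if the container label is the sender, else source + q."""
    return sp.Eq(tgt_qty, src_qty - rel_qty if label_is_sender else src_qty + rel_qty)

def rate_eq(numerator_qty, rate_qty, denominator_qty):
    """Rate: source quantity (numerator) divided by the rate equals the target quantity."""
    return sp.Eq(numerator_qty / rate_qty, denominator_qty)

def comparison_eq(src_qty, rel_qty, tgt_qty, kind: str):
    """Comparison: target = source + q ('add') or target = source * q ('mul')."""
    return sp.Eq(tgt_qty, src_qty + rel_qty if kind == "add" else src_qty * rel_qty)

def part_whole_eq(part_qtys, whole_qty):
    """PartWhole: the quantities of the parts sum to the quantity of the whole."""
    return sp.Eq(sp.Add(*part_qtys), whole_qty)

# Hypothetical Fig. 1-style world model: Will has 83 dollars, spends 47, and toys
# cost 4 dollars each (a Rate between money and toys).
x, t = sp.symbols("x t", positive=True)
eqs = [transfer_eq(83, 47, x, label_is_sender=True), rate_eq(x, 4, t)]
print(sp.solve(eqs, [x, t]))  # -> {x: 36, t: 9}
```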
### World model equivalence and similarity
One of the principal utilities of MathWorld is to allow for evaluating models on their reasoning ability. For that we need consistent equivalence notions and similarity metrics between world models, which we provide here.
We say that \(g\) and \(g^{\prime}\) are **isomorphic** if there exists an isomorphism on the underlying graphs that additionally preserves relation types. We consider two forms of equivalence notions between world models, which we call strong and weak equivalence. Weak equivalence deems two world models to be equal if they are isomorphic. Strong equivalence additionally requires all properties of the containers and relations to be equal.6 In addition, we create two similarity scores based on the AMR metric _smatch_ [10]: Weak match considers graph topology in the same way as our isomorphism equivalence, and strong match additionally considers all properties of the world models. We give details on these similarity scores in App. C.
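Assuming world models are stored as NetworkX directed graphs with containers as nodes, at most one relation per ordered container pair, and the relation type stored in a 'type' edge attribute, the weak-equivalence check can be sketched as follows; this mirrors the definition above and is not the smatch-based evaluation code described in App. C.

```python
import networkx as nx
from networkx.algorithms.isomorphism import categorical_edge_match

def weakly_equivalent(g1: nx.DiGraph, g2: nx.DiGraph) -> bool:
    """Weak equivalence: an isomorphism of the underlying directed graphs
    that additionally preserves the relation type stored on each edge."""
    return nx.is_isomorphic(g1, g2, edge_match=categorical_edge_match("type", None))

# Toy example: containers are nodes, relations are typed edges.
a, b, c = nx.DiGraph(), nx.DiGraph(), nx.DiGraph()
a.add_edge("c0", "c1", type="Transfer")
b.add_edge("u", "v", type="Transfer")
c.add_edge("p", "q", type="Rate")
print(weakly_equivalent(a, b))  # True: same topology and relation types
print(weakly_equivalent(a, c))  # False: relation types differ
```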
### Comparison to other logical formalisms
MathWorld can be fully expressed in first-order logic (FOL). We provide a constructive proof in the form of a conversion in App. D, which enables comparison of the expressive power of MathWorld with that of other formalisms. Both AMR and MathWorld restrict the expressivity of full FOL in different ways. AMR provides a way to express negation (the polarity relation) but does not provide a way to directly express universal quantification7(Bos, 2016). MathWorld represents sets of objects as containers and enables universal quantification over those sets. This is restricted, however, as MathWorld does not allow the definition of sets of sets, or nested universal quantification.8 Negation is not directly expressible in MathWorld, as it is designed for the domain of MSPs where negation is quite rare.
Footnote 7: It is possible to do so indirectly, as in \(\neg\exists x.\neg\phi(x)\equiv\forall x.\phi(x)\), but this can only be done once per sentence.
Footnote 8: This disallows higher-order expressions, e.g., Comparison relations between quantities expressed in Transfer relations. It also disallows nested possession outside of what is made possible under Rate, e.g., structures like “Alice has a house that has a shelf that has a book that has 200 pages.”
MathWorld is more comparable to _situation calculus_(McCarthy, 1963), where each relation can be modeled as an action that changes the state of the world. Like situation calculus, the changing world state over time is implicitly represented in MathWorld (via the Transfer relation), whereas in FOL, an explicit description of the time of each event is necessary.
## 4 Data Collection
In order to study how models are able to answer MSPs, convert them to logical form, perform world modeling, and reason mathematically to find the answer, we require a diverse dataset of labeled MSPs that spans all concepts covered by MathWorld. To ensure diversity and wide variety in the examples, we collect them from numerous sources:
1. The math word problem repository MAWPS (Koncel-Kedziorski et al., 2016) gathers several datasets (Hosseini et al., 2014; Kushman et al., 2014; Koncel-Kedziorski et al., 2015; Roy and Roth, 2015), thus providing a wide variety of MSPs.
2. To complement with more challenging problems, we also adopt problems from ASDiv-A (Miao et al., 2020), which was designed for linguistic diversity and math concept diversity.
3. We also annotate a subset of the SVAMP dataset (Patel et al., 2021), which was introduced as a challenge set to test robustness to data artifacts. This enables future work to test the robustness of MathWorld parsers.
We randomly sample a subset from each of these three datasets,9 and annotate them with world models. We obtain \(1,019\) MSPs, corresponding to \(3,204\) logical forms, which we partition into \(80/20\) train/test splits. Table 2 provides more details.
Footnote 9: We also considered the larger GSM8K dataset (Cobbe et al., 2021), which contains problems with more reasoning steps. However, although we found MathWorld to cover many of its MSPs, annotation workers were unable to reliably annotate these problems. Future work may aim to augment the data to assign ground truth world model structures to longer MSPs, using techniques similar to those demonstrated in § 5.3.
We hire external workers for annotation. Annotation follows three phases: a first training phase where annotators are given several small sets at a time with follow-up discussion sessions, an agreement phase in which all annotators are given the same problems, and a final scale-up phase. We use an annotation tool created specifically for this work (shown in App. E.2). The problems are annotated incrementally sentence-by-sentence, in order to match logical forms to sentences as described in SS 3. Questions are hidden from annotators until all preceding sentences are completed, in order to avoid bias stemming from having read the question; MathWorld is meant to capture the world model of the problem irrespective of what is asked in the question. Within sentences, we ask annotators to add containers and relations according to the order in which they occur in text. This allows us to write the logical forms according to within-sentence order when creating training data for semantic parsing. We maintain this order with integer IDs that are incremented in the annotation tool.

| | Train MSPs | Train LFs | Test MSPs | Test LFs |
| --- | --- | --- | --- | --- |
| ASDiv-A | 328 | 1,052 | 83 | 272 |
| MAWPS | 312 | 936 | 79 | 235 |
| SVAMP | 173 | 563 | 44 | 146 |
| TOTAL | 813 | 2,551 | 206 | 653 |

Table 2: Size of annotated dataset in terms of number of MSPs and number of sentence-aligned logical forms (LFs), stratified by dataset of origin and split.
We performed an agreement analysis of \(125\) overlapping MSPs, revealing a high agreement rate considering the complexity of the annotation task. Concretely, \(61\) out of these \(125\) were strongly equivalent (SS 3.3) across annotators, and \(107\) were weakly equivalent (SS 3.3). Many of the only weakly equivalent annotations were due to ambiguity in the properties (App. B.1), and almost half of the \(18\) non-agreed problems were due to ambiguity in relation type (App. B.2). The strong and weak match scores were \(0.91\) and \(0.97\) respectively. These can be interpreted as approximate upper bounds on the match scores achievable by any model, due to the ambiguity in the dataset. Many of the annotation errors, also outside of the overlapping set, could be either corrected or discarded _ex post_. Further details on annotation are given in App. E.
## 5 Applications of MathWorld
In this section we showcase some applications of MathWorld: solving (SS 5.1), probing of reasoning (SS 5.2) and generation of new MSPs (SS 5.3).
### Parsing and Reasoning
We spell out a framework for solving MSPs using MathWorld. The framework consists of two components: A _parser_ and a _reasoner_. The parser is tasked with assigning a faithful world model \(g\) to an input problem \(\mathbf{s}\), along with a reference variable \(r\). The reasoner is then queried with \(r\) and computes an answer based on the induced equations of \(g\). We also present a set of initial experiments, meant to introduce the task of MathWorld parsing to the community.
#### 5.1.1 Parser
Given an MSP \(\mathbf{s}\), the task is to assign a world model \(g\). The first step is to predict the sequence of logical forms \(m_{1},\ldots,m_{n}\). We model this as a conditional distribution
\[p(m_{1},\ldots,m_{n}\mid\mathbf{s})=\prod_{i=1}^{n}p(m_{i}\mid s_{1},\ldots,s _{i}). \tag{1}\]
With this factorization, we can parse the graph incrementally one sentence at a time. The factorization is based on two assumptions: \(m_{i}\perp s_{j},\forall i<j\) and \(m_{i}\perp m_{j},\forall i\neq j\). Both are aligned with MathWorld as outlined in SS 3: the first assumption means that a logical form is independent of the sentences in subsequent steps, and the second assumption means that logical forms are independent of each other. Dependencies of logical forms on preceding sentences are kept due to coreferences, elliptical constructions and other inter-sentence dependencies.
As explained in SS 3, the logical forms are linearized representations of the world model graphs. Thus, our pipeline (as well as applications like those demonstrated in SS 5) requires that we are able to convert from one representation to the other: World model graphs must be converted to logical forms in order to create training data for a semantic parser, and the predicted logical forms must be converted to world model graphs and reference variables for visualization and reasoning. The details of this conversion are given in App. F.
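The factorization in Eq. (1) translates directly into a sentence-by-sentence parsing loop. The sketch below assumes a generic `complete(prompt)` wrapper around whatever LLM is used (Codex in the baseline experiments of SS 5.1.3) and a `parse_logical_form` helper that validates the raw output; both names are placeholders rather than parts of a released API.

```python
def parse_msp(sentences, exemplars, complete, parse_logical_form):
    """Predict one logical form m_i per sentence s_i, conditioning only on
    s_1, ..., s_i as in the factorization of Eq. (1)."""
    logical_forms = []
    for i in range(len(sentences)):
        prompt = exemplars                       # in-context (sentences, logical form) examples
        prompt += "\n".join(sentences[: i + 1])  # s_1, ..., s_i
        prompt += "\nLogical form:"
        raw = complete(prompt)
        logical_forms.append(parse_logical_form(raw))
    return logical_forms  # converted to a world model graph afterwards (App. F)
```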
#### 5.1.2 Reasoner
Once we have a world model graph, we apply a reasoning algorithm over the graph to compute an answer. The reasoner takes a world model and a reference variable, and outputs a numeric value for the reference variable \(r\). Our implementation is deterministic and follows two steps. First, it extracts all equations induced by the world model (as described in SS 3.2 and illustrated in App. A). Second, it solves for \(r\) using a recursive algorithm. Full pseudocode along with a discussion is presented in App. H.10
Footnote 10: We note that annotated world models are not necessarily complete (def. in § 3). Annotators were requested to only build world models that represent what is made explicit in the text. Some problems may require additional background knowledge to build a complete world model.
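A minimal sketch of such a reasoner, using SymPy to solve the induced system for the reference variable; the deterministic recursive procedure in App. H is the actual algorithm, so the solver call below is only a stand-in.

```python
import sympy as sp

def answer(equations, reference_variable):
    """Solve the equations induced by a world model for the reference variable.
    Returns None when the world model is incomplete (not enough equations)."""
    unknowns = sorted({s for eq in equations for s in eq.free_symbols},
                      key=lambda s: s.name)
    solutions = sp.solve(equations, unknowns, dict=True)
    if not solutions or reference_variable not in solutions[0]:
        return None
    return solutions[0][reference_variable]

# Reusing the hypothetical equations from the relation example above:
x, t = sp.symbols("x t", positive=True)
print(answer([sp.Eq(x, 83 - 47), sp.Eq(x / 4, t)], t))  # 9
```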
#### 5.1.3 Baseline solving experiments
We demonstrate our proposed modeling framework with a baseline semantic parser, in the form of a large language model that is supervised in-context. We use Codex (Chen et al., 2021), as language models trained on code have been previously shown to perform well on structured prediction tasks (Madaan et al., 2022; Drozdov et al., 2023). The prompt contains 50 ground truth examples from MAWPS and ASDiv-A, and we evaluate the model on the test sets of MAWPS, ASDiv-A and SVAMP. We also implement a rule-based baseline system, based on Hosseini et al. (2014).
Our results corroborate that this is a challenging task; for the least difficult dataset the model gets roughly one third of the problems correct, and predicts a complete world model for only slightly more than half of the problems. The rule-based baseline gets nearly no problems correct. Indeed, a model
must, for each sentence, produce well-formed logical forms that exhaustively and correctly capture the semantics in MathWorld, combine these into a world model and query the reasoner with the correct reference variable. One mistake in any of these steps may lead to an incorrect answer. With much progress and research interest in semantic parsing in recent years Shin et al. (2021); Qiu et al. (2022) there are several promising directions for improvement, and we invite the research community to help in developing stronger semantic parsers for this challenging task. Further details on the setup and results can be found in App. I.1.
### Probing LLMs' partial knowledge
World models enable us to study the reasoning ability of LLMs: Beyond just testing whether a model outputs the correct solution to an MSP, we can test whether the model follows a correct reasoning path and accurately builds world model representations.
**Setup** We design question and answer templates that are automatically filled based on information in the world model. Two examples of such templates are given in Fig. 2 and a list of all templates is given in App. I.3. Courtesy of the world model, we know the true answer to each of these synthetic questions, enabling us to create prompts with question-answer pairs.
We experiment with three types of prompts, all displayed with full-length examples in Table 8: _(1) synth QA (all at once)_. We first include the complete problem text, followed by synthetic question and answer pairs related to some part of the text. We randomly sample two such pairs; _(2) synth QA (sentence-by-sentence)_. We again sample two question-answer pairs at random, but in this setting they are imputed right after the sentence in which the answer to the question is given; _(3) original MSP QA_. Under this setting we do not include any synthetic question-answer pairs, only the original text. All prompts end with the MSP question that we aim to solve followed by "A:". We study both whether the synthetic questions help the model answer the MSP correctly, and how well the model answers the synthetic questions themselves.
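As an illustration, prompts for setting (2) can be assembled as in the sketch below. The synthetic pairs are assumed to be indexed by the body sentence that answers them; `qa_by_sentence` and the exact prompt formatting are illustrative choices, not the exact prompts used in our experiments.

```python
import random

def sentence_by_sentence_prompt(body_sentences, question, qa_by_sentence, n_pairs=2):
    """Setting (2): impute sampled synthetic QA pairs right after the sentence
    in which their answer is given, then end with the original MSP question."""
    candidates = [(i, qa) for i, qas in qa_by_sentence.items() for qa in qas]
    chosen = random.sample(candidates, min(n_pairs, len(candidates)))
    imputed = {}
    for i, qa in chosen:
        imputed.setdefault(i, []).append(qa)
    parts = []
    for i, sentence in enumerate(body_sentences):
        parts.append(sentence)
        for q, a in imputed.get(i, []):
            parts.append(f"Q: {q} A: {a}")
    parts.append(f"{question} A:")  # all prompts end with the MSP question and "A:"
    return " ".join(parts)
```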
**Results** We report results obtained by GPT-3 Brown et al. (2020) on the combined test set of all three datasets in Table 3. The number of in-context examples is either 0 or 1. We observe increased performance when including synthetic question-answer pairs, particularly in setting (2) where the questions are imputed at the relevant part of the MSP text. We hypothesize that doing so helps guide the reasoning trace of the model, in a similar vein as chain-of-thought prompting Wei et al. (2022). Further, we find that GPT-2 Radford et al. (2019), BART Lewis et al. (2020), Codex Chen et al. (2021), T5 Raffel et al. (2020) and NT5 Yang et al. (2021) overall perform poorly, but benefit from an increase in performance when synthetic question-answer pairs are provided.
We further compare the ability of GPT-3 to answer the intermediate synthetic questions to its ability to answer the original final question. For each MSP, we first select a container or relation uniformly at random and then create a synthetic question. We then ask both the synthetic question and the original question at the end of two separate prompts in a zero-shot setting. Table 4 displays the results. Interestingly, in more than one third of the cases in which the model gets the original question right (top row), it gets the intermediate synthetic question wrong (top right cell).

| QA type | from 0 MSPs | from 1 MSP |
| --- | --- | --- |
| (1) synth QAs (all at once) | 70.8 | 71.8 |
| (2) synth QAs (sent by sent) | 71.3 | **78.6** |
| (3) original MSP QAs | 69.4 | 70.8 |

Table 3: Results obtained by GPT-3 in answering math story problems reported in accuracy percent. A larger increase in performance is observed when the synthetic question-answer pairs are presented at the relevant part of the text, rather than at the end.
Figure 2: Synthetically created question-answer pairs based on templates. Note that the quantity in the container or relation does not need to be expressed in text, but could be a variable. Such cases test the model’s ability to reason over intermediate quantities.
Overall, the model also shows a higher accuracy on the original questions (top row) than on the synthetic intermediate questions (left column). While some of these results could be explained by the nature of the templated questions, they do seem to indicate that the model makes use of heuristics rather than human-like reasoning when solving MSPs Patel et al. (2021).
### Generation of MSPs
MathWorld can be considered a design space in which a practitioner can create new MSPs with certain desired features. For instance, a teacher may be interested in generating variations of an MSP to test a specific mathematical concept with a specific unknown variable. To demonstrate the potential for such applications, we provide a small proof-of-concept experiment.
**Setup** We use GPT 3.5 Turbo Ouyang et al. (2022) with a prompt of 30 examples from the train sets of MAWPS and ASDiv-A. One example consists of the logical forms for a full MSP world model (source) followed by the text of the MSP (target). We separate sentence-aligned logical forms in the source as well as the sentences in the target by a marker, so that the model can pick up the alignment patterns. The ground truth examples are sampled randomly. To generate a new MSP conditioned on a world model, we append the logical form corresponding to the world model to the end of the prompt. We try generating new MSPs both based on (i) world models present in our annotated test sets (paraphrasing) and (ii) manual augmentations of annotated world models. We perform evaluation for setting (i) using SacreBLEU Post (2018) and BERTScore Zhang et al. (2020), comparing all MSPs in the test sets to their paraphrases.11
Footnote 11: More details on the generation setup are given in App. I.2.
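The paraphrase evaluation in setting (i) can be reproduced with the standard `sacrebleu` and `bert-score` packages; the snippet below is a sketch of that computation (corpus-level BLEU and mean F1 BERTScore against the original MSPs) rather than the exact evaluation script.

```python
import sacrebleu
from bert_score import score as bert_score

def evaluate_paraphrases(generated, originals):
    """generated, originals: parallel lists of MSP texts for one test set."""
    bleu = sacrebleu.corpus_bleu(generated, [originals]).score
    _, _, f1 = bert_score(generated, originals, lang="en")
    return bleu, f1.mean().item()

# Hypothetical usage, one call per test set (MAWPS, ASDiv-A, SVAMP):
# bleu, f1 = evaluate_paraphrases(generated_msps, original_msps)
```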
**Results** We obtain SacreBLEU scores of \(66.73\), \(40.86\) and \(26.02\) and F1 BERTScores of \(0.933\), \(0.930\) and \(0.931\) for MAWPS, ASDiv-A and SVAMP respectively. Qualitatively, we observe that the generated MSPs mostly stay faithful to the logical forms but tend to be shorter and less linguistically complex than the original problems, which would explain the comparatively low SacreBLEU scores in comparison to the BERTScores. Further, we give the first six examples we generated according to the described setup. One of them is shown in Fig. 3. The model generates an output MSP very similar to the original, having only accessed the original's ground truth logical forms. We further augment the original world model by changing the Transfer to a Rate. Note how the generated MSP is faithful to the augmented world model. The other five examples are shown in Table 6.
## 6 Conclusion
In this work, we have presented a novel formalism, MathWorld, for expressing the semantics of math story problems. We have annotated a MathWorld corpus consisting of \(1,019\) problems and \(3,204\) logical forms. A world model derived from MathWorld exposes the structure of the reasoning process needed to solve the problem, which benefits several applications as we have demonstrated in SS 5. As such, we hope that MathWorld will promote use cases beyond just improved MSP solving, ranging from automated chain-of-thought prompting to math problem generation.

| | Synthetic Question Correct | Synthetic Question Wrong |
| --- | --- | --- |
| Original Question Correct | 46.0% | 25.7% |
| Original Question Wrong | 11.0% | 17.3% |

Table 4: We test whether the model gets synthetic questions about parts of the world model right and compare it against its performance on the original question.
Figure 3: Example MSPs generated by GPT 3.5 Turbo.
### Limitations
MathWorld is limited to math story problems that use the four basic arithmetic operators. Furthermore, within the space of such problems, it does not cover "second-order" MSPs (as discussed in SS 3.4). Nor does it cover negation or inequalities.
We only consider datasets with MSPs written in English in this work. However, MathWorld should in principle be able to cover the same type of problems formulated in other languages as well.
An obvious limitation of this work is the low performance on the task of solving MSPs. The focus of this work is to introduce the world model formalism and its use cases, and we leave for future work to build stronger MathWorld parsers.
## Ethics Statement
We foresee no major ethical concerns with this work. The introduction of MathWorld is aimed at improving the interpretability and robustness of existing and future models for math story problem solving. On this account, we hope to contribute to identifying (and hopefully reducing) existing biases in pre-trained language models, or any future alternatives. However, we would like to caution that the formalism could be used to generate inappropriate math story problems.
## Acknowledgements
We thank Arnav Mishra, Aryaman Kolhe, Devraj Thakur, Gaurav Saini and Soham Bopardikar for help with annotation work. We further thank Jakub Macina, Kumar Shridhar and Menna El-Assady for input in the early stages of the project, Ethan Wilcox and Ying Jiao for helpful feedback, and Yixiong Wang for help in implementation of a symbolic baseline solver. Andreas Opedal is partially supported by the Max Planck ETH Center for Learning Systems. Niklas Stoehr acknowledges funding from the Swiss Data Science Center.
|
2306.12525 | LPFormer: LiDAR Pose Estimation Transformer with Multi-Task Network | Due to the difficulty of acquiring large-scale 3D human keypoint annotation,
previous methods for 3D human pose estimation (HPE) have often relied on 2D
image features and sequential 2D annotations. Furthermore, the training of
these networks typically assumes the prediction of a human bounding box and the
accurate alignment of 3D point clouds with 2D images, making direct application
in real-world scenarios challenging. In this paper, we present the 1st
framework for end-to-end 3D human pose estimation, named LPFormer, which uses
only LiDAR as its input along with its corresponding 3D annotations. LPFormer
consists of two stages: firstly, it identifies the human bounding box and
extracts multi-level feature representations, and secondly, it utilizes a
transformer-based network to predict human keypoints based on these features.
Our method demonstrates that 3D HPE can be seamlessly integrated into a strong
LiDAR perception network and benefit from the features extracted by the
network. Experimental results on the Waymo Open Dataset demonstrate the
state-of-the-art performance, and improvements even compared to previous
multi-modal solutions. | Dongqiangzi Ye, Yufei Xie, Weijia Chen, Zixiang Zhou, Lingting Ge, Hassan Foroosh | 2023-06-21T19:20:15Z | http://arxiv.org/abs/2306.12525v2 | # LPFormer: LiDAR Pose Estimation Transformer with Multi-Task Network
###### Abstract
In this technical report, we present the 1st place solution for the 2023 Waymo Open Dataset Pose Estimation challenge. Due to the difficulty of acquiring large-scale 3D human keypoint annotation, previous methods have commonly relied on 2D image features and 2D sequential annotations for 3D human pose estimation. In contrast, our proposed method, named LPFormer, uses only LiDAR as its input along with its corresponding 3D annotations. LPFormer consists of two stages: the first stage detects the human bounding box and extracts multi-level feature representations, while the second stage employs a transformer-based network to regress the human keypoints using these features. Experimental results on the Waymo Open Dataset demonstrate the top performance, and improvements even compared to previous multi-modal solutions.
## 1 Introduction
Human pose estimation has gained significant popularity in the image and video domain due to its wide range of applications. However, pose estimation using 3D inputs, such as LiDAR point cloud, has received less attention due to the difficulty associated with acquiring accurate 3D annotations. As a result, previous methods [3, 18, 16] on LiDAR-based human pose estimation commonly rely on weakly-supervised approaches that utilize 2D annotations. These approaches often assume precise calibration between camera and LiDAR inputs. However, in real-world scenarios, small errors in annotations or calibration can propagate into significant errors in 3D space, thereby affecting the training of the network. Additionally, due to the differences in perspective, it is difficult to accurately recover important visibility information by simply lifting 2D annotations to the 3D space.
In image-based human pose estimation, the dominant approach is the top-down method [9], which first detects the human bounding box and then predicts the single-person pose from the cropped features. However, a significant gap exists between the backbone networks of 2D and 3D detectors. Most LiDAR object detectors utilize projected bird's-eye view (BEV) features to detect objects, which helps reduce computational costs. This procedure leads to the loss of separable features along the height dimension that are crucial for human pose estimation. How to effectively use the learned object features for human pose estimation therefore remains unexplored.
In this technical report, we present LPFormer, a complete two-stage top-down 3D human pose estimation framework that uses only the LiDAR point cloud as input and is trained solely on 3D annotations. In the first stage, we adopt the design of LidarMultiNet [14], our previous state-of-the-art LiDAR multi-task network, which accurately predicts human bounding boxes while generating fine-grained voxel features at a smaller scale. The second stage extracts point-level, voxel-level, and object-level features of the point cloud inside each predicted bounding box and regresses the keypoints with a lightweight transformer-based network. Our approach demonstrates that complex human pose estimation tasks can be seamlessly integrated into the LiDAR multi-task learning framework (as shown in Figure 1), achieving state-of-the-art performance without
Figure 1: Our method can predict 3D keypoints (red points with yellow wireframes), 3D bounding boxes, and 3D semantic segmentation in a single framework.
the need for image features or annotations.
## 2 Related Work
**Image-based 3D human pose estimation** 3D human pose estimation (HPE) has been extensively studied based solely on camera images, where the human pose is represented either as a parametric mesh model such as SMPL [5] or as skeleton-based keypoints. Previous works in this field can generally be categorized into two main approaches: top-down [4, 9] and bottom-up methods [2, 11]. Top-down methods decouple the pose estimation problem into individual human detection, using an off-the-shelf object detection network, and single-person pose estimation on the cropped object region. In contrast, bottom-up methods first estimate instance-agnostic keypoints and then group them together [2], or directly regress the joint parameters using a center-based feature representation [11]. Some recent works [8, 13] explored using a transformer decoder to estimate human pose in an end-to-end fashion, following the set matching design of DETR [1]. However, image-based 3D HPE suffers from inaccuracies and is considered hard to apply to larger-scale outdoor scenes due to frequent occlusion and the difficulty of depth estimation.
**LiDAR-based 3D human pose estimation** To address the depth ambiguity problem, some researchers [12, 19] explored using depth images for 3D HPE. Compared to depth images, LiDAR point clouds have a larger range and are particularly applicable to outdoor scenes, such as in autonomous driving applications. Waymo recently released human joint keypoint annotations on both the associated 2D images and the 3D LiDAR point clouds in the Waymo Open Dataset [10]. However, due to the lack of sufficient 3D annotations, previous works [16, 18] have focused on semi-supervised learning approaches. These approaches lift 2D annotations to the 3D space and rely on the fusion of image and LiDAR features for the HPE task.
## 3 Method
LPFormer is a two-stage LiDAR-only model designed for 3D pose estimation. Figure 2 provides an overview of our framework. The input to LPFormer only consists of point clouds, represented as a set of LiDAR points \(P=\{p_{i}|p_{i}\in\mathbb{R}^{3+C_{point}}\}_{i=1}^{N}\), where \(N\) denotes the number of points and \(C_{point}\) includes additional features such as intensity, elongation, and timestamp for each point. In the first stage, we employ a powerful multi-task network [14] that accurately predicts 3D object detection and 3D semantic segmentation, incorporating meaningful semantic features. Inspired by a recent work [16], our second stage leverages a transformer-based model. This model takes various
Figure 2: **Main Architecture of LPFormer. Our network aims to estimate the 3D human pose for the entire frame based on the LiDAR-only input. It is comprised of two main components. The left part (blue) represents our powerful multi-task network, LidarMultiNet [14], which generates accurate 3D object detection and provides rich voxel and bird’s-eye-view (BEV) features. The right part (green) corresponds to our Keypoint Transformer (KPTR), predicting the 3D keypoints of each human box using various inputs from our first-stage network.**
outputs from the first stage as inputs and generates 3D human keypoints \(Y_{kp}\in\mathbb{R}^{N_{kp}\times 3}\) along with their corresponding visibilities \(Y_{vis}\in\mathbb{R}^{N_{kp}}\), where \(N_{kp}\) is the number of 3D keypoints.
### First Stage Detection
The first stage of our LPFormer adopts the methodology of LidarMultiNet [14] for extracting point clouds features from raw point clouds \(P\). Illustrated in Figure 2, it consists of a 3D encoder-decoder structure with Global Context Pooling (GCP) module in between. The 3D object detection predictions are obtained through the 3D detection head, which is attached to the dense 2D BEV feature map.
**Enriching point features with multi-level feature embedding** Within each detected bounding box, the points undergo a local coordinate transformation involving translation and rotation. Subsequently, the transformed points are concatenated with their corresponding original point features, resulting in \(P_{point}\in\mathbb{R}^{M\times N_{max}\times(3+C_{point})}\), where \(M\) is the number of bounding boxes and \(N_{max}\) represents the maximum number of points within each bounding box. For each box, we randomly shuffle and remove extra points, and pad with zeros if the number of points within a box is less than \(N_{max}\). Additionally, we generate point voxel features \(P_{voxel}\in\mathbb{R}^{M\times N_{max}\times C_{voxel}}\) by gathering the 3D sparse features from the decoder using their corresponding voxelization index, where \(C_{voxel}\) denotes the channel size of the last stage of the decoder. Similar to CenterPoint [15], for each bounding box, we adopt the BEV features at its center as well as the centers of its edges in the 2D BEV feature map as the box features \(B\in\mathbb{R}^{M\times(5\times C_{BEV})}\).
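To make the per-box point preparation concrete, the sketch below illustrates the local coordinate transformation, shuffling, and zero-padding of the points inside each predicted box. It is a simplified PyTorch illustration, not the authors' implementation; the box layout `(x, y, z, l, w, h, yaw)` and the function name are assumptions.

```python
import torch

def crop_and_normalize(points, boxes, n_max=1024):
    """points: (N, 3 + C_point) LiDAR points; boxes: (M, 7) as (x, y, z, l, w, h, yaw).
    Returns P_point of shape (M, n_max, 3 + C_point) with xyz in each box's local frame."""
    M = boxes.shape[0]
    out = points.new_zeros(M, n_max, points.shape[1])
    for m in range(M):
        l, w, h, yaw = boxes[m, 3], boxes[m, 4], boxes[m, 5], boxes[m, 6]
        local = points[:, :3] - boxes[m, :3]           # translate to the box center
        c, s = torch.cos(-yaw), torch.sin(-yaw)        # rotate by -yaw around the z-axis
        rot = torch.stack([torch.stack([c, -s]), torch.stack([s, c])])
        local[:, :2] = local[:, :2] @ rot.T
        inside = (local[:, 0].abs() < l / 2) & (local[:, 1].abs() < w / 2) & (local[:, 2].abs() < h / 2)
        idx = torch.nonzero(inside).squeeze(1)
        idx = idx[torch.randperm(idx.numel(), device=points.device)][:n_max]  # shuffle, drop extras
        out[m, :idx.numel(), :3] = local[idx]          # transformed coordinates
        out[m, :idx.numel(), 3:] = points[idx, 3:]     # original extra point features
    return out
```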
### Second Stage Keypoint Transformer
By leveraging the capabilities of the robust first-stage model LidarMultiNet [14], our second stage is able to exploit valuable semantic features for capturing intricate object details, including the human 3D pose. Different from LidarMultiNet [14], we choose a transformer architecture instead of a PointNet-like [7] structure as our second stage, in order to effectively understand 3D keypoints by leveraging local point information through an attention mechanism. The details of our second stage are shown in Figure 3.
Specifically, our second stage takes various features from local points features \(P_{point}\), semantic voxel-wise points features \(P_{voxel}\), and box-wise features \(B\) to predict 3D keypoints for each pedestrian or cyclist box. Starting with a box-wise feature \(B\), we first employ a multilayer perceptron (MLP) to compress its dimensions from \(\mathbb{R}^{5\times C_{BEV}}\) to \(\mathbb{R}^{C_{compressed}}\). This compressed box-wise feature is then replicated as \(P_{box}\in\mathbb{R}^{N_{max}\times C_{compressed}}\) and combined with point-wise features \(P_{point}\) and \(P_{voxel}\), resulting in \(P_{cat}\in\mathbb{R}^{N_{max}\times(3+C_{point}+C_{voxel}+C_{compressed})}\). The fused point-wise features are subjected to a simple matrix multiplication, yielding \(X_{point}\in\mathbb{R}^{N_{max}\times C_{tr}}\), which serves as one part of the input for Keypoint Transformer (KPTR). The other input for KPTR is a learnable 3D keypoints query
Figure 3: **Illustration of Keypoint Transformer (KPTR). In the initial stage of our KPTR, we start by compressing the feature dimension of the box features. These compressed box features are then repeated and concatenated with the point features and point voxel features. The keypoint queries are generated from learnable embedding features. Then L sequences of KPTR operations are performed on the keypoint queries and point tokens. Finally, the keypoint queries are passed through three distinct MLPs to learn the XY offsets, Z offsets, and visibilities of the 3D keypoints. Simultaneously, the point tokens are processed by an MLP to learn the point-wise segmentation labels for the 3D keypoints, which serves as an auxiliary task.**
\(X_{kp}\in\mathbb{R}^{N_{kp}\times C_{tr}}\). Subsequently, we employ KPTR, which consists of L blocks of a multi-head self-attention and a feed-forward network, to learn internal features \(X^{{}^{\prime}}_{point}\) and \(X^{{}^{\prime}}_{kp}\). Finally, the keypoints' internal features \(X^{{}^{\prime}}_{kp}\) are fed into three separate MLPs to predict 3D keypoints offsets along the X and Y axes \(\hat{Y}_{xy}\in\mathbb{R}^{N_{kp}\times 2}\), 3D keypoints offsets along the Z axis \(\hat{Y}_{z}\in\mathbb{R}^{N_{kp}\times 1}\), and 3D keypoints visibilities \(\hat{Y}_{vis}\in\mathbb{R}^{N_{kp}}\). Furthermore, the point-wise internal features \(X^{{}^{\prime}}_{point}\) are processed by an MLP to estimate point-wise keypoint segmentation \(\hat{Y}_{kpseg}\in\mathbb{R}^{N_{max}\times(N_{kp}+1)}\).
For the final predictions, we combine the predicted 3D keypoints offsets \(\hat{Y}_{xy}\), \(\hat{Y}_{z}\), and the predicted 3D keypoints visibilities \(\hat{Y}_{vis}\) to generate the human pose for each bounding box. Then we apply a reverse coordinate transformation to convert the predicted human pose from the local coordinate system to the global LiDAR coordinate system. Moreover, the predicted point-wise keypoint segmentation \(\hat{Y}_{kpseg}\) serves as an auxiliary task, aiding KPTR in learning point-wise local information and enhancing the regression of 3D keypoints through the attention mechanism. In the experiments section, we will demonstrate how this auxiliary task significantly enhances the overall performance of the model.
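The following hedged PyTorch sketch shows how the feature fusion and the KPTR stack described above could be wired together. The use of `nn.TransformerEncoder` as a stand-in for the L blocks of self-attention and feed-forward layers, the module names, and the default sizes (taken from the implementation details quoted later) are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn

class KPTRSketch(nn.Module):
    """Hedged sketch of the second stage; names and layouts are assumptions."""
    def __init__(self, c_pt=6, c_voxel=32, c_bev=512, c_comp=32,
                 c_tr=256, n_kp=14, n_layers=4, n_heads=8):
        super().__init__()
        # c_pt = 3 local coordinates + C_point extra point features
        self.box_mlp = nn.Sequential(nn.Linear(5 * c_bev, c_comp), nn.ReLU())
        self.in_proj = nn.Linear(c_pt + c_voxel + c_comp, c_tr)   # "simple matrix multiplication"
        self.kp_query = nn.Parameter(torch.randn(n_kp, c_tr))     # learnable keypoint queries
        layer = nn.TransformerEncoderLayer(c_tr, n_heads, dim_feedforward=256, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head_xy = nn.Linear(c_tr, 2)           # keypoint offsets along x, y
        self.head_z = nn.Linear(c_tr, 1)            # keypoint offset along z
        self.head_vis = nn.Linear(c_tr, 1)          # keypoint visibility logit
        self.head_seg = nn.Linear(c_tr, n_kp + 1)   # auxiliary point-wise segmentation

    def forward(self, p_point, p_voxel, box_feat):
        # p_point: (M, N_max, c_pt), p_voxel: (M, N_max, c_voxel), box_feat: (M, 5*c_bev)
        M, n_max, _ = p_point.shape
        b = self.box_mlp(box_feat).unsqueeze(1).expand(-1, n_max, -1)     # replicate box features
        x_point = self.in_proj(torch.cat([p_point, p_voxel, b], dim=-1))  # fused point tokens
        tokens = torch.cat([self.kp_query.unsqueeze(0).expand(M, -1, -1), x_point], dim=1)
        out = self.encoder(tokens)                                        # joint self-attention
        n_kp = self.kp_query.shape[0]
        kp, pts = out[:, :n_kp], out[:, n_kp:]
        return self.head_xy(kp), self.head_z(kp), self.head_vis(kp).squeeze(-1), self.head_seg(pts)
```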
### Training and Losses
During the training phase, we replace the predicted bounding boxes with ground truth bounding boxes that include 3D keypoints labels. This substitution is necessary since only a limited number of ground truth boxes are annotated with 3D keypoints labels. By employing this approach, we simplify and expedite the training process. Additionally, inspired by [18], we introduce a point-wise segmentation task for keypoints as an auxiliary task to improve the performance of 3D keypoints regression. The pseudo segmentation labels \(Y_{kpseg}\in\mathbb{R}^{N_{max}\times(N_{kp}+1)}\) are generated by assigning each 3D keypoint's type to its top K nearest points. This auxiliary task is supervised using cross-entropy loss, expressed as \(\mathcal{L}_{kpseg}\).
To facilitate the 3D keypoints regression, we divide it into two branches: one for the regression over the X and Y axes and another for the regression over the Z axis. This division is based on our observation that predicting the offset along the Z axis is comparatively easier than predicting it along the X and Y axes. We employ smooth L1 loss to supervise these regression branches, denoting them as \(\mathcal{L}_{xy}\) and \(\mathcal{L}_{z}\). Note that only the visible 3D keypoints contribute to the regression losses. In addition, we treat the visibility of the keypoints as a binary classification problem. We supervise it using binary cross-entropy loss as \(\mathcal{L}_{vis}\).
Our first stage LiDARMultiNet is pretrained following instructions in [14] and frozen during the 3D keypoints' training phase. We introduce weight factors for each loss component, and our final loss function is formulated as follows:
\[\mathcal{L}_{total}=\lambda_{1}\mathcal{L}_{xy}+\lambda_{2}\mathcal{L}_{z}+ \lambda_{3}\mathcal{L}_{vis}+\lambda_{4}\mathcal{L}_{kpseg} \tag{1}\]
where \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\), \(\lambda_{4}\) are weight factors and fixed at values of 5, 1, 1, and 1, respectively.
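A minimal sketch of how the total loss in Eq. (1) could be assembled is given below, assuming logits for the visibility and segmentation heads and binary ground-truth visibility; the tensor layouts and the function name are illustrative assumptions.

```python
import torch.nn.functional as F

def lpformer_loss(pred_xy, pred_z, pred_vis, pred_seg, gt_kp, gt_vis, gt_seg,
                  weights=(5.0, 1.0, 1.0, 1.0)):
    """Sketch of Eq. (1). gt_kp: (M, N_kp, 3); gt_vis: (M, N_kp) in {0, 1};
    gt_seg: (M, N_max) integer labels in {0, ..., N_kp}."""
    vis_mask = gt_vis.bool()
    # regression losses only over visible keypoints
    l_xy = F.smooth_l1_loss(pred_xy[vis_mask], gt_kp[..., :2][vis_mask])
    l_z = F.smooth_l1_loss(pred_z[vis_mask].squeeze(-1), gt_kp[..., 2][vis_mask])
    l_vis = F.binary_cross_entropy_with_logits(pred_vis, gt_vis.float())
    l_seg = F.cross_entropy(pred_seg.flatten(0, 1), gt_seg.flatten())
    w1, w2, w3, w4 = weights
    return w1 * l_xy + w2 * l_z + w3 * l_vis + w4 * l_seg
```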
## 4 Experiments
### Dataset
The Waymo Open Dataset released human keypoint annotations with the v1.3.2 release, which contains LiDAR range images and associated camera images. We use v1.4.2 for training and validation. The 14 keypoint classes used for evaluation are defined as nose, left shoulder, left elbow, left wrist, left hip, left knee, left ankle, right shoulder, right elbow, right wrist, right hip, right knee, right ankle, and head center. There are 144,709 objects with 2D keypoint annotations but only 8,125 objects with 3D keypoint annotations in the training dataset.
### Metrics
We use mean per-joint position error (MPJPE) and Pose Estimation Metric (PEM) as the metrics to evaluate our method. In MPJPE, the visibility of predicted joint \(i\) of one human keypoint set \(j\) is represented by \(v_{i}^{j}\in[0,1]\), indicating whether there is a ground truth for it. As such, the MPJPE over the whole dataset is:
\[\mathbf{MPJPE}(Y,\hat{Y})=\frac{1}{\sum_{i,j}v_{i}^{j}}\sum_{i,j}v_{i}^{j}||Y_ {i}^{j}-\hat{Y}_{i}^{j}||_{2}, \tag{2}\]
where \(Y\) and \(\hat{Y}\) are the ground truth and predicted 3D coordinates of keypoints.
PEM is a new metric created specifically for the Pose Estimation challenge. Besides keypoint localization error and visibility classification accuracy, it is also sensitive to the rates of false positive and negative object detections, while remaining insensitive to the Intersection over Union (IoU) of object detections. PEM is calculated as a weighted sum of the MPJPE over visible matched keypoints and a penalty for unmatched keypoints, as shown:
\[\mathbf{PEM}(Y,\hat{Y})=\frac{\sum_{i\in M}\left\|y_{i}-\hat{y}_{i}\right\|_{2 }+C|U|}{|M|+|U|}, \tag{3}\]
where \(M\) is the set of indices of matched keypoints, \(U\) is the set of indices of unmatched keypoints, and \(C=0.25\) is a constant penalty for unmatched keypoints. The PEM ensures accurate, robust ranking of model performance in a competition setting.
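For reference, the two metrics can be computed as in the small NumPy sketch below; the matching of predicted to ground-truth objects that defines the sets \(M\) and \(U\) in Eq. (3) is assumed to have been done beforehand.

```python
import numpy as np

def mpjpe(gt, pred, vis):
    """Eq. (2): gt, pred of shape (J, K, 3); vis of shape (J, K), 1 where a GT keypoint exists."""
    err = np.linalg.norm(gt - pred, axis=-1)
    return (vis * err).sum() / vis.sum()

def pem(matched_err, num_unmatched, c=0.25):
    """Eq. (3): matched_err is a 1-D array of errors over matched, visible keypoints;
    num_unmatched counts unmatched keypoints (false positives and false negatives)."""
    return (matched_err.sum() + c * num_unmatched) / (matched_err.size + num_unmatched)
```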
### Implementation Details
Throughout all our experiments, we use a pretrained LidarMultiNet [14] as the first stage of our framework, which remains frozen during the training phase of the second stage. For additional network and training specifics regarding our first stage, please refer to LidarMultiNet [14].
Regarding KPTR, the dimensions of the inputs, namely \(C_{point}\), \(C_{voxel}\), and \(C_{BEV}\), are set to 3, 32, and 512, respectively. The size of the compressed features, denoted as \(C_{compressed}\), is 32. We cap the maximum number of points per bounding box at 1024. For the transformer architecture, similar to the recent work [16], we utilize \(L=4\) stages, an embedding size of \(C_{tr}=256\), a feed-forward network with internal channels of 256, and 8 heads for the MultiHeadAttention layer. The total number of 3D keypoints \(N_{kp}\) is 14.
During training, we incorporate various data augmentations, including standard random flipping, global scaling, rotation, and translation. It is important to note that flipping the point clouds has an impact on the relationships between the 3D keypoints annotations, similar to the mirror effect. When performing a flip over the X-axis or Y-axis, the left parts of the 3D keypoints should be exchanged with the right parts of the 3D keypoints accordingly.
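For instance, a mirror flip with the required left/right keypoint swap could look like the following sketch. The keypoint indices follow the order listed in Sec. 4.1, while the choice of which coordinate is negated is an illustrative assumption.

```python
import numpy as np

# Keypoint order from Sec. 4.1: 0 nose, 1-6 left shoulder/elbow/wrist/hip/knee/ankle,
# 7-12 right shoulder/elbow/wrist/hip/knee/ankle, 13 head center.
LEFT = [1, 2, 3, 4, 5, 6]
RIGHT = [7, 8, 9, 10, 11, 12]

def mirror_flip(points, keypoints):
    """Mirror the scene across one axis (here: negate y) and swap the left/right
    keypoint labels to preserve the semantics of the annotation (mirror effect).
    points: (N, 3 + C), keypoints: (14, 3)."""
    points = points.copy()
    keypoints = keypoints.copy()
    points[:, 1] *= -1.0
    keypoints[:, 1] *= -1.0
    keypoints[LEFT + RIGHT] = keypoints[RIGHT + LEFT]  # exchange left and right parts
    return points, keypoints
```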
To train our model, we use the AdamW optimizer along with the one-cycle learning rate scheduler for a total of 20 epochs. The training process utilizes a maximum learning rate of 3e-3, a weight decay of 0.01, and a momentum ranging from 0.85 to 0.95. All experiments are conducted on 8 Nvidia A100 GPUs, with a batch size set to 16.
### Main Pose Estimation Results
In our final submission to the leaderboard, we trained our model using the combined dataset of Waymo's training and validation splits. The results, presented in Table 1, demonstrate the impressive performance of our LPFormer, achieving a PEM of 0.1524, an MPJPE of 0.0594, and ranking 1st on the leaderboard. Notably, our LPFormer outperforms all other methods across all categories in terms of both PEM and MPJPE.
### Ablation Study
To conduct a comprehensive performance analysis of our LPFormer, we compare it with other SOTA methods, as shown in Table 2. It is important to note that all previous methods were evaluated on a subset of the WOD validation split. Additionally, these methods simplify the problem by providing ground truth 3D bounding boxes along with associated ground truth 2D bounding boxes as inputs. Despite some of these methods incorporating camera and LiDAR fusion or 2D weakly supervision, our LPFormer outperforms them all in terms of MPJPE, achieving an impressive MPJPE of 6.16cm.
Table 3 compares the performance of the first stage and LPFormer, and shows the contribution of each component of the second stage to the overall performance. The first-stage results are output directly from the Center Head following the BEV feature map. Since the BEV feature map is primarily driven by the detection task and has low resolution, it lacks fine-grained features, resulting in mediocre performance. The second stage, which is similar to the second-stage refinement module in LidarMultiNet [14], significantly improves performance by introducing point-wise fine-grained features. Further gains are achieved by adding the keypoint segmentation auxiliary task, employing the transformer structure, and incorporating box features, each of which contributes to the model's performance to a varying degree.
The input is solely 3D LiDAR point clouds. Remarkably, our network simultaneously outputs 3D semantic segmentation results, 3D bounding boxes, and their 3D keypoints (red) along with the corresponding wireframes (yellow) for visualization. Our model also predicts visibility: for example, the left knee of the second person from the left is predicted as invisible, while the left foot is visible. Both feet of the third person from the right are predicted as invisible. The right elbow of the sixth person from the right is predicted as invisible, whereas the right hand is visible.
Figure 4 presents a selection of predictions made on the validation set. From left to right, the three columns represent ground truths, the predictions of the 1st stage, and
Figure 4: Prediction results compared to the Ground Truth and the 1st stage results.
the predictions of LPFormer, respectively. Each row showcases the same group of objects. As can be observed, across all three groups, the performance of LPFormer noticeably surpasses that of the 1st stage output. The first row highlights a cyclist for whom ground-truth annotations are extremely limited. Despite the limited amount of annotations, LPFormer still manages to deliver meaningful output. In the second row, LPFormer is strikingly close to the ground truth, with the exception of an FN (False Negative) visibility for the right hand of the pedestrian on the left. The third row demonstrates that even for the pedestrian without ground-truth annotations, LPFormer still produces satisfactory results. For the running pedestrian on the right, LPFormer performs well. However, the left pedestrian's head center is an FP (False Positive) case, and the crossed-hands pose is a difficult case given the small amount of similar ground-truth annotations available.
Figure 5 demonstrates the model's performance in pedestrian-rich scenarios, as the PEM metric is sensitive to both false positive and false negative object detections. In these scenarios, the restriction on a 25m detection range has been eliminated, while the detection score threshold and IoU threshold have been maintained. It is evident that the model can detect more distant pedestrians and provide keypoints predictions. However, it is noted that the visibility for distant pedestrians decreases, which is reasonable as the point clouds in the distance tend to be more sparse and prone to occlusion.
## 5 Conclusion
In the 2023 Waymo Open Dataset Pose Estimation challenge, our proposed LPFormer secured 1st place. As future work, we plan to further enhance LPFormer through broader integration and fusion of LiDAR and camera data, as well as by exploiting 2D weak supervision.
|
2306.05059 | Reconciling Predictive and Statistical Parity: A Causal Approach | Since the rise of fair machine learning as a critical field of inquiry, many
different notions on how to quantify and measure discrimination have been
proposed in the literature. Some of these notions, however, were shown to be
mutually incompatible. Such findings make it appear that numerous different
kinds of fairness exist, thereby making a consensus on the appropriate measure
of fairness harder to reach, hindering the applications of these tools in
practice. In this paper, we investigate one of these key impossibility results
that relates the notions of statistical and predictive parity. Specifically, we
derive a new causal decomposition formula for the fairness measures associated
with predictive parity, and obtain a novel insight into how this criterion is
related to statistical parity through the legal doctrines of disparate
treatment, disparate impact, and the notion of business necessity. Our results
show that through a more careful causal analysis, the notions of statistical
and predictive parity are not really mutually exclusive, but complementary and
spanning a spectrum of fairness notions through the concept of business
necessity. Finally, we demonstrate the importance of our findings on a
real-world example. | Drago Plecko, Elias Bareinboim | 2023-06-08T09:23:22Z | http://arxiv.org/abs/2306.05059v2 | # Reconciling Predictive and Statistical Parity:
###### Abstract
Since the rise of fair machine learning as a critical field of inquiry, many different notions on how to quantify and measure discrimination have been proposed in the literature. Some of these notions, however, were shown to be mutually incompatible. Such findings make it appear that numerous different kinds of fairness exist, thereby making a consensus on the appropriate measure of fairness harder to reach, hindering the applications of these tools in practice. In this paper, we investigate one of these key impossibility results that relates the notions of statistical and predictive parity. Specifically, we derive a new causal decomposition formula for the fairness measures associated with predictive parity, and obtain a novel insight into how this criterion is related to statistical parity through the legal doctrines of disparate treatment, disparate impact, and the notion of business necessity. Our results show that through a more careful causal analysis, the notions of statistical and predictive parity are not really mutually exclusive, but complementary and spanning a spectrum of fairness notions through the concept of business necessity. Finally, we demonstrate the importance of our findings on a real-world example.
## 1 Introduction
As society increasingly relies on AI-based tools, an ever larger number of decisions that were once made by humans are now delegated to automated systems, and this trend is likely to only accelerate in the coming years. Such automated systems may exhibit discrimination based on gender, race, religion, or other sensitive attributes, as witnessed by various examples in criminal justice [1], facial recognition [13; 5], targeted advertising [12], and medical treatment allocation [19], to name a few.
In light of these challenges, a large amount of effort has been invested in attempts to detect and quantify undesired discrimination based on society's current ethical standards, and then design learning methods capable of removing possible unfairness from future predictions and decisions. During this process, many different notions on how to quantify discrimination have been proposed. In fact, the current literature is abundant with different fairness metrics, some of which are mutually incompatible [9]. The incompatibility of these measures can create a serious obstacle for practitioners since choosing among them, even for the system designer, is usually a non-trivial task.
In the real world, issues of discrimination and unfairness are analyzed through two major legal doctrines. The first one is _disparate treatment_, which enforces the equality of treatment of different groups and prohibits the use of the protected attribute (e.g., race) during the decision process. One of the legal formulations for showing disparate treatment is that "a similarly situated person who is not a member of the protected class would not have suffered the same fate" [3]. Disparate treatment is commonly associated with the notion of direct effects in the causal literature. The second doctrine is known as _disparate impact_ and focuses on _outcome fairness_, namely, the equality of outcomes among protected groups. Discrimination through disparate impact occurs if a facially neutral practice has an adverse impact on members of the protected group, including cases where discrimination is
unintended or implicit. In practice, the law may not necessarily prohibit the usage of all characteristics correlated with the protected attribute due to their relevance to the business itself, which is legally known as "business necessity" (labelled BN from now on) or "job-relatedness". Therefore, some of the variables may be used to distinguish between individuals, even if they are associated with the protected attribute [14]. From a causal perspective, disparate impact is realized through indirect forms of discrimination, and taking into account BN considerations is the essence of this doctrine [3].
We now note how BN requirements span a range of fairness notions between statistical (SP) and predictive parity (PP). Consider a set of causal pathways between the attribute \(X\) and the predictor \(\widehat{Y}\), labeled \(\mathcal{C}_{1},\ldots,\mathcal{C}_{k}\). For example, these pathways could represent the direct, indirect, and spurious effects of \(X\) on \(\widehat{Y}\), or more generally any set of path-specific effects. For each \(\mathcal{C}_{i}\), the system designer needs to decide whether the causal pathway in question is considered discriminatory (i.e., _not_ in the BN set), or if it is considered non-discriminatory (i.e., in the BN set), as shown in Fig. 1 left. If \(\mathcal{C}_{i}\) is not in the BN set, then the causal effect transmitted along this pathway should equal zero, written \(\mathcal{C}_{i}(X,\widehat{Y})=0\) (which is the main focus of previous works on path-specific counterfactual fairness [22; 7]). However, if \(\mathcal{C}_{i}\) is in the BN set, then the transmitted causal effect does not need to equal \(0\), i.e., \(\mathcal{C}_{i}(X,\widehat{Y})\neq 0\) may be allowed. Interestingly, the transmitted causal effect should not take an arbitrary value in this case. Rather, it should equal the _transmitted effect from \(X\) to the original outcome \(Y\) (observed in the real world) along the same pathway_, written as \(\mathcal{C}_{i}(X,\widehat{Y})=\mathcal{C}_{i}(X,Y)\). Therefore, considerations of BN can be summarized as a \(0/1\) vector of length \(k\), where each pathway \(\mathcal{C}_{i}\) is represented by an entry. As we demonstrate both intuitively and formally throughout the paper, the choice of the BN set being empty, written \((0,\ldots,0)\), will ensure the notion of statistical parity. The choice of the BN set that includes all pathways, written \((1,\ldots,1)\), will lead to predictive parity. Crucially, various intermediate fairness notions between the two ends of the spectrum are possible, depending on what is allowed or not; for an illustration, see Fig. 1 (right side).
The unification of the principles behind statistical and predictive parity through the concept of BN has the potential to bridge the gap between the two notions and improve the current state-of-the-art, by providing an argument against the seemingly discouraging impossibility result between the two notions. The practitioner is no longer faced with a false dichotomy of choosing between statistical or predictive parity but rather faces a spectrum of different fairness notions determined by the choice of the business necessity set, which is usually fixed through societal consensus and legal requirements.
### Organization & Contributions
In Sec. 2, we introduce important preliminary notions in causal inference, and the formal tools for specifying causal pathways \(\mathcal{C}_{1},\ldots,\mathcal{C}_{k}\) described above. Further, we discuss the notions of statistical and predictive parity, together with the impossibility result that separates them. In Sec. 2.2 we discuss how different causal variations with the statistical parity measure can be disentangled using an additive decomposition [23]. In Sec. 3, we develop a novel decomposition of the predictive parity measure in terms of the underlying causal mechanisms and discuss how this notion is in fact complementary to statistical parity through the concept of business necessity. In Sec. 3.1, we unify the theoretical findings by introducing a formal procedure that shows how to assess the legal doctrines of discrimination by leveraging both the concepts of causal predictive parity and causal statistical parity. In Sec. 4, we apply our approach in the context of criminal justice using the COMPAS dataset [1], and demonstrate empirically the trade-off between SP and PP. Our key formal contributions are the following:
* We develop the first non-parametric decomposition of the predictive parity measure in terms of the underlying causal mechanisms (Thm. 1).
* Building on the previous result, we define a natural notion of causal predictive parity (Def. 5). We then develop a procedure (Alg. 1) for evaluating if a classifier satisfies the desired notions of causal statistical parity and causal predictive parity, which provides a unified framework
Figure 1: BN specification (left) and the spectrum between statistical and predictive parity (right).
for incorporating desiderata from both predictive and statistical parity and sheds light on the impossibility theorem that relates the two notions.
## 2 Background
We use the language of structural causal models (SCMs) as our basic semantical framework [15]. A structural causal model (SCM) is a tuple \(\mathcal{M}:=\langle V,U,\mathcal{F},P(u)\rangle\), where \(V\), \(U\) are sets of endogenous (observables) and exogenous (latent) variables, respectively, \(\mathcal{F}\) is a set of functions \(f_{V_{i}}\), one for each \(V_{i}\in V\), where \(V_{i}\gets f_{V_{i}}(\mathrm{pa}(V_{i}),U_{V_{i}})\) for some \(\mathrm{pa}(V_{i})\subseteq V\) and \(U_{V_{i}}\subseteq U\). \(P(u)\) is a strictly positive probability measure over \(U\). Each SCM \(\mathcal{M}\) is associated to a causal diagram \(\mathcal{G}\) [15] over the node set \(V\), where \(V_{i}\to V_{j}\) if \(V_{i}\) is an argument of \(f_{V_{j}}\), and \(V_{i}\leftrightarrow V_{j}\) if the corresponding \(U_{V_{i}},U_{V_{j}}\) are not independent. An instantiation of the exogenous variables \(U=u\) is called a _unit_. By \(Y_{x}(u)\) we denote the potential response of \(Y\) when setting \(X=x\) for the unit \(u\), which is the solution for \(Y(u)\) to the set of equations obtained by evaluating the unit \(u\) in the submodel \(\mathcal{M}_{x}\), in which all equations in \(\mathcal{F}\) associated with \(X\) are replaced by \(X=x\). Building on the notion of a potential response, one can further define the notions of counterfactual and factual contrasts, given by:
**Definition 1** (Contrasts [16]).: _Given an SCM \(\mathcal{M}\), a contrast \(\mathcal{C}\) is any quantity of the form_
\[\mathcal{C}(C_{0},C_{1},E_{0},E_{1})=\mathbb{E}\,[y_{C_{1}}\mid E_{1}]- \mathbb{E}\,[y_{C_{0}}\mid E_{0}], \tag{1}\]
_where \(E_{0},E_{1}\) are observed (factual) clauses and \(C_{0},C_{1}\) are counterfactual clauses to which the outcome \(Y\) responds. Furthermore, whenever_
* \(E_{0}=E_{1}\)_, the contrast_ \(\mathcal{C}\) _is said to be counterfactual;_
* \(C_{0}=C_{1}\)_, the contrast_ \(\mathcal{C}\) _is said to be factual._
For instance, the contrast \((C_{0}=\{x_{0}\},C_{1}=\{x_{1}\},E_{0}=\emptyset,E_{1}=\emptyset)\) corresponds to the _average treatment effect (ATE)_\(\mathbb{E}[y_{x_{1}}-y_{x_{0}}]\). Similarly, the contrast \((C_{0}=\{x_{0}\},C_{1}=\{x_{1}\},E_{0}=\{x_{0}\},E_{1}=\{x_{0}\})\) corresponds to the _effect of treatment on the treated (EIT)_\(\mathbb{E}[y_{x_{1}}-y_{x_{0}}\mid x_{0}]\). Many other important causal quantities can be represented as contrasts, as exemplified later on.
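As a toy illustration of Def. 1, the snippet below evaluates the two contrasts above by Monte Carlo in a small, fully specified linear SCM; the structural equations and coefficients are arbitrary choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# A toy linear SCM: X -> W -> Y and X -> Y, with independent exogenous noise.
u_x, u_w, u_y = rng.normal(size=(3, n))
x = (u_x > 0).astype(float)
w = 0.8 * x + u_w
y = 0.5 * x + 0.7 * w + u_y

def y_do(x_val):
    """Potential response Y_x(u) for every unit u (computable because the SCM is known)."""
    return 0.5 * x_val + 0.7 * (0.8 * x_val + u_w) + u_y

# Average treatment effect: contrast with C0={x0}, C1={x1}, E0=E1=empty.
ate = (y_do(1.0) - y_do(0.0)).mean()
# Effect of treatment on the treated: same counterfactual clauses, conditioned on the factual event {x0}.
ett = (y_do(1.0) - y_do(0.0))[x == 0].mean()
print(ate, ett)  # the two coincide here since there is no confounding between X and Y's noise
```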
Throughout this manuscript, we assume a specific cluster causal diagram \(\mathcal{G}_{\text{SFM}}\) known as the standard fairness model (SFM) [16] over endogenous variables \(\{X,Z,W,Y,\widehat{Y}\}\) shown in Fig. 3. The SFM consists of the following: _protected attribute_, labeled \(X\) (e.g., gender, race, religion), assumed to be binary; the set of _confounding_ variables \(Z\), which are not causally influenced by the attribute \(X\) (e.g., demographic information, zip code); the set of _mediator_ variables \(W\) that are possibly causally influenced by the attribute (e.g., educational level or other job-related information); the _outcome_ variable \(Y\) (e.g., GPA, salary); the _predictor_ of the outcome \(\widehat{Y}\) (e.g., predicted GPA, predicted salary). The SFM also encodes the assumptions typically used in the causal inference literature about the lack of hidden confounding1. We next introduce the key notions and results from the fair ML literature needed for our discussion.
Footnote 1: Partial identification techniques for bounding effects can be used for relaxing these assumptions [24].
### Statistical & Predictive Parity Notions
The notions of statistical parity and predictive parity are defined as follows2:
Figure 3: Standard Fairness Model with \(Z=\emptyset\) from Thm. 1, extended with the predictor \(\widehat{Y}\).
**Definition 2** (Statistical [11] and Predictive [8] Parity).: _Let \(X\) be the protected attribute, \(Y\) the true outcome, and \(\widehat{Y}\) the outcome predictor. The predictor \(\widehat{Y}\) satisfies statistical parity (SP) with respect to \(X\) if \(\widehat{Y}\bot\!\!\!\bot X\), and it satisfies predictive parity (PP) with respect to \(X\) if \(Y\bot\!\!\!\bot X\mid\widehat{Y}\)._ The corresponding statistical parity measure (SPM) and predictive parity measure (PPM) are given by

\[\text{SPM}_{x_{0},x_{1}}(\widehat{y})=P(\widehat{y}\mid x_{1})-P(\widehat{y}\mid x_{0}), \tag{2}\]

\[\text{PPM}_{x_{0},x_{1}}(y\mid\widehat{y})=P(y\mid x_{1},\widehat{y})-P(y\mid x_{0},\widehat{y}). \tag{3}\]
This result shows how we can disentangle direct, indirect, and spurious variations within the SPM. We emphasize the importance of this result in the context of assessing the legal doctrines of discrimination. If a causal pathway (direct, indirect, or spurious) does not lie in the business necessity set, then the corresponding counterfactual measure (Ctf-DE, IE, or SE) needs to equal \(0\). To formalize this notion, we can now introduce the criterion of _causal statistical parity_:
**Definition 4** (Causal Statistical Parity).: _We say that \(\widehat{Y}\) satisfies causal statistical parity with respect to the protected attribute \(X\) if_
\[\text{Ctf-DE}_{x_{0},x_{1}}(\widehat{y}\mid x_{0})=\text{Ctf-IE}_{x_{1},x_{0} }(\widehat{y}\mid x_{0})=\text{Ctf-SE}_{x_{1},x_{0}}(\widehat{y})=0. \tag{9}\]
In practice, causal statistical parity can be a strong requirement, but the notion can be easily relaxed to include only a subset of the Ctf-{DE, IE, or SE} measures, under BN requirements.
## 3 Predictive Parity Decomposition
After discussing the decomposition of the SPM, our aim is to obtain a causal understanding of the predictive parity criterion, \(Y\bot\!\!\!\bot X\mid\widehat{Y}\). To do so, we derive a formal decomposition result of the PP measure that involves both \(Y\) and \(\widehat{Y}\), shown in the following theorem:
**Theorem 1** (Causal Decomposition of Predictive Parity).: _Let \(\mathcal{M}\) be an SCM compatible with the causal graph in Fig. 3 (i.e., SFM with \(Z=\emptyset\)). Then, it follows that the \(\text{PPM}_{x_{0},x_{1}}(y\mid\widehat{y})=P(y\mid x_{1},\widehat{y})-P(y\mid x _{0},\widehat{y})\) can be decomposed into its causal and spurious anti-causal variations as:_
\[\text{PPM}_{x_{0},x_{1}}(y\mid\widehat{y})= P(y_{x_{1}}\mid x_{1},\widehat{y})-P(y_{x_{0}}\mid x_{1}, \widehat{y})+P(y_{x_{0}}\mid\widehat{y}_{x_{1}})-P(y_{x_{0}}\mid\widehat{y}_{ x_{0}}). \tag{10}\]
Thm. 1 offers a non-parametric decomposition result of the predictive parity measure that can be applied to any SCM compatible with the graph in Fig. 3. In Appendix A.1 we provide a proof of the theorem (together with the proof of Cor. 1 stated below), and in Appendix A.2 we perform an empirical study to verify the decomposition result of the theorem. For the additional special case of linear SCMs, the terms appearing in the decomposition in Eq. 10 can be computed explicitly:
**Corollary 1** (Causal Decomposition of Predictive Parity in the Linear Case).: _Under the additional assumptions that (i) the SCM \(\mathcal{M}\) is linear and \(Y\) is continuous; (ii) the learner \(\widehat{Y}\) is efficient, then_
\[\mathbb{E}\left(y_{x_{1}}\mid x_{1},\widehat{y}\right)-\mathbb{E }(y_{x_{0}}\mid x_{1},\widehat{y}) =\alpha_{XW}\alpha_{WY}+\alpha_{XY} \tag{11}\] \[\mathbb{E}(y_{x_{0}}\mid x_{1},\widehat{y}_{x_{1}})-\mathbb{E}( y_{x_{0}}\mid x_{1},\widehat{y}_{x_{0}}) =-(\alpha_{XW}\alpha_{WY}+\alpha_{XY}), \tag{12}\]
_where \(\alpha_{V_{i}V_{j}}\) is the linear coefficient between variables \(V_{i},V_{j}\)._
We now carefully unpack the key insight from Thm. 1. In particular, we showed that in the case of an SFM with \(Z=\emptyset\)4 the predictive parity measure can be written as:
Footnote 4: We remark that the essence of the argument is unchanged in the case with \(Z\neq\emptyset\), but handling this case limits the clarity of presentation.
\[\text{PPM}= \underbrace{P(y_{x_{1}}\mid x_{1},\widehat{y})-P(y_{x_{0}}\mid x_{1 },\widehat{y})}_{\text{Term (I) causal}}+\underbrace{P(y_{x_{0}}\mid\widehat{y}_{x_{1}})-P(y_{x_{0}}\mid \widehat{y}_{x_{0}})}_{\text{Term (II) reverse-causal spurious}}. \tag{13}\]
The first, causal term can be expanded as
\[P(y_{x_{1}}\mid x_{1},\widehat{y})-P(y_{x_{0}}\mid x_{1},\widehat{y})=\sum_{u }\big{[}\underbrace{y_{x_{1}}(u)-y_{x_{0}}(u)}_{\text{unit-level difference}}\big{]}\underbrace{P(u\mid x_{1},\widehat{y})}_{\text{posterior}}. \tag{14}\]
The expansion shows us that the term is a weighted average of unit-level differences \(y_{x_{1}}(u)-y_{x_{0}}(u)\). Each unit-level difference measures the causal effect of a transition \(x_{0}\to x_{1}\) on \(Y\). The associated weights are given by the posterior \(P(u\mid x_{1},\widehat{y})\), which determines the probability mass corresponding to a unit \(u\) within the set of all units compatible with \(x_{1},\widehat{y}\). Therefore, the term represents an average causal effect of \(X\) on \(Y\) for a specific group of units. Interestingly, for any set of units selected by \(x_{1},\widehat{y}\), the effect in Term (I) _does not depend on the constructed predictor \(\widehat{Y}\)_, but only on the underlying system, i.e., it is not under the control of the predictor designer. The additional linear
result in Cor. 1 may also help the reader ground this idea, since it shows that Term (I) indeed captures the causal effect, which, in the linear case, can be obtained using the path-analysis of [21].
To achieve the criterion \(\text{PPM}=0\), the second term needs to be exactly the reverse of the causal effect, captured by the spurious variations induced by changing \(\widehat{y}_{x_{0}}\to\widehat{y}_{x_{1}}\) in the selection of units. The second term, which is in the control of the predictor \(\widehat{Y}\) designer, needs to cancel out the causal effect measured by the first term for PPM to vanish. Therefore, we see that achieving predictive parity is about constructing \(\widehat{Y}\) in a way that reflects the causal effect of \(X\) on \(Y\), across various groups of units. This key observation motivates a novel definition that we call _causal predictive parity_:
**Definition 5** (Causal Predictive Parity).: _Let \(\widehat{Y}\) be a predictor of the outcome \(Y\), and let \(X\) be the protected attribute. Then, \(\widehat{Y}\) is said to satisfy causal predictive parity (CPP) with respect to a counterfactual contrast \((C_{0},C_{1},E,E)\) if_
\[\mathbb{E}[y_{C_{1}}\mid E]-\mathbb{E}[y_{C_{0}}\mid E]=\mathbb{E}\left[ \widehat{y}_{C_{1}}\mid E\right]-\mathbb{E}\left[\widehat{y}_{C_{0}}\mid E \right]. \tag{15}\]
_Furthermore, \(\widehat{Y}\) is said to satisfy CPP with respect to a factual contrast \((C,C,E_{0},E_{1})\) if_
\[\mathbb{E}[y_{C}\mid E_{1}]-\mathbb{E}[y_{C}\mid E_{0}]=\mathbb{E}\left[ \widehat{y}_{C}\mid E_{1}\right]-\mathbb{E}\left[\widehat{y}_{C}\mid E_{0}\right]. \tag{16}\]
The intuition behind the notion of causal predictive parity captures the intuition behind predictive parity. If a contrast \(\mathcal{C}\) describes some amount of variation in the outcome \(Y\), then it should describe the same amount of variation in the predicted outcome \(\widehat{Y}\). For any of the contrasts Ctf-{DE, IE, SE} corresponding to a causal pathway, causal predictive parity would require that \(\mathcal{C}(X,\widehat{Y})=\mathcal{C}(X,Y)\).
### Reconciling Statistical and Predictive Parity
We now tie the notions of statistical and predictive parity through the concept of _business necessity_. In particular, if a contrast \(\mathcal{C}\) is associated with variations that are not in the business necessity set, then the value of this contrast should be \(\mathcal{C}(X,\widehat{Y})=0\), following the intuition of causal statistical parity from Def. 4. However, if the variations associated with the contrast _are_ in the business necessity set, then the value of that contrast should be equal for the predictor to the value for the true outcome
\[\mathcal{C}(X,\widehat{Y})=\mathcal{C}(X,Y), \tag{17}\]
following the intuition of causal predictive parity. Combining these two notions through business necessity results in Alg. 1. The algorithm requires the user to compute the measures
\[\text{Ctf-{DE, IE, SE}}(y),\text{Ctf-{DE, IE, SE}}(\widehat{y}). \tag{18}\]
Importantly, under the SFM, these measures are _identifiable_ from observational data:
**Proposition 4**.: _Under the assumptions of the standard fairness model in Fig. 2, the causal measures of fairness Ctf-{DE, IE, SE}\((y)\), Ctf-{DE, IE, SE}\((\widehat{y})\) are identifiable from observational data, that is, they can be computed uniquely from the observational distribution \(P(V)\)._
The explicit identification expression for each of the measures is given in Appendix B. The above result guarantees that the procedure of Alg. 1 is applicable to practical data analysis, in a fully non-parametric nature.
Further, for each of the DE, IE, and SE effects, the user needs to determine whether the causal effect (CE) in question falls into the business necessity set. If yes, then the algorithm asserts that
\[\text{Ctf-CE}(y)=\text{Ctf-CE}(\widehat{y}). \tag{19}\]
In the other case, when the causal effect is not in the business necessity set, the algorithm asserts that
\[\text{Ctf-CE}(\widehat{y})=0. \tag{20}\]
We remark that Alg. 1 is written in its population level version, in which the causal effects are estimated perfectly with no uncertainty. In the finite sample case, one needs to perform hypothesis testing to see whether the effects differ. If one is also interested in constructing a new fair predictor before using Alg. 1 (instead of testing an existing one), one may use tools for causally removing discrimination, such as [7] or [17; 18].
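The decision rule can be summarized by the small sketch below. It is a population-level illustration of the procedure, with a tolerance standing in for the hypothesis tests required in the finite-sample case; the input format is an assumption, and the example numbers are the COMPAS point estimates reported in Sec. 4 (as proportions), with BN = {SE}.

```python
def check_fairness(ctf_y, ctf_yhat, bn_set, tol=1e-3):
    """Population-level sketch of the procedure described above.
    ctf_y / ctf_yhat: dicts with keys 'DE', 'IE', 'SE' holding the counterfactual
    measures for the true outcome Y and for the predictor; bn_set: the subset of
    {'DE', 'IE', 'SE'} declared business necessity."""
    report = {}
    for effect in ('DE', 'IE', 'SE'):
        if effect in bn_set:
            # causal predictive parity: the predictor must transmit the same
            # effect as the real-world outcome along this pathway
            report[effect] = abs(ctf_yhat[effect] - ctf_y[effect]) <= tol
        else:
            # causal statistical parity: no effect may be transmitted
            report[effect] = abs(ctf_yhat[effect]) <= tol
    return report

# Example with the COMPAS estimates from Sec. 4 (true outcome vs. Northpointe's predictor):
ctf_y = {'DE': -0.0008, 'IE': -0.0506, 'SE': -0.0317}
ctf_np = {'DE': 0.06, 'IE': -0.0773, 'SE': -0.0375}
print(check_fairness(ctf_y, ctf_np, bn_set={'SE'}, tol=0.03))
# -> {'DE': False, 'IE': False, 'SE': True}: DE and IE are violated, SE is not.
```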
## 4 Experiments
We now apply Alg. 1 to the COMPAS dataset [1], as described in the following example.
Courts in Broward County, Florida use machine learning algorithms, developed by Northpointe, to predict whether individuals released on parole are at high risk of re-offending within 2 years (\(Y\)). The algorithm is based on the demographic information \(Z\) (\(Z_{1}\) for gender, \(Z_{2}\) for age), race \(X\) (\(x_{0}\) denoting White, \(x_{1}\) Non-White), juvenile offense counts \(J\), prior offense count \(P\), and degree of charge \(D\).
We construct the standard fairness model (SFM) for this example, which is shown in Fig. 5. The bidirected arrow between \(X\) and \(\{Z_{1},Z_{2}\}\) indicates possible co-variations of race with age and sex, which may not be causal in nature5. Furthermore, \(\{Z_{1},Z_{2}\}\) are the confounders, not causally affected by race \(X\). The set of mediators \(\{J,P,D\}\), however, may be affected by race, due to an existing societal bias in policing and criminal justice. Finally, all of the above mentioned variables may influence the outcome \(Y\).
Footnote 5: The causal model is non-committal regarding the complex historical/social processes that lead to such co-variations.
Having access to data from Broward County, and equipped with Alg. 1, we want to prove that the recidivism predictions produced by Northpointe (labeled \(\widehat{Y}^{NP}\)) violate legal doctrines of anti-discrimination. Suppose that in an initial hearing, the Broward County district court determines that the direct and indirect effects are not in the business necessity set, while the spurious effect is. In words, gender (\(Z_{1}\)) and age (\(Z_{2}\)) are allowed to be used to distinguish between the minority and majority groups when predicting recidivism, while other variables are not. If Northpointe's predictions are found to be discriminatory, we are required by the court to produce better, non-discriminatory predictions.
In light of this information, we proceed as follows (see source code). We first obtain a causally fair predictor \(\widehat{Y}^{FP}\) using the fairadapt package, which sequentially performs optimal transport
to match conditional distributions between groups for each variable in the causal diagram. Then, we compute the counterfactual causal measures of fairness for the true outcome \(Y\), Northpointe's predictions \(\widehat{Y}^{NP}\), and the fair predictions \(\widehat{Y}^{FP}\) (see Fig. 5). For the direct effect, we have:
\[\text{Ctf-DE}_{x_{0},x_{1}}(y\mid x_{0}) =-0.08\%\pm 2.59\%, \tag{21}\] \[\text{Ctf-DE}_{x_{0},x_{1}}(\widehat{y}^{NP}\mid x_{0}) =6\%\pm 2.96\%,\] (22) \[\text{Ctf-DE}_{x_{0},x_{1}}(\widehat{y}^{FP}\mid x_{0}) =-0.72\%\pm 1.11\%. \tag{23}\]
The indicated 95% confidence intervals are computed using repeated bootstrap repetitions of the dataset. Since the direct effect is not in the business necessity set, Northpointe's predictions clearly violate the disparate treatment doctrine (green bar for the Ctf-DE measure in Fig. 5). Our predictions, however, do not exhibit a statistically significant direct effect of race on the outcome, so they do not violate the criterion (blue bar). Next, for the indirect effects, we obtain:
\[\text{Ctf-IE}_{x_{1},x_{0}}(y\mid x_{0}) =-5.06\%\pm 1.24\%, \tag{24}\] \[\text{Ctf-IE}_{x_{1},x_{0}}(\widehat{y}^{NP}\mid x_{0}) =-7.73\%\pm 1.53\%,\] (25) \[\text{Ctf-IE}_{x_{1},x_{0}}(\widehat{y}^{FP}\mid x_{0}) =-0.25\%\pm 1.98\%. \tag{26}\]
Once again, the indirect effect, which is not in the business necessity set, is different from \(0\) for the Northpointe's predictions (violating disparate impact, see green bar for Ctf-IE in Fig. 5), but not statistically different from \(0\) for our predictions (blue bar). Interestingly, the indirect effect is different from \(0\) for the true outcome (red bar), indicating a bias in the current real world. Finally, for the spurious effects, we obtain
\[\text{Ctf-SE}_{x_{1},x_{0}}(y) =-3.17\%\pm 1.53\%, \tag{27}\] \[\text{Ctf-SE}_{x_{1},x_{0}}(\widehat{y}^{NP}) =-3.75\%\pm 1.58\%,\] (28) \[\text{Ctf-SE}_{x_{1},x_{0}}(\widehat{y}^{FP}) =-2.75\%\pm 1.22\%. \tag{29}\]
Since the spurious effect is in the business necessity set and each confidence interval contains all three point estimates, no violations with respect to spurious effects are found. The estimated effects for the outcome \(Y,\widehat{Y}^{NP},\text{ and }\widehat{Y}^{FP}\) are also shown graphically in Fig. 5. We conclude that Northpointe's predictions \(\widehat{Y}^{NP}\) violate the legal doctrines of fairness, while our predictions \(\widehat{Y}^{FP}\) do not. Importantly, tying back to the original discussion that motivated our approach, Northpointe's predictions \(\widehat{Y}^{NP}\) are further away from statistical parity than our predictions \(\widehat{Y}^{FP}\) according to the SPM (see SPM column in Fig. 5), while at the same time better calibrated according to the integrated PP measure (iPPM) that averages the PP measures from Eq. 3 across different values of \(\widehat{y}\) (see iPPM column in the figure). This observation demonstrates the trade-off between statistical and predictive parity through business necessity.
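For completeness, one simple way to compute the two summary measures from binary data is sketched below; the exact weighting used for iPPM may differ from the paper's (an unweighted average over the observed values of \(\widehat{y}\) is assumed here).

```python
import numpy as np

def spm(yhat, x):
    """Statistical parity measure: P(yhat = 1 | x1) - P(yhat = 1 | x0)."""
    return yhat[x == 1].mean() - yhat[x == 0].mean()

def ippm(y, yhat, x):
    """Integrated predictive parity measure: average of
    P(y = 1 | x1, yhat) - P(y = 1 | x0, yhat) over the values of yhat."""
    vals = []
    for v in np.unique(yhat):
        m1 = (x == 1) & (yhat == v)
        m0 = (x == 0) & (yhat == v)
        if m1.any() and m0.any():
            vals.append(y[m1].mean() - y[m0].mean())
    return float(np.mean(vals))
```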
The choice of the business necessity set in the example above, \(\text{BN}=\{W\}\), was arbitrary. In general, the BN set can be any of \(\emptyset,Z,W,\{Z,W\}\), which is the setting we explore next.
Statistical vs. Predictive Parity Pareto Frontier.We now investigate other possible choices of the BN set, namely BN sets \(\emptyset,Z,W,\{Z,W\}\). Based on the theoretical analysis of Sec. 3, we expect that different choices of the BN set will lead to different trade-offs between SP and PP.
In particular, with the BN set being empty, any variation from \(X\) to \(Y\) would be considered discriminatory, so one would expect to be close to statistical parity. In contrast, if the BN set is \(\{Z,W\}\), then all variations (apart from the direct effect) are considered as non-discriminatory, so one would expect to be closer to satisfying predictive parity. The remaining options, \(\text{BN}{=}\)\(W\), and \(\text{BN}{=}\)\(Z\) should interpolate between these two extremes.
Based on the above intuition, we proceed as follows. For each choice of the business necessity set,
\[\text{BN}\in\{\emptyset,Z,W,\{Z,W\}\}, \tag{30}\]
we compute the adjusted version of the data again using the fairadapt package, with the causal effects in the complement of the BN set being removed. That is, if \(\text{BN}\) = \(W\), then our procedure removes the spurious effect, but keeps the indirect effect intact (other choices of the BN set are similar). Therefore, for each BN set, we obtain an appropriately adjusted predictor \(\widehat{Y}^{FP}_{\text{BN}}\), and in
particular compute the predictors \(\widehat{Y}_{\emptyset}^{FP},\widehat{Y}_{Z}^{FP},\widehat{Y}_{W}^{FP}\), and \(\widehat{Y}_{\{Z,W\}}^{FP}\). We note that the fair predictor \(\widehat{Y}^{FP}\) from the first part of the example is the predictor with the BN set equal to \(Z\), i.e., it corresponds to \(\widehat{Y}_{Z}^{FP}\). For each \(\widehat{Y}_{\text{BN}}^{FP}\), we in particular compute the SPM and iPPM measures, namely:
\[\text{SPM}_{x_{0},x_{1}}(\widehat{y}_{\text{BN}}^{FP}),\text{iPPM}_{x_{0},x_{ 1}}(\widehat{y}_{\text{BN}}^{FP}), \tag{31}\]
across 10 different repetitions of the adjustment procedure that yields \(\widehat{Y}_{\text{BN}}^{FP}\). For each business necessity set, this allows us to compute the SPM (measuring statistical parity), and iPPM (measuring predictive parity).
The results of the experiment are shown in Fig. 6, where the error bars indicate the standard deviation over different repetitions. As predicted by our theoretical analysis, the choice of BN\(=\emptyset\) yields the lowest SPM, but the largest iPPM. Conversely, BN\(=\{Z,W\}\) yields the lowest iPPM, but the largest SPM. The BN sets \(Z,W\) interpolate between the two notions, but the data indicates the spurious effect explained by \(Z\) does not have a major contribution. Fig. 6, therefore, shows a trade-off between statistical and predictive parity described through different business necessity options, and gives an empirical validation of the hypothesized spectrum of fairness notions in Fig. 1.
## 5 Conclusions
The literature in fair ML is abundant with fairness measures [9], many of which are mutually incompatible. Nonetheless, it is doubtful that each of these measures corresponds to a fundamentally different ethical conception of fairness. The multitude of possible approaches to quantifying discrimination makes the consensus on an appropriate notion of fairness unattainable. Further, the impossibility results between different measures may be discouraging to data scientists who wish to quantify and remove discrimination, but are immediately faced with a choice of which measure they wish to subscribe to.
In this work, we attempt to remedy a part of this issue by focusing on the impossibility of simultaneously achieving SP and PP. As our discussion shows, the guiding idea behind SP is that variations transmitted along causal pathways from the protected attribute to the predictor should equal \(0\), i.e., the decision should not depend on the protected attribute through the causal pathway in question (Def. 4). Complementary to this notion, and based on Thm. 1, the guiding principle behind PP is that variations transmitted along a causal pathway should be the same for the predictor as they are for the outcome _in the real world_ (Def. 5). SP will therefore be satisfied when the BN set is empty, while PP will be satisfied when the BN set includes all variations coming from \(X\) to \(Y\). The choice of the BN set interpolates between SP and PP, forming a spectrum of fairness notions (see Fig. 1), in a way that can be formally assessed based on Alg. 1.
Therefore, our work complements the previous literature by reconciling the impossibility result between SP and PP [4]. Furthermore, it complements the existing literature on path-specific notions of fairness [14; 22; 7], which does not consider the true outcome \(Y\) and the predictor \(\widehat{Y}\) simultaneously, and does not explicitly specify which levels of discrimination are deemed acceptable along causal pathways in the BN set. Finally, we also mention the work on counterfactual predictive parity (Ctf-PP) [10] that is similar in name to our notion of causal predictive parity, but is in fact a very different notion. Ctf-PP deals with the setting of decision-making and considers counterfactuals of the outcome \(Y\) with respect to a treatment decision \(D\) that precedes it, while our work considers counterfactuals of the outcome \(Y\) and the predictor \(\widehat{Y}\) with respect to the protected attribute \(X\), in the context of fair predictions, and thus offers a different line of reasoning.
Figure 6: SP vs. PP Pareto frontier on COMPAS. |
2307.15163 | Gravitational collapse in Quadratic Gravity | This study explores the gravitational collapse of a massless scalar field
within Quadratic Gravity treated as a dimension-four operator Effective Field
Theory extension to General Relativity. The additional degrees of freedom
associated with the higher derivatives in this theory are removed by an Order
Reduction approach, where the truncated expansion nature of the theory is
exploited. Through simulations, we find scenarios where solutions remain within
the bounds of the Effective Field Theory while displaying significant
deviations from General Relativity in the dynamics of curvature invariants
during the collapse. Limitations of the approach taken, the Effective Field
Theory approximation, and the appearance of instabilities are also discussed. | Ramiro Cayuso | 2023-07-27T19:41:33Z | http://arxiv.org/abs/2307.15163v1 | # Gravitational collapse in Quadratic Gravity
###### Abstract
This study explores the gravitational collapse of a massless scalar field within Quadratic Gravity treated as a dimension-four operator Effective Field Theory extension to General Relativity. The additional degrees of freedom associated with the higher derivatives in this theory are removed by an Order Reduction approach, where the truncated expansion nature of the theory is exploited. Through simulations, we find scenarios where solutions remain within the bounds of the Effective Field Theory while displaying significant deviations from General Relativity in the dynamics of curvature invariants during the collapse. Limitations of the approach taken, the Effective Field Theory approximation, and the appearance of instabilities are also discussed.
## I Introduction
Gravitational Wave (GW) Astronomy [1; 2] has emerged as an extraordinary tool for probing the nature of gravity via a channel and regimes that were inaccessible before its time. By collecting and analyzing gravitational wave data from current and future detectors, we will be able to test General Relativity (GR) [3] with scrutiny limited only by the reach and precision of our detectors, as well as the quality of our predictions. In the search for deviations from GR, the community has developed many alternative theories of gravity, for which substantial theoretical efforts have been placed into modeling and predictions. GW signals produced in compact binary mergers are arguably the best source to peer into GR and possible modifications in the most dynamical and strong regime. There are now several instances [4; 5; 6; 7; 8; 9; 10; 11] where full nonlinear numerical simulations of compact binary coalescence (and the prediction of their respective GW emissions) have been achieved in modified gravity candidate theories. Understanding how modifications in the underlying theory change predictions is essential in pushing our searches for such deviations in the data.
Of the proposed theories which could be tested through the observation of gravitational waves, there is great interest in those that fall under what is commonly called Effective Field Theory (EFT) extensions to GR [12; 13; 14; 15]. These theories are constructed by adding terms to the Einstein-Hilbert action formed from powers of curvature invariants that are adequately suppressed by powers of a given cut-off scale \(\Lambda\). The scale \(\Lambda\) is related to the mass of the heavy fields modifying the theory, which in the EFT description are integrated out. This method then describes a perturbative expansion consistent with the desired symmetries and assumptions, without introducing new light degrees of freedom. In recent years there have been several efforts [4; 16; 17; 18; 19] in modeling these theories and constraining the relevant parameters, such as the scale \(\Lambda\) at which modifications are introduced. These efforts have mainly focused on theories built using either six-dimensional or eight-dimensional operators, constructed from the contractions of three and four Riemann tensors, which are the leading and next-to-leading order operators in the absence of matter.
When matter is present, the leading order curvature operators in the EFT construction are dimension-four operators (\(R^{2}\), \(R_{ab}R^{ab}\) and \(R_{abcd}R^{abcd}\)). In this context, neutron star (NS) binaries become one of the most relevant scenarios. Modifications to GR may not only affect the dynamics during the inspiral and merger phases but the behavior and signatures of the merger remnant could also be highly altered. Given that these theories are constructed from powers of curvature invariants, it is natural that the effects of the modifications grow with the curvature, and small black holes (BHs) would give rise to the strongest effects. The merger of binary NSs [20] presents an ideal scenario for the formation of some of the smallest astrophysical black holes, with masses of approximately \(3M_{\odot}\). The post-merger dynamics of such an object could be one of the best windows to observe deviations from GR [21]. Exotic formation channels for smaller BHs could result in scenarios where such BHs interact with NSs in regimes of large spacetime curvature, where significant corrections could arise from these types of modifications to GR 1.
Footnote 1: See [22] for a study of a NS being consumed by a much less massive BH residing inside the star
The theory built from these four-dimensional operators is commonly called Quadratic Gravity [23], and there has been recent work performing fully nonlinear numerical simulations in spherical symmetry and very recently in the BH binary merger scenario [24; 25]. However, these works have focused on the vacuum scenario, most specifically in the Ricci-flat case, which, from the perspective of EFT, solutions and dynamics should be indistinguishable from GR.
This work explores the dynamics of this dimension-four operator EFT extension to GR in the presence of matter, where modifications should arise. For simplicity, the considered system has spherical symmetry, and we evolve the collapse of a massless minimally coupled scalar field into a BH. There are several objectives to this
work. First, we want to present an alternative approach to that of [24; 25], as well as incorporate matter into the system to study gravitational collapse. Second, we want to study how the modifying terms affect the dynamics of the system. Finally, we want to determine in what region of the parameter space the system stays within the EFT description, the simulations are well-behaved, and their predictions can be trusted.
The paper is structured as follows: In section II, the four-dimensional operator EFT, its action, and its corresponding field equations are presented. In section III, the evolution and constraint equations are presented, and the "Order Reduction" procedure is introduced to deal with the higher derivatives in such equations. Section IV contains detailed information about the target problem and setup, including the prescription for initial data, the numerical implementation, and relevant monitoring quantities. The main results of the paper are presented in Section V. A brief discussion on the observed results and future outlook can be found in Section VI. The appendices contain additional information regarding the convergence test and constraint violations observed in the simulations. The following notation is adopted: The beginning of the Latin alphabet \((a,b,c,d,...)\) will be used to denote full spacetime indices, while the Latin letters \((i,j,k,l...)\) will be used to indicate spatial ones. The \((-,+,+,+)\) signature is used, and the speed of light is set to \(c=1\).
## II Leading order EFT, non vacuum equations
The leading order terms in an EFT extension to GR, which introduce no new light degrees of freedom and satisfy parity symmetry, are the ones built with the dimension-four operator curvature invariants \(R^{2}\), \(R_{ab}R^{ab}\) and \(R_{abcd}R^{abcd}\). Using the fact that the Gauss-Bonnet invariant is topological in four spacetime dimensions, one can exclude the Riemann-squared term from the effective action. The effective action can be written as:
\[S_{\text{eff}}=\frac{1}{16\pi G}\int d^{4}x\,\sqrt{-g}\left(R-\frac{a_{1}}{ \Lambda^{2}}R_{ab}R^{ab}+\frac{a_{2}}{\Lambda^{2}}R^{2}+\cdots\right)\,, \tag{1}\]
where \(a_{1}\) and \(a_{2}\) are dimensionless coefficients and \(\Lambda\) has units of inverse length and determines the cut-off of the EFT. Notice that in the vacuum case, since \(R_{ab}=0+\mathcal{O}(1/\Lambda^{2})\), these terms would be pushed to higher orders of the perturbative scheme, and six-dimensional operators would dominate. This work includes matter in the form of a minimally coupled scalar field, so these terms are the leading order operators.
Upon variation of this action, the following field equations are obtained,
\[R_{ab}-\frac{1}{2}g_{ab}R+\frac{1}{2}\epsilon_{1}R_{cd}R^{cd}g_{ ab}+2\epsilon_{2}R_{ab}R-\frac{1}{2}\epsilon_{2}g_{ab}R^{2}\] \[-2\epsilon_{1}R^{cd}R_{acbd}+(\epsilon_{1}-2\epsilon_{2})\nabla_ {b}\nabla_{a}R-\epsilon_{1}\nabla^{2}R_{ab}\] \[-g_{ab}(\frac{1}{2}\epsilon_{1}-2\epsilon_{2})\nabla_{c}\nabla^{ c}R=8\pi T_{ab}, \tag{2}\] \[\nabla^{a}T_{ab}=0, \tag{3}\]
where \(\epsilon_{1}=a_{1}/\Lambda^{2}\), \(\epsilon_{2}=a_{2}/\Lambda^{2}\) (which will occasionally be called couplings) and \(T_{ab}\) is the usual energy-momentum tensor defined as,
\[T_{ab}=\nabla_{a}\phi\nabla_{b}\phi-\frac{1}{2}g_{ab}\nabla_{c}\phi\nabla^{c}\phi. \tag{4}\]
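As a concrete illustration of eq. (4), the following minimal sympy sketch (not taken from the paper; the restriction to a 2x2 \((t,r)\) metric block and the function names are illustrative assumptions) builds the scalar-field stress tensor from a metric and a field \(\phi(t,r)\). Since \(\phi\) is a scalar, its covariant derivatives reduce to partial derivatives, which is all the snippet relies on.

```python
import sympy as sp

t, r = sp.symbols('t r')
phi = sp.Function('phi')(t, r)
alpha = sp.Function('alpha')(t, r)   # lapse (illustrative name)
grr = sp.Function('g_rr')(t, r)      # radial metric component (illustrative name)

# restrict to the (t, r) block of a diagonal metric; enough to show the structure of eq. (4)
g = sp.diag(-alpha**2, grr)
ginv = g.inv()
coords = (t, r)

dphi = [sp.diff(phi, x) for x in coords]
kinetic = sum(ginv[c, d] * dphi[c] * dphi[d] for c in range(2) for d in range(2))

# T_ab = d_a(phi) d_b(phi) - (1/2) g_ab (grad phi)^2
T = sp.Matrix(2, 2, lambda a, b: dphi[a] * dphi[b] - sp.Rational(1, 2) * g[a, b] * kinetic)
print(sp.simplify(T[0, 0]))
```

The same construction extends directly to the full four-dimensional metric used later in Sec. IV.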
For convenience, equation (2) will be expressed as,
\[R_{ab}-\frac{1}{2}g_{ab}R=8\pi T_{ab}+M_{ab}, \tag{5}\]
where now \(M_{ab}\) encompasses all modifications to the equations. The \(M_{ab}\) tensor contains up to 4th-order derivatives of the metric; this sort of modification makes formulating a well-posed problem [26; 27] challenging, if not impossible, with the standard techniques.
## III Evolution equations and constraints
Before addressing the issues raised at the end of the previous section, the equations will be first expressed in a formulation that, in the absence of correcting terms, renders the problem well-posed. To this end, the Generalized Harmonic formulation [28; 29; 30] that is written in terms of the usual 3+1 variables [31] is adopted. Under this formulation, the full set of evolution equations and constraints are expressed as,
\[\partial_{\perp}\gamma_{ij}= -2\alpha K_{ij}, \tag{6a}\] \[\partial_{\perp}K_{ij}= \alpha\left[R_{ij}^{(3)}-2K_{ik}K_{j}^{k}-\widetilde{\pi}K_{ij} \right]-D_{i}D_{j}\alpha\] \[-\alpha D_{(i}\mathcal{C}_{j)}-\kappa\alpha\gamma_{ij}\mathcal{C }_{T}/2\] (6b) \[-8\pi G\alpha\left[S_{ij}-\gamma_{ij}(S-\rho)/2\right]\] \[-\alpha\left[S_{ij}^{M}-\gamma_{ij}(S^{M}-\rho^{M})/2\right],\] \[\partial_{\perp}\alpha= \alpha^{2}\widetilde{\pi}-\alpha^{2}H_{T},\] (6c) \[\partial_{t}\beta^{i}= \beta^{j}D_{j}\beta^{i}+\alpha^{2}\rho^{i}-\alpha D^{i}\alpha+ \alpha^{2}H^{i},\] (6d) \[\partial_{\perp}\widetilde{\pi}= -\alpha K_{ij}K^{ij}+D_{i}D^{i}\alpha+\mathcal{C}^{i}D_{i}\alpha -\kappa\alpha\mathcal{C}_{T}/2\] (6e) \[-4\pi G\alpha(\rho+S)-\frac{\alpha}{2}(\rho^{M}+S^{M}),\] \[\partial_{\perp}\rho^{i}= \gamma^{k\ell}\bar{D}_{k}\bar{D}_{\ell}\beta^{i}+\alpha D^{i} \widetilde{\pi}-\widetilde{\pi}D^{i}\alpha-2K^{ij}D_{j}\alpha\] \[+2\alpha K^{jk}\Delta\Gamma^{i}_{jk}+\kappa\alpha\mathcal{C}^{i}\] (6f) \[-16\pi G\alpha j^{i}-2\alpha j_{M}^{i},\]
with the constraints,
\[\mathcal{C}_{T} \equiv\widetilde{\pi}+K, \tag{7a}\] \[\mathcal{C}^{i} \equiv-\rho^{i}+\Delta\Gamma^{i}_{jk}\gamma^{jk},\] (7b) \[\mathcal{H} \equiv K^{2}-K_{ij}K^{ij}+R-16\pi G\rho-2\epsilon\rho^{M},\] (7c) \[\mathcal{M}_{i} \equiv D_{j}K^{j}_{i}-D_{i}K-8\pi Gj_{i}-\epsilon j_{i}^{M}, \tag{7d}\]
where \(K\equiv\gamma^{ij}K_{ij}\), and \(D_{i}\) and \(\bar{D}_{i}\) are the covariant derivatives for the three-metric \(\gamma_{ij}\) and the background 3-metric \(\bar{\gamma}_{ij}\), respectively. The derivative operator \(\partial_{\perp}\) is defined as \(\partial_{\perp}=\partial_{t}-\mathcal{L}_{\beta}\), where \(\mathcal{L}_{\beta}\) is the Lie derivative along the shift vector \(\beta^{i}\). We define \(\Delta\Gamma^{i}_{jk}:=\,^{(3)}\Gamma^{i}_{jk}-\,^{(3)}\bar{\Gamma}^{i}_{jk}\), where these are the Christoffel symbols of the induced metric and of the background metric (flat in spherical coordinates), respectively, and \(H_{T}:=H^{a}n_{a}\), where \(n_{a}\) is the normal vector to the spatial hypersurfaces defined by the spacetime foliation. The new dynamical variables \(\widetilde{\pi}\) and \(\rho^{i}\) are introduced through equations (6c)-(6d) to make the system (ignoring the extensions to gravity) first order in time derivatives. \(S_{ij}\), \(S\), \(\rho\) and \(j^{i}\) are the matter variables constructed from the energy-momentum tensor \(T_{ab}\) as \(S_{ij}=P^{a}_{i}P^{b}_{j}T_{ab}\), its trace \(S=\gamma^{ij}S_{ij}\), \(\rho=n_{a}n_{b}T^{ab}\), and \(j^{i}=-P^{ia}n_{b}T_{ab}\), where \(P^{ia}\) is the projection tensor onto the spatial hypersurface. The definitions of \(S^{M}_{ij}\), \(S^{M}\), \(\rho^{M}\) and \(j^{i}_{M}\) are analogous to those of the matter sources, but with \(T_{ab}\) replaced by \(M_{ab}\).
Let us now analyze the structure of the terms introduced by \(M_{ab}\), which modify Einstein's equations. These terms contain up to 4th-order time and spatial derivatives of metric components. In addition, they contain nonlinear combinations of derivatives that would make the usual hyperbolicity analysis [32] inapplicable. Furthermore, the constraint equations (7c)-(7d) contain time derivatives, which are not present in the Hamiltonian and Momentum constraints in GR. These sorts of issues are not uncommon when dealing with modified gravity theories; even Horndeski theories, which are second order in derivatives and incorporate a non-minimally coupled scalar field, suffer from pathologies that can render the problem of interest ill-posed [33, 34, 35, 36]. In those cases, after significant theoretical efforts, appropriate new gauges were formulated [37, 38] that ameliorate these issues to the point where nonlinear studies of compact binary mergers are possible [5, 6, 39, 40] for some regime of small coupling values. In the case of higher derivative extensions to GR, fully nonlinear evolution has been performed [4, 41] for an eight-dimensional operator EFT extension by controlling pathological higher frequencies via a "fixing" method [42, 43, 44, 45] that leaves the long wavelength physics unaltered.
Coming back to this paper's theory of interest, works like [24, 25] tackle these issues by re-writing the theory following the work of Noakes [46], in which the Ricci scalar and the traceless part of the Ricci tensor can be elevated to massive spin-0 and spin-2 fields and are evolved with equations derived directly from the field equations of the theory. With this prescription, they can verify numerical stability in the Ricci-flat subsector and confirm that it is indistinguishable from GR. However, an opposing view to this method can be formed from the perspective of EFT. The extra modes that this theory introduces and that this approach makes explicit have masses that are above the cut-off scale of the EFT; hence the dynamics of these modes should be irrelevant in the EFT regime2. Furthermore, depending on the signs and values of \(\epsilon_{1}\) and \(\epsilon_{2}\), these massive degrees of freedom can become tachyonic, which would take them outside the regime of applicability of the EFT. In contrast, this work, taking this intuition from EFT, will actively remove these extra degrees of freedom by eliminating the higher order time derivatives in the field equations via an "Order Reduction" [48] procedure 3. Proceeding as done in [41] (see Section II-C of that work for more details), one can use the evolution and constraint equations to 0th order in \(\epsilon_{1}\) and \(\epsilon_{2}\) to find expressions for higher order time and spatial derivatives of the metric components in terms of lower order derivatives.
Footnote 2: See [47] for a similar argument on the massive degrees of freedom in six-dimensional operators EFT.
Footnote 3: This “Order Reduction” approach is not to be confused with the “Order reduction” techniques used in [49, 50], where order-reducing refers to replacing some problematic terms and solving them iteratively/perturbatively.
Schematically,
\[\frac{\partial^{2}\mathbf{g}}{\partial t^{2}} =\mathbf{E}(\mathbf{g},\partial_{a}\mathbf{g},\partial_{i}^{2}\mathbf{g}) \tag{8}\] \[+\epsilon\mathbf{M}(\mathbf{g},\partial_{a}\mathbf{g},\partial_{a}^{2}\mathbf{g},\partial_{a}^{3}\mathbf{g},\partial_{a}^{4}\mathbf{g})+\mathcal{O}(\epsilon^{2}),\]
represents the evolution system of equations (6) written in terms of the variables \(\mathbf{g}=\{\gamma_{ij},\alpha,\beta\}\). Here \(\mathbf{E}\)
represents the GR terms, which depend only up to first-time derivatives and second spatial derivatives of \(\mathbf{g}\). \(\mathbf{M}\) represents the terms from the modified theory, which depend on up to fourth-order spacetime derivatives. Truncating (8) to order \(\mathcal{O}(\epsilon^{0})\)
\[\frac{\partial^{2}\mathbf{g}}{\partial t^{2}}=\mathbf{E}(\mathbf{g},\partial_{a}\mathbf{g}, \partial_{i}^{2}\mathbf{g})+\mathcal{O}(\epsilon), \tag{9}\]
and taking derivatives of it gives expressions for higher-than-second time derivatives of \(\mathbf{g}\) in terms of lower order derivatives. This way, (9) and its derivatives can be used to replace \(\{\partial_{a}^{2}\mathbf{g},\partial_{a}^{3}\mathbf{g},\partial_{a}^{4}\mathbf{g}\}\) in \(\mathbf{M}\), in favor of \(\widetilde{\mathbf{M}}\), to obtain redefinitions of (8) that are lower in time derivatives and valid to \(\mathcal{O}(\epsilon)\),
\[\begin{split}\frac{\partial^{2}\mathbf{g}}{\partial t^{2}}& =\mathbf{E}(\mathbf{g},\partial_{a}\mathbf{g},\partial_{i}^{2}\mathbf{g})\\ &+\epsilon\widetilde{\mathbf{M}}(\mathbf{g},\partial_{a}\mathbf{g}, \partial_{a}\partial_{i}\mathbf{g},\partial_{a}\partial_{i}^{2}\mathbf{g},\partial_{a}\partial_{i}^{3}\mathbf{g})+\mathcal{O}(\epsilon^{2}),\end{split} \tag{10}\]
This way, expressions for \(S_{ij}^{M}\), \(S^{M}\), \(\rho^{M}\) and \(j_{M}^{i}\), let us call them \(\widetilde{S_{ij}^{M}}\), \(\widetilde{S^{M}}\), \(\widetilde{\rho^{M}}\) and \(\widetilde{j_{M}^{i}}\), can be obtained, which no longer contain higher derivatives in time and which are valid to \(\mathcal{O}(\epsilon_{1})\) and \(\mathcal{O}(\epsilon_{2})\). Once all undesired time derivatives are eliminated, the constraint equations, which now only contain spatial derivatives, can be used to find expressions for some (not all) higher spatial derivatives of the metric components in terms of lower derivatives. In spherical symmetry, even though not all higher spatial derivative expressions are available through an order reduction of the constraints, this procedure is enough to eliminate all higher-than-second spatial derivatives of the metric components. During this procedure, one introduces higher-order spatial derivatives (up to third) of the scalar field \(\phi\). In a sense, all of the higher-order time and spatial derivatives of the gravity variables have been traded for 3rd derivatives of the scalar field. This is seen easily by noticing that this reduction of order is equivalent to replacing \(R_{ab}\) and \(R\) in terms of \(T_{ab}\) in all the \(\epsilon\)-proportional terms in (2). One could proceed as done in [41, 4] and control the higher frequencies via the "fixing" approach. One of the objectives of this work is to explore under what circumstances the system is well-behaved after performing the order reduction without attempting to control the higher frequencies.
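The substitution leading from (8) to (10) can be made concrete with a toy 1+1-dimensional example (the wave-equation form of \(\mathbf{E}\) and the single \(\epsilon\,\partial_{t}^{4}g\) correction are illustrative assumptions, not the actual equations of this work): the zeroth-order equation is used to trade the fourth time derivative for spatial derivatives, up to \(\mathcal{O}(\epsilon^{2})\) errors.

```python
import sympy as sp

t, x, eps = sp.symbols('t x epsilon')
g = sp.Function('g')(t, x)

# toy modified evolution equation (stand-in for eq. (8)): g_tt = g_xx + eps * g_tttt
g_tttt = sp.Derivative(g, (t, 4))
rhs_full = sp.Derivative(g, (x, 2)) + eps * g_tttt

# zeroth-order equation (stand-in for eq. (9)): g_tt = g_xx + O(eps)
g_tt_lowest = sp.Derivative(g, (x, 2))

# order reduction: g_tttt = d_t^2(g_tt) -> d_t^2(g_xx) = d_x^2(g_tt) -> d_x^4 g + O(eps)
g_tttt_reduced = sp.diff(g_tt_lowest, x, 2)          # = g_xxxx
rhs_reduced = rhs_full.subs(g_tttt, g_tttt_reduced)  # stand-in for eq. (10)

print(rhs_reduced.doit())
# Derivative(g(t, x), (x, 2)) + epsilon*Derivative(g(t, x), (x, 4))
```

In the actual system the same bookkeeping is applied to (6) with \(\mathbf{g}=\{\gamma_{ij},\alpha,\beta\}\), and the constraints are then used to remove the remaining higher spatial derivatives, as described above.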
## IV Target problem and setup
The objective is to study this theory and its equations in dynamical scenarios where nonlinearities are important. We want to explore in which regime of the parameter space one can carry out numerical evolution without instabilities. If such instabilities do appear, the objective is to assess whether this happens within the regime of applicability of the EFT. To this end, we evolve spacetimes consisting of an initial in-falling scalar Gaussian profile, ultimately collapsing into a BH. This work will avoid treating critical collapse [51], mainly because the EFT is doomed to be outside of its regime of validity during such a process.
Reducing the problem to spherical symmetry, the line element for this problem is given by,
\[\begin{split} ds^{2}&=(-\alpha^{2}+g_{rr}\beta^{2}) dt^{2}+2\beta g_{rr}drdt+g_{rr}dr^{2}\\ &+r^{2}g_{T}(d\theta^{2}+\sin^{2}\theta d\varphi^{2}),\end{split} \tag{11}\]
where \(\alpha\) is the lapse function, \(\beta\) is the radial component of the shift vector, and \(g_{rr}\) and \(g_{T}\) are the radial and angular components of the spatial metric \(\gamma_{ij}\).
The equations that arise from this ansatz contain factors of \(r^{-p}\), which lead to divergences at the origin \(r=0\). Using L'Hôpital's rule, one can carefully redefine the equations at the origin to avoid these coordinate singularities. This technique is essential when dealing with the high \(p\) exponents that corrections to GR introduce.
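A minimal numerical illustration of this regularization (an illustrative sketch, not the paper's implementation; it assumes the field entering the \(1/r\) term is regular and even in \(r\)): a term of the form \((1/r)\,\partial_{r}f\) is finite at \(r=0\), with limit \(f''(0)\) by L'Hôpital's rule, and can be evaluated on a grid that includes the origin as follows.

```python
import numpy as np

def one_over_r_times_dfdr(f, r):
    """Evaluate (1/r) df/dr on a uniform grid with r[0] = 0.

    Assumes f is regular and even in r (so f'(0) = 0); by L'Hopital's rule the
    r = 0 value is f''(0), approximated with the parity-symmetric stencil
    f''(0) ~ 2 (f[1] - f[0]) / dr^2.
    """
    dr = r[1] - r[0]
    dfdr = np.gradient(f, dr)
    out = np.empty_like(f)
    out[1:] = dfdr[1:] / r[1:]
    out[0] = 2.0 * (f[1] - f[0]) / dr**2
    return out

# quick check with f = r^2, for which (1/r) df/dr = 2 everywhere
r = np.linspace(0.0, 1.0, 101)
print(one_over_r_times_dfdr(r**2, r)[:3])   # approximately [2., 2., 2.]
```

The terms with higher powers of \(r^{-p}\) introduced by the corrections are handled in the same spirit, with correspondingly higher-order derivatives appearing in the regularized limit.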
### Initial data
Whether the scalar field collapses into a BH or bounces back to infinity depends on the properties of the initial profile of the field. All of this will be encoded in the initial data prescribed. In this section, we discuss how we construct initial data consistent with the constraints of the modified theory.
Starting from the conformal decomposition of the spatial metric as
\[\gamma_{ij}=\psi^{4}\tilde{\gamma}_{ij}, \tag{12}\]
where \(\psi\) is the conformal factor and \(\tilde{\gamma}_{ij}\) is the flat metric in spherical coordinates. With this choice, the _Hamiltonian Constraint_ takes the form,
\[8\nabla^{2}_{\textit{flat}}\psi+\psi^{5}(A_{ij}A^{ij}-\frac{2}{3}K^{2})+16 \pi\psi^{5}\rho+2\epsilon\psi^{5}\widetilde{\rho^{M}}=0, \tag{13}\]
where \(A_{ij}\) is the traceless part of the extrinsic curvature tensor \(K_{ij}\) and now the additional term \(2\psi^{5}\widetilde{\rho^{M}}\) contains the modifications to GR.
The _Momentum Constraint_ takes the form,
\[\nabla_{j}A^{ij}-\frac{2}{3}\nabla^{i}K-8\pi j^{i}-\epsilon\widetilde{j_{M}^{ i}}=0, \tag{14}\]
which includes the additional current-like term \(-\epsilon\widetilde{j_{M}^{i}}\). We take the extrinsic curvature to be traceless by setting the ansatz,
\[A_{ij}=\begin{pmatrix}K_{rr}&0&0\\ 0&-r^{2}\frac{K_{rr}}{2}&0\\ 0&0&-r^{2}\frac{K_{rr}\sin^{2}\theta}{2}\end{pmatrix}. \tag{15}\]
The expressions of the Hamiltonian and Momentum constraint under such ansatz read,
\[\begin{split}\frac{\partial^{2}\psi}{\partial r^{2}}& =-\frac{2}{r}\frac{\partial\psi}{\partial r}-\frac{3}{16}\frac{K_{rr}^{2}}{ \psi^{3}}-\pi\psi\left(\frac{\partial\phi}{\partial r}\right)^{2}-\pi\psi^{5} \Sigma^{2}\\ &-\frac{1}{4}\epsilon\psi^{5}\widetilde{\rho^{M}},\end{split} \tag{16}\]
\[\begin{split}\frac{\partial K_{rr}}{\partial r}& =-2\psi^{-1}K_{rr}\frac{\partial\psi}{\partial r}-\frac{3}{r}K_{rr}+8 \pi\psi^{4}\Sigma\frac{\partial\phi}{\partial r}\\ &+\epsilon\psi^{5}\widetilde{j_{M}^{\widetilde{\nu}}}.\end{split} \tag{17}\]
Notice that \(\widetilde{\rho^{M}}\) and \(\widetilde{j_{M}^{\widetilde{\nu}}}\) are the order reduced expressions that we obtained after the order reduction procedure, and when evaluated under this ansatz possess only up to first order derivatives of \(\psi\) and no derivatives of \(K_{rr}\). In this form, these equations can be integrated directly to find solutions once the scalar field initial data is specified and appropriate boundary conditions set. This technique was used in [41], as "order-reduced direct integration", to successfully construct BH initial data in spacetimes in the presence of a scalar field for an eight-dimensional operator EFT of GR.
#### ii.1.1 Scalar field
The initial scalar field is prescribed such that it is initially mostly in-falling towards the origin; this can be achieved by having a field of the form,
\[\phi(t,r)=\frac{\Phi(u)}{r}, \tag{18}\]
where \(u\equiv r+t\) and,
\[\Phi(u)=Au^{2}\exp\left(-\frac{(u-r_{c})^{2}}{\sigma^{2}}\right), \tag{19}\]
where \(A,r_{c}\) and \(\sigma\) are the amplitude, center, and width of the pulse respectively. Under this choice, the initial values of scalar field variables are given by,
\[\phi_{0}\equiv\phi(t=0,r)=Ar\exp\left(-\frac{(r-r_{c})^{2}}{\sigma^{2}}\right), \tag{20}\]
\[\Sigma(t=0,r)=\frac{\phi_{0}}{\alpha}\left(\beta\left(\frac{1}{r}-\frac{2(r- r_{c})}{\sigma^{2}}\right)-\left(\frac{2}{r}-\frac{2(r-r_{c})}{\sigma^{2}} \right)\right), \tag{21}\]
where \(\Sigma\) is defined as,
\[\Sigma(t,r)=\frac{1}{\alpha}\left(\beta\frac{\partial\phi}{\partial r}- \frac{\partial\phi}{\partial t}\right). \tag{22}\]
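For reference, the closed-form initial data (20)-(21) can be tabulated directly. The snippet below is an illustrative sketch that anticipates the initial gauge \(\alpha=1\), \(\beta=0\) adopted in the next subsection, in which case eq. (21) simplifies so that the apparent \(1/r\) factor cancels analytically and the expression is manifestly regular at the origin.

```python
import numpy as np

A, r_c, sigma = 0.0023, 10.0, 1.0       # pulse parameters used later in Sec. V

def initial_scalar_data(r):
    """phi_0 and Sigma(t=0) of eqs. (20)-(21), evaluated with alpha = 1, beta = 0.

    With that gauge choice eq. (21) reduces to
        Sigma_0 = -A exp(-(r - r_c)^2 / sigma^2) * (2 - 2 r (r - r_c) / sigma^2),
    which is finite at r = 0.
    """
    gauss = np.exp(-(r - r_c)**2 / sigma**2)
    phi0 = A * r * gauss
    sigma0 = -A * gauss * (2.0 - 2.0 * r * (r - r_c) / sigma**2)
    return phi0, sigma0

r = np.linspace(0.0, 200.0, 5001)
phi0, Sigma0 = initial_scalar_data(r)
```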
#### ii.1.2 Boundary conditions
To construct the initial data, boundary conditions for the fields must be prescribed. Regularity at the origin imposes \(\Omega(r=0)\equiv\partial_{r}\psi(r=0)=0\). For convenience, we can set \(K_{rr}=0\) at the origin. To determine the remaining condition on the \(\psi\) field we impose that the exterior boundary conditions should have the following form,
\[\left.\psi\right|_{r_{out}}=1+\frac{M}{2r_{out}}, \tag{23}\] \[\left.\frac{\partial\psi}{\partial r}\right|_{r_{out}}=-\frac{M}{ 2r_{out}^{2}}, \tag{24}\]
where \(r_{out}\) is the exterior grid boundary and \(M\) is the ADM mass (which will depend on the scalar field initial configuration). A way to achieve this is to perform a shooting procedure on the value of \(\psi(r=0)\) such that the integrated solution on the outer boundary satisfies,
\[\left.\psi\right|_{r_{out}}=1-r_{out}\left.\frac{\partial\psi}{\partial r} \right|_{r_{out}}, \tag{25}\]
which we achieve by implementing a Newton-Raphson method.
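The following is a sketch of this "order-reduced direct integration" plus shooting strategy. It keeps only the GR pieces of eqs. (16)-(17), i.e. it drops the \(\epsilon\)-proportional sources that the full scheme would evaluate from the order-reduced expressions, it starts the integration at a small radius \(r_{0}\) as a stand-in for regularity at the origin, and it enforces condition (25) with a Newton-Raphson iteration using a finite-difference derivative. The choices of \(r_{0}\), tolerances, and the use of scipy are implementation assumptions rather than the author's setup.

```python
import numpy as np
from scipy.integrate import solve_ivp

A, r_c, sigma = 0.0023, 10.0, 1.0
gauss  = lambda r: np.exp(-(r - r_c)**2 / sigma**2)
dphi0  = lambda r: A * gauss(r) * (1.0 - 2.0 * r * (r - r_c) / sigma**2)   # d(phi_0)/dr
Sigma0 = lambda r: -A * gauss(r) * (2.0 - 2.0 * r * (r - r_c) / sigma**2)  # eq. (21) with alpha=1, beta=0

def rhs(r, y):
    """GR part of eqs. (16)-(17); state y = (psi, omega = dpsi/dr, K_rr)."""
    psi, omega, krr = y
    domega = (-2.0 / r * omega - 3.0 / 16.0 * krr**2 / psi**3
              - np.pi * psi * dphi0(r)**2 - np.pi * psi**5 * Sigma0(r)**2)
    dkrr = -2.0 * krr * omega / psi - 3.0 / r * krr + 8.0 * np.pi * psi**4 * Sigma0(r) * dphi0(r)
    return [omega, domega, dkrr]

r0, rout = 1.0e-6, 200.0

def residual(psi0):
    """Mismatch of the outer boundary condition (25) for a given central value psi(0)."""
    sol = solve_ivp(rhs, (r0, rout), [psi0, 0.0, 0.0], rtol=1e-10, atol=1e-12)
    psi_out, omega_out, _ = sol.y[:, -1]
    return psi_out - 1.0 + rout * omega_out

psi0, h = 1.0, 1.0e-7
for _ in range(20):                       # Newton-Raphson shooting on psi(0)
    f = residual(psi0)
    if abs(f) < 1e-12:
        break
    df = (residual(psi0 + h) - f) / h
    psi0 -= f / df
print("central conformal factor psi(0) ~", psi0)
```

In the full construction the \(\epsilon\)-corrections enter these right-hand sides only through first derivatives of \(\psi\) and no derivatives of \(K_{rr}\), so the same direct integration applies unchanged.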
We impose that the initial values of gauge variables satisfy,
\[\alpha(t=0) =1, \tag{26}\] \[\beta(t=0) =0,\] (27) \[\widetilde{\pi}(t=0) =0,\] (28) \[\rho^{i}(t=0) =-2\psi^{-5}\Omega, \tag{29}\]
where the last two are required to initially satisfy the constraints (7a)-(7b).
### Numerical implementation
The following numerical scheme is implemented to evolve the system presented in section III. Time is integrated through a 4th-order Runge-Kutta with a CFL coefficient such that \(dt=0.25dx\), where \(dt\) is the time-step and \(dx\) denotes the uniform spatial grid spacing. Spatial derivatives are discretized via Finite Differences operators, which are 6th-order accurate in the interior and 3rd-order in the boundaries. Kreiss-Oliger dissipation is implemented with operators that are 8th-order accurate in the interior and 4th-order in the boundary. When no BH is present in the simulation, the grid extends from \(r_{i}=0\) to \(r_{out}=200\). During the evolution, the appearance of an apparent horizon is monitored; if one appears, then the code will excise a portion (including \(r=0\)) of the domain contained inside
this apparent horizon. A damped harmonic gauge [52, 53, 54] is adopted, which sets the gauge source vector to satisfy: \(H_{a}=z(\log{(\sqrt{g_{rr}}g_{T}\alpha^{-1})}n_{a}-g_{ab}\beta^{b}\alpha^{-1})\). We take a fixed value of \(z=0.5\).
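The interior stencils just described can be written compactly. The snippet below is a generic sketch rather than the code used here: the first-derivative coefficients are the standard 6th-order centred ones, while the Kreiss-Oliger operator is built from the 8th undivided difference with one common normalization, \(1/(2^{8}dx)\), and a tunable strength \(\sigma\); the boundary-modified operators are omitted.

```python
import numpy as np

def d1_6th(u, dx):
    """6th-order centred first derivative (interior points only)."""
    du = np.zeros_like(u)
    du[3:-3] = (-u[:-6] + 9*u[1:-5] - 45*u[2:-4]
                + 45*u[4:-2] - 9*u[5:-1] + u[6:]) / (60.0 * dx)
    return du

def ko_dissipation(u, dx, sigma=0.1):
    """Kreiss-Oliger dissipation from the 8th undivided difference (9-point stencil).

    Added to du/dt; the overall sign is chosen so that the highest-frequency
    grid mode is damped.
    """
    diss = np.zeros_like(u)
    d8 = (u[:-8] - 8*u[1:-7] + 28*u[2:-6] - 56*u[3:-5] + 70*u[4:-4]
          - 56*u[5:-3] + 28*u[6:-2] - 8*u[7:-1] + u[8:])
    diss[4:-4] = -sigma * d8 / (256.0 * dx)
    return diss
```

In the evolution loop these operators act on each dynamical field inside the right-hand-side evaluation of the 4th-order Runge-Kutta step, with \(dt=0.25\,dx\) as quoted above.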
### Monitoring quantities
As previously mentioned, an EFT description of a system involves a truncated expansion of a tower of curvature operators, and control over this expansion is lost if the curvature becomes too large. Determining whether the system remains within the regime of applicability of the EFT throughout evolution is a necessary condition 4 to guarantee that the observed behavior is representative of the true physics of the underlying theory in the low energy regime.
Footnote 4: Even if the theory is at all times within the EFT’s regime of validity, undesired issues such as secular effects [49, 50] could emerge and spoil the physics.
A reasonable indicator of whether the system is within the regime of applicability of the EFT is to check whether terms that are higher order in the perturbative scheme remain subdominant to lower-order ones [55, 56]. For example, one expects that \(\mid R\mid>\mid\epsilon_{1}R_{ab}R^{ab}\mid\;+\mid\epsilon_{2}R^{2}\mid\). Using the fact that \(R_{ab}=8\pi(T_{ab}-1/2Tg_{ab})+\mathcal{O}(\epsilon_{1},\epsilon_{2})\), and ignoring higher order terms in \(\epsilon_{1}\) and \(\epsilon_{2}\), the inequality can be expressed as:
\[\mathcal{E}_{R}\equiv 8\pi(\mid\epsilon_{1}\mid+\mid\epsilon_{2}\mid)\mid \left(-\Sigma^{2}g_{rr}+(\partial_{r}\phi)^{2}\right)\mid g_{rr}^{-1}<1. \tag{30}\]
Another way to discern whether the theory remains in the EFT regime of applicability is through some curvature invariant that is non-vanishing for vacuum spacetimes, for instance the Kretschmann scalar \(\mathcal{C}\equiv R_{abcd}R^{abcd}\). Using this invariant, a natural threshold for the regime of applicability of the EFT is given by \(\Lambda^{-2}\mathcal{C}>\Lambda^{-6}\mathcal{C}^{2}\), which can be easily rewritten as,
\[\mathcal{E}_{\mathcal{C}}\equiv\mathcal{C}\Lambda^{-4}\approx\mathcal{C}\max( \epsilon_{1}^{2},\epsilon_{2}^{2})<1. \tag{31}\]
During evolution, these two quantities will be monitored to get an idea whether the system is in the validity regime of the EFT, close to leaving it or outside of it5.
Footnote 5: There are, of course, many other quantities one could check, for example that the six-dimensional operators remain subdominant to the four-dimensional ones, i.e. that \(R_{ab}R^{ab}\Lambda^{-2}\) dominates the corresponding six-dimensional operator contribution, which is suppressed by \(\Lambda^{-4}\).
A further question to be monitored is the possible appearance of instabilities, and whether it occurs inside of the regime of applicability of the EFT.
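Both indicators are inexpensive pointwise diagnostics. A minimal sketch is given below (the array and argument names are assumptions); it flags grid points at which either of the conditions (30)-(31) fails.

```python
import numpy as np

def eft_indicator_R(eps1, eps2, Sigma, dphi_dr, g_rr):
    """E_R of eq. (30); values >= 1 signal leaving the EFT regime."""
    return (8.0 * np.pi * (abs(eps1) + abs(eps2))
            * np.abs(-Sigma**2 * g_rr + dphi_dr**2) / g_rr)

def eft_indicator_C(eps1, eps2, kretschmann):
    """E_C of eq. (31), using Lambda^{-4} ~ max(eps1^2, eps2^2)."""
    return kretschmann * max(eps1**2, eps2**2)

def in_eft_regime(eps1, eps2, Sigma, dphi_dr, g_rr, kretschmann):
    """Boolean mask: True wherever both indicators stay below unity."""
    return ((eft_indicator_R(eps1, eps2, Sigma, dphi_dr, g_rr) < 1.0)
            & (eft_indicator_C(eps1, eps2, kretschmann) < 1.0))
```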
## V Results
We turn our attention now to the evolution of the infalling self-gravitating scalar field with different choices of the coupling parameters \(\{\epsilon_{1},\epsilon_{2}\}\). Whether the incoming pulse collapses into a BH or bounces back to infinity will depend mostly on the choice of its initial parameters, amplitude \(A\), width \(\sigma\), and position \(r_{c}\). To study the collapse case, these three parameters will be fixed to \(A=0.0023\), \(\sigma=1\), and \(r_{c}=10\). For these values of the initial scalar profile, the ADM mass of the system is \(M_{ADM}=1.024\) when \(\epsilon_{1}=\epsilon_{2}=0\). The relevant length scales in the modified theory (\(|\epsilon_{1}|^{1/2}\approx|\epsilon_{2}|^{1/2}\approx\Lambda^{-1}\)) should then be compared to the mass of the system. For reference, when these couplings are large, \(|\epsilon_{1}|=|\epsilon_{2}|=0.1\), the difference in \(M_{ADM}\) is at the sub-percent level. Even though this work focuses on the collapsing scenario, the non-collapsing scenario was also studied. The evolution of that scenario, in the regime of couplings explored, is well-behaved up to \(|\epsilon|\approx 10^{-1}\); above such couplings, the system leaves the regime of applicability of the EFT. The evolution of the collapse scenario is more interesting, as we shall see in this section.
The main objective of these simulations is to explore how the evolution, in particular the behavior of the apparent horizon and of the curvature invariants, is altered as we modify the coupling parameters \(\{\epsilon_{1},\epsilon_{2}\}\). When couplings are turned off, and GR is evolved, the initial pulse propagates toward the origin until a BH forms. It quickly accretes the scalar field and settles to its final configuration. The final mass of the formed BH is \(M_{BH}\approx 1.022\), indicating that only a very small portion of the scalar field is not accreted by the BH. To study how this same scenario would evolve when couplings are non-vanishing, an array of simulations is run with pairs of \(\epsilon_{1}\) and \(\epsilon_{2}\) taking values from \(\{0,\tilde{\epsilon}_{n\pm}\}\), with \(\tilde{\epsilon}_{n\pm}=\pm\epsilon_{0}2^{n}\) for \(n=0..11\) and \(\epsilon_{0}=10^{-4}\).
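For reference, the coupling grid just described contains 25 values per coupling and 625 \((\epsilon_{1},\epsilon_{2})\) pairs; a short snippet reproducing the sampling (a sketch, not the author's script):

```python
import numpy as np

eps0 = 1.0e-4
ladder = [s * eps0 * 2**n for n in range(12) for s in (+1.0, -1.0)]
coupling_values = np.sort(np.array([0.0] + ladder))    # 25 values in [-0.2048, 0.2048]
pairs = [(e1, e2) for e1 in coupling_values for e2 in coupling_values]
print(len(coupling_values), len(pairs))                # 25 625
```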
Figure 1 displays whether the evolution for a pair of values \(\{\epsilon_{1},\epsilon_{2}\}\) is stable and collapses into a BH (green dots) or develops instabilities and crashes (red crosses). This figure shows that many points in the parameter space develop instabilities, mostly when at least one of the couplings is large, especially for large and positive \(\epsilon_{1}\) and large and negative \(\epsilon_{2}\).
To better understand what is happening, we will first focus on simulations with either \(\epsilon_{1}=0\) or \(\epsilon_{2}=0\) to study those terms individually. In Figure 2, we plot the maximum value of the Kretschmann scalar \(\mathcal{C}\) in space and time for this subset of the parameter space. Here dots represent simulations that were stable during the evolution and collapsed into BHs, while the crosses represent simulations that crashed. This figure shows how, relative to GR, a positive(negative) value of \(\epsilon_{1}(\epsilon_{2})\) tends to amplify the maximum value of \(\mathcal{C}\) achieved during the evolution. Similarly, for small enough magnitudes, negative(positive) values of \(\epsilon_{1}(\epsilon_{2})\) induce a suppression of the maximum value of \(\mathcal{C}\). The magnitude of these amplifications or suppressions grows as the scalar pulse approaches the origin and corrections to GR become stronger. In Figure 3, we plot several snapshots of the \(\mathcal{C}\) radial profile close to the collapse to a BH. Notice, however, in Figure 2 how for \(\epsilon_{1}\lessapprox-10^{-2}\) the behavior of \(\mathcal{C}\) drastically changes to amplification as opposed to suppression.
Figure 1: Parameter space of simulations for the collapse scenario. In green dots are simulations that are stable and collapse into a BH, and in red crosses are simulations that develop instabilities and crash.
Figure 2: Maximum value of \(\mathcal{C}\) across space and time for simulations in the collapse scenario for either \(\epsilon_{1}\neq 0\) or \(\epsilon_{2}\neq 0\). Dots indicate simulations that collapsed into BHs and remained stable; crosses indicate simulations that crashed. The red shaded region indicates values of \(\mathcal{C}\) that lie outside of the regime of applicability of the EFT in accordance with (31). Values of \(\mathcal{C}>10^{8}\) have been labeled as \(10^{8}\) for convenience.
An indicator that the evolution for \(\epsilon_{1}\lessapprox-10^{-2}\) is pathological and not physical is its convergence, which we display in Figure 13. This figure shows how convergence falls rapidly as the scalar field approaches the origin in cases with \(\epsilon_{1}<0\) especially losing all convergence for cases with \(\epsilon_{1}\lessapprox-10^{-2}\). Furthermore, one can see that the constraints in this regime of the couplings, as shown in Figure 14, show violations above the one percent level, which indicates one should question the validity of the results.
Figure 2 also shows in the red shaded region the values of the Kretschmann scalar \(\mathcal{C}\) that would violate the EFT limit for each value of \(\epsilon\) in accordance with (31). Interestingly, a small negative value of \(\epsilon_{1}\) produces a suppression of \(\mathcal{C}\), which, in principle, helps to avoid the restricted region. However, as \(\epsilon_{1}\) becomes more negative, at some point an instability is triggered, generating an amplification of \(\mathcal{C}\) and clearly driving the system outside of the EFT regime of applicability. Here it is important to stress the order of these events. If an instability is generated once the system is already outside the EFT regime, this means that physics, and not pathologies, drove the system there. If the system naturally explores higher curvatures and numerical instabilities appear after leaving the regime in which the EFT approach is valid, then we need not worry about these simulations crashing, and we simply acknowledge the inadequacy of the EFT prescription to describe these scenarios. This seems to be the case for positive(negative) values of \(\epsilon_{1}(\epsilon_{2})\), which induce an amplification of \(\mathcal{C}\) that drives the system outside of the valid EFT regime for \(|\epsilon|\gtrapprox 10^{-3}\), after which the simulations crash. In contrast, positive values of \(\epsilon_{2}\), which induce a suppression of \(\mathcal{C}\), manage to stay within the regime of applicability of the EFT and remain stable up to values of \(\epsilon_{2}\lessapprox 5\times 10^{-2}\); beyond these values some instabilities are triggered, the system leaves the regime of applicability of the EFT, and the simulations crash. Both large negative values of \(\epsilon_{1}\) and large positive values of \(\epsilon_{2}\) seem to develop instabilities while they are still within the regime of applicability of the EFT. Perhaps for these regimes, controlling the higher frequencies via a "fixing" approach as in [4, 41] could resolve the instabilities, but this is outside the scope of this work.
Similar behavior is observed for the maximum value of the Ricci scalar \(R\), which we show in Figure 4, where we also include, in shaded red, the region excluded by the EFT regime of applicability as indicated by the relation \(\mathcal{E}_{R}<1\), see eq. (30). Interestingly, all of the simulations that crashed for \(\epsilon_{2}<0\) do so within the allowed EFT regime dictated by (30); however, they are outside of the valid regime according to (31).
Another quantity that we can inspect is the radicand \(\chi_{3}\), see eq. (34), of the eigenvalue \(\lambda_{3\pm}\), which, as stated before, could be related to a character transition and a breakdown of the initial value problem if it becomes negative. Figure 5 shows the spatial minimum value of \(\chi_{3}\) as a function of time for simulations with \(\epsilon_{1}<0\)
Figure 4: Maximum value of the Ricci scalar \(R\) across space and time for simulations in the collapse scenario for either \(\epsilon_{1}\neq 0\) or \(\epsilon_{2}\neq 0\). Dots indicate simulations that collapsed into BHs and remained stable; crosses indicate simulations that crashed. The red shaded region indicates values of \(R\) that lie outside of the regime of applicability of the EFT in accordance with (30).
Figure 3: Snapshots of radial profiles of \(\mathcal{C}\) at different times close to the collapse into a BH for different values of \(\epsilon_{1}\) and \(\epsilon_{2}\).
or \(\epsilon_{2}>0\), which are the cases in which \(\chi_{3}\) decreases towards \(0\) and negative values. As Figure 5 shows, for small (large) enough values of \(\epsilon_{1}\) (\(\epsilon_{2}\)), \(\chi_{3}\) can become negative. As mentioned, very negative values of \(\epsilon_{1}\) trigger instabilities, losing convergence and leaving the EFT's applicability regime. Similar issues are present for large positive values of \(\epsilon_{2}\), where also \(\chi_{3}<0\). However, such issues manifest before the \(\chi_{3}<0\) threshold is violated. This suggests that this violation might not be the root cause of the instabilities but rather serves as a reliable indicator of their presence. This is not unexpected, since this condition was built from an incomplete characteristic analysis in which the scalar field was considered a source, ignoring the presence of the higher derivatives of the field in the gravitational equations.
A noticeable effect that can be appreciated in Figure 5 is that simulations that develop negative values of \(\chi_{3}\) also form an apparent horizon sooner than the \(\chi_{3}>0\) or GR cases. Figure 6 shows the areal radius \(r_{\mathcal{A}}\) of the formed horizons as a function of time for different values of the couplings. The behavior for the GR case is as expected; around \(t\approx 8.3\), an apparent horizon is found, and the areal radius quickly grows until all of the scalar profile has been accreted, and then relaxes to its final state. This is the same behavior shown by some of the curves in the plot, for example those for \(\epsilon_{1}=-0.0032\) and \(\epsilon_{1}=0.0064\), with the only difference being that these curves lie slightly above and below the GR curve, respectively. In contrast, for the \(\epsilon_{1}=-0.0256\), \(\epsilon_{1}=-0.0512\), \(\epsilon_{2}=0.0256\) cases, also shown in Figure 6, the systems experience premature collapses to smaller BHs; after that, \(r_{\mathcal{A}}\) undergoes a brief growth and then a substantial decrease before a new larger horizon (roughly the same size as the GR horizon) is formed. At this stage, we can see how \(r_{\mathcal{A}}\) grows above the GR curve before decreasing7 to join it as the final BH relaxes. Figure 6 also shows in dotted lines (\(\epsilon_{1}=0.0128\) and \(\epsilon_{2}=-0.0064\)) a couple of simulations that crashed; these also display the premature appearance of a small horizon before crashing. It is important to note that all of the simulations that show this type of exotic horizon behavior evolve away from the regime of applicability of the EFT defined by (31). The late-time behavior of all simulations, as shown in the plot, is similar; the final BH in all cases is essentially the same. This is not unexpected; once the scalar field has been accreted by the BH and the spacetime is essentially vacuum, the equations (2) reduce to Einstein's equations and can be evolved for very long times.
Footnote 7: The decrease of the BH’s areal radius, and hence, decrease of its area is related to violations of the Null Convergence Condition [57; 58], similar behavior was observed in [41]
Having studied the \(\epsilon_{1}\) and \(\epsilon_{2}\) cases individually, we can outline a few observations.
1. Positive(negative) values of \(\epsilon_{1}(\epsilon_{2})\) strongly amplify the maximum value of curvature invariants such as \(\mathcal{C}\) and \(R\) in contrast to GR. Their simulations are well behaved as long as the system stays within the regime of applicability of the EFT stipulated by (30)-(31); beyond that regime simulations tend to crash.
2. Negative(positive) values of \(\epsilon_{1}(\epsilon_{2})\) strongly suppress the maximum value of curvature invariants such as \(\mathcal{C}\) and \(R\) in contrast to GR. Even though the suppression of these curvature invariants would help keep the system within the regime of applicability of the EFT, for large enough values of the coupling (especially for \(\epsilon_{1}\)),
Figure 5: Minimum value of \(\chi_{3}\) (the radicand of the eigenvalue \(\lambda_{3\pm}\)) as a function of time for simulations in the collapse case for different values of \(\epsilon_{1}\) and \(\epsilon_{2}\). Once an apparent horizon is found, the minimum is computed outside the horizon, hiding negative values inside; this explains the sharp transitions.
Figure 6: Areal radius \(r_{\mathcal{A}}\) of the apparent horizon as a function of time for different values of \(\epsilon_{1}\) and \(\epsilon_{2}\). The dashed curves correspond to simulations that crashed after the appearance of the apparent horizon.
the solutions lose convergence, and the suppression becomes an amplification, driving the system outside of the EFT regime.
3. When the couplings are sufficiently small and within the regime of the EFT, the behavior of the BH formed is very similar to that of the BH formed in the GR case. Once the horizon is formed, the high curvature regions are hidden past the horizon, making modifications extremely small.
4. When the couplings are large enough, the BH formation becomes more exotic. Premature smaller BHs can form before a horizon similar to the one formed in the GR case appears. In addition, these smaller BHs can shrink in size during their short existence. Note, however, that the simulations in these regimes are always outside of the regime of applicability of the EFT, and hence the relevance of these results should be questioned.
With these observations, the interpretation of results where both \(\epsilon_{1}\) and \(\epsilon_{2}\) are non-zero is more direct. With our definitions of \(\epsilon_{1}=a_{1}\Lambda^{-2}\) and \(\epsilon_{2}=a_{2}\Lambda^{-2}\), \(\Lambda\) has dimension of inverse length and both \(a_{1}\) and \(a_{2}\) are dimensionless. For the most part, when one of the couplings is large and the other small, the behavior of the system is closer to the behavior of the large coupling, as we observed in the \(\epsilon_{1}\neq 0\) or \(\epsilon_{2}\neq 0\) cases. More interesting behavior is observed when \(\epsilon_{1}\) and \(\epsilon_{2}\) are of the same order. For example, in the case where both \(\epsilon_{1}\) and \(\epsilon_{2}\) are positive, there is a competition between suppression and amplification induced in the curvature invariants, sometimes allowing the system to evolve with larger values of these couplings (in comparison to the individual cases) and stay within the regime of applicability of the EFT. This is the case for simulations with \(\epsilon_{1}\approx 2\epsilon_{2}\), as can be seen in Figure 7, where a snapshot of the radial profile of \(\mathcal{C}\) is plotted for such configurations. In the case where the signs of the couplings are opposite, the effects of their terms tend to push in the same direction and consequently sometimes take the system outside of the valid regime or trigger instabilities at smaller values of the coupling in comparison to the individual \(\epsilon_{1}\) or \(\epsilon_{2}\) cases.
We will not spend much time going through the different cases in which both couplings are non-vanishing; however, informative plots are provided showing the different control quantities discussed for the individual \(\epsilon_{1}\) and \(\epsilon_{2}\) cases. Figure 8 shows the space-time minimum value of the radicand \(\chi_{2}\) of the eigenvalue \(\lambda_{2\pm}\). In contrast to what was previously observed for the \(\chi_{3}\) quantity, when \(\chi_{2}\) becomes negative the couplings are already large enough to take the system outside the EFT regime. Figure 9 shows the minimum space-time value of \(\chi_{3}\) for each simulation. The interpretation of this plot follows directly from what was observed for the individual coupling cases. As mentioned before, we can see that when \(\epsilon_{1}\approx\epsilon_{2}\), simulations that would have \(\chi_{3}<0\) if only \(\epsilon_{2}\) were turned on, or crash if only \(\epsilon_{1}\) were on, now suffer none of those issues. Similar behavior is observed for the rest of the relevant quantities. Figure 10 displays the maximum value of \(\mathcal{E}_{R}\); on it, dark red dots correspond to points where the EFT condition \(\mathcal{E}_{R}<1\) was violated. Figure 11 shows the maximum of \(\mathcal{E}_{\mathcal{C}}\) over time and space; the dark red dots represent points at which the EFT condition was violated. Finally, Figure 12 shows the maximum space-time value of \(\mathcal{C}\).
Figure 8: Minimum value of \(\chi_{2}\) over time and space for the collapse scenario with \(A=0.0023\). Dark blue marks represent simulations where the minimum value of \(\chi_{2}\) was at some point smaller than \(0\), making the eigenvalue complex, potentially indicating loss of well-posedness. Here crosses indicate that the simulation crashed.
Figure 7: Snapshot of radial profile of \(\mathcal{C}\) at t=8.11 for simulations with pairs of values of \(\epsilon_{1}\) and \(\epsilon_{2}\). Notice how the simulation with \(\epsilon_{1}=0.0128\) and \(\epsilon_{2}=0.064\) does not achieve the large values of \(\mathcal{C}\) that the simulation with only \(\epsilon_{1}=0.0128\) does.
## VI Discussion
This study investigates the phenomenon of gravitational collapse in spherical symmetry within the framework of a dimension-four EFT extension to GR, commonly known as Quadratic Gravity. Within the EFT perspective, the solutions derived from this theory are expected to differ from those of GR only in the presence of matter, with the dimension-four operators representing leading-order corrections to GR within an EFT expansion.
In this particular research, instead of treating the additional degrees of freedom associated with higher derivatives as massive spin-0 and spin-2 modes, as done in previous studies such as [24; 25] under Ricci-flat (vacuum) scenarios, an "Order Reduction" technique [48] is employed to eliminate these degrees of freedom. Through numerical simulations, this work is able to dynamically form BHs from the collapse of a scalar field. In addition, we identify a parameter space regime where the system is well-behaved and remains within the applicable range of the EFT. However, strong deviations in the dynamics of curvature invariants during the collapse are observed within this regime. These deviations could be particularly relevant in astrophysical scenarios like the merger of a pair of neutron stars,
Figure 11: Maximum value of \(\mathcal{E}_{\mathcal{C}}\) over time and space for the collapse scenario with \(A=0.0023\). Dark red dots correspond to simulations where the EFT regime of applicability condition \(\mathcal{E}_{\mathcal{C}}<1\) was violated at some point. Here crosses indicate that the simulation crashed.
Figure 12: Maximum value of \(\mathcal{C}\) over time and space for the collapse scenario with \(A=0.0023\). Values of \(\mathcal{C}>10^{6}\) have been labeled as \(10^{6}\) for convenience. Here crosses indicate that the simulation crashed.
Figure 10: Maximum value of \(\mathcal{E}_{R}\) over time and space for the collapse scenario with \(A=0.0023\). Dark red dots correspond to simulations where the EFT regime of applicability condition \(\mathcal{E}_{R}<1\) was violated at some point. Here crosses indicate that the simulation crashed.
Figure 9: Minimum value of \(\chi_{3}\) over time and space for the collapse scenario with \(A=0.0023\). Dark blue dots represent simulations where the minimum value of \(\chi_{3}\) was at some point smaller than 0, making the eigenvalue complex, potentially indicating loss of well-posedness. Here crosses indicate that the simulation crashed.
where the altered system dynamics could have discernible effects on the emission of gravitational radiation. Neutron stars, both in isolation and in binaries, will be studied within this EFT extension to GR in future work.
Additionally, instances were found where simulations, initially showing good behavior, venture into high-curvature regimes that exceed the limits of the EFT approximation. In such cases, it becomes necessary to acknowledge the inadequacy of the chosen approach in describing the system dynamics within those specific scenarios. The specific value of the couplings \(\epsilon_{1}\) and \(\epsilon_{2}\) (consequently the value \(\Lambda\)) at which this will be the case is entirely dependent on the characteristics and relevant scales in the system8. Furthermore, specific regimes were identified where the system exhibits instabilities before the validity of the EFT description ceases. In these cases, alternative approaches such as "fixing the equations" may be implemented to mitigate the emergence of instabilities and control higher frequencies. This treatment will be explored in the single neutron star and neutron star binary scenarios in future work.
Footnote 8: For instance, allowing the scalar pulse to have a larger width, while adjusting the amplitude to keep the ADM mass fixed, allows to carry out stable simulations that stay within the limits of the EFT for larger values of \(\epsilon_{1}\) and \(\epsilon_{2}\).
## VII Acknowledgements
I thank Miguel Bezares, Pablo A. Cano, Guillaume Dideron, Pau Figueras, Aaron Held, Guillermo Lara, and Luis Lehner for valuable discussions. This work was supported in part by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Economic Development, Job Creation and Trade.
## Appendix A Convergence
To check the convergence of the solutions, the base uniform grid spacing \(dx=0.04\) is adopted, and the convergence factor is computed as,
\[\mathcal{Q}\equiv\ln\bigg{(}\frac{||u_{dx}-u_{dx/2}||_{2}}{||u_{dx/2}-u_{dx/4 }||_{2}}\bigg{)}, \tag{10}\]
here, \(u_{dx}\), \(u_{dx/2}\) and \(u_{dx/4}\) stand for any field evolved with resolutions \(dx\), \(dx/2\) and \(dx/4\) respectively. In Figure 13 we plot the convergence factor \(\mathcal{Q}\) for the \(K_{rr}\) variable in the BH collapse scenario: \(A=0.0023\), \(\sigma=1\), \(r_{c}=10\), \(z=0.5\), \(\kappa=2\). For practical reasons, we only plot the convergence until an apparent horizon has been detected. The convergence factor behaves similarly for the other dynamical variables. The black curve in Figure 13 shows the convergence factor for the GR case: the convergence is \(\approx 4\) at the beginning of the simulation and, close to the collapse, \(\mathcal{Q}\) quickly climbs to values between 5 and 6. This is consistent with the 4th-order accuracy of the Runge-Kutta time integrator and the 6th-order accuracy of the finite difference derivative operators. The behavior is similar for essentially all of the \(\epsilon_{2}\neq 0\) simulations. The result changes drastically for the \(\epsilon_{1}\neq 0\) simulations, where we can see that the convergence factor drops to lower values as the system approaches collapse. Some of these simulations retain acceptable convergence factors; for example, the cases with \(\epsilon_{1}=10^{-3}\), \(\epsilon_{1}=5\times 10^{-3}\) and \(\epsilon_{1}=-10^{-3}\) drop to convergence factors of \(\mathcal{Q}\approx 4\), \(\mathcal{Q}\approx 3\) and \(\mathcal{Q}\approx 2\), respectively. However, when the magnitude of \(\epsilon_{1}\) increases, we can see how all convergence is quickly lost. This mainly coincides with the regime in which we have identified simulations leaving the regime of applicability of the EFT.
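A helper for this self-convergence factor is sketched below. It assumes nested vertex-centred grids, so that every other point of the finer run coincides with a point of the coarser one, and it uses a base-2 logarithm so that \(\mathcal{Q}\) directly equals the convergence order, consistent with the quoted values \(\mathcal{Q}\approx 4\)-\(6\).

```python
import numpy as np

def convergence_factor(u_dx, u_dx2, u_dx4):
    """Self-convergence factor Q from runs at resolutions dx, dx/2 and dx/4.

    Assumes nested vertex-centred grids, so u_dx2[::2] and u_dx4[::2] live on
    the points of the next-coarser grid.
    """
    num = np.linalg.norm(u_dx - u_dx2[::2])
    den = np.linalg.norm(u_dx2 - u_dx4[::2])
    return np.log2(num / den)
```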
## Appendix B Constraints
Monitoring that the constraints (7a), (7b), (7c), and (7d) remain under control is important to attest to the quality of the performed simulations. In Figure 14 we plot the \(l2\)-norm of the Hamiltonian constraint (7c) for different values of \(\epsilon_{1}\) and \(\epsilon_{2}\). Here we have normalized by the \(l2\)-norm of the most relevant terms that define it, to get a relative notion of the violation of the constraints. The other constraints display similar behavior, so we do not show them. The black curve shows our reference GR simulation using the same parameters as in the convergence test, for the \(dx=0.02\) grid spacing. The GR case Hamiltonian violation
Figure 13: Convergence factor \(\mathcal{Q}\) for the \(K_{rr}\) variable as a function of time close to the time of collapse for different values of \(\epsilon_{1}\) and \(\epsilon_{2}\).
remains extremely small during the evolution, rising as expected close to the collapse time but never exceeding a relative error of \(10^{-8}\). For convenience, we only plot the constraint violations until an apparent horizon is formed; after the apparent horizon forms and excision is applied, the constraint violations naturally become smaller.
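A relative constraint monitor of the kind used for Figure 14 can be assembled as below (a sketch; the specific normalization, the sum of the absolute values of the individual contributions, is an assumption about what "most relevant terms" means).

```python
import numpy as np

def relative_violation(terms):
    """Relative l2-norm violation of a constraint written as sum(terms) = 0.

    `terms` is a list of grid arrays holding the individual contributions, e.g.
    (K^2, -K_ij K^ij, R, -16 pi G rho, -2 eps rho^M) for the Hamiltonian constraint.
    """
    terms = np.asarray(terms)
    residual = terms.sum(axis=0)
    scale = np.linalg.norm(np.abs(terms).sum(axis=0))
    return np.linalg.norm(residual) / scale
```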
The situation changes once either of the couplings is non-vanishing; the constraint violations remain below the \(10^{-8}\) relative error for most of the simulation but then quickly rise as the scalar field profile approaches the center of coordinates. In most cases, the constraint violation remains below the \(1\%\) level throughout the simulation. However, there are cases in which the violations lie within a worrying \(1\%\) to \(10\%\) range, as for \(\epsilon_{2}=-10^{-3}\) and \(\epsilon_{1}=-2.5\times 10^{-2}\), and cases where the violations exceed \(10\%\) and even \(1000\%\), as for \(\epsilon_{1}=5\times 10^{-3}\) and \(\epsilon_{2}=-2.5\times 10^{-3}\). These larger constraint violations are no surprise; manipulations in the constraint equations were performed that assume that the modifying terms remain corrective (i.e., within the applicable regime of the EFT), and these corrections become greater as the pulse collapses. The cases where constraint violations are too large to trust the simulations also belong to the parameter regime that has already shown, either through loss of convergence or by leaving the EFT regime, that these solutions cannot be trusted.
|
2303.07090 | On $p_g$-ideals in positive characteristic | Let $(A,\mathfrak{m})$ be an excellent normal domain of dimension two
containing a field $k \cong A/\mathfrak{m}$. An $\mathfrak{m}$-primary ideal
$I$ is said to be a $p_g$-ideal if the Rees algebra $A[It]$ is a Cohen-Macaulay normal
domain. If $k$ is algebraically closed then Okuma, Watanabe and Yoshida proved
that $A$ has $p_g$-ideals and furthermore the product of two $p_g$-ideals is a
$p_g$ ideal. In a previous paper we showed that if $k$ has characteristic zero
then $A$ has $p_g$-ideals. In this paper we prove that if $k$ is a perfect field
of positive characteristic then $A$ also has $p_g$-ideals. | Tony J. Puthenpurakal | 2023-03-13T13:19:42Z | http://arxiv.org/abs/2303.07090v1 | # On \(p_{g}\)-ideals in positive characteristic
###### Abstract.
Let \((A,\mathfrak{m})\) be an excellent normal domain of dimension two containing a field \(k\cong A/\mathfrak{m}\). An \(\mathfrak{m}\)-primary ideal \(I\) is said to be a \(p_{g}\)-ideal if the Rees algebra \(A[It]\) is a Cohen-Macaulay normal domain. If \(k\) is algebraically closed then Okuma, Watanabe and Yoshida proved that \(A\) has \(p_{g}\)-ideals and furthermore the product of two \(p_{g}\)-ideals is a \(p_{g}\)-ideal. In a previous paper we showed that if \(k\) has characteristic zero then \(A\) has \(p_{g}\)-ideals. In this paper we prove that if \(k\) is a perfect field of positive characteristic then \(A\) also has \(p_{g}\)-ideals.
Key words and phrases:\(p_{g}\)-ideal, normal Rees rings, Cohen-Macaulay rings, stable ideals 2020 Mathematics Subject Classification: Primary 13A30, 13B22; Secondary 13A50, 14B05
## 1. Introduction
Dear Reader, while reading this paper it is a good idea to have [10] nearby. Let \((A,\mathfrak{m})\) be a normal domain of dimension two. By Zariski's theory of integrally closed ideals in a two dimensional regular local ring, we get that if \(A\) is regular and \(I\) is an integrally closed \(\mathfrak{m}\)-primary ideal then the Rees algebra \(\mathcal{R}(I)=A[It]\) is a Cohen-Macaulay normal domain; see [4, Chapter 14] for a modern exposition. Later Lipman proved that if \((A,\mathfrak{m})\) is a two dimensional rational singularity then analogous results hold, see [6].
Assume \((A,\mathfrak{m})\) is an excellent normal domain of dimension two containing an algebraically closed field \(k\cong A/\mathfrak{m}\). For such rings Okuma, Watanabe and Yoshida in [8] introduced (using geometric techniques) the notion of \(p_{g}\)-ideals as follows: let \(I\) be an \(\mathfrak{m}\)-primary ideal in \(A\). The ideal \(I\) has a resolution \(f\colon X\to\operatorname{Spec}(A)\) with \(I\mathcal{O}_{X}\) invertible. Then \(I\mathcal{O}_{X}=\mathcal{O}_{X}(-Z)\) for some anti-nef cycle \(Z\). It can be shown that \(\ell_{A}(H^{1}(X,\mathcal{O}_{X}(-Z)))\leq p_{g}(A)\) where \(p_{g}(A)=\ell_{A}(H^{1}(X,\mathcal{O}_{X}))\) is the geometric genus of \(A\) and \(Z\) is an anti-nef cycle such that \(\mathcal{O}_{X}(-Z)\) has no fixed component. An integrally closed \(\mathfrak{m}\)-primary ideal \(I\) with \(\ell_{A}(H^{1}(X,\mathcal{O}_{X}(-Z)))=p_{g}(A)\) is called a \(p_{g}\)-ideal. They showed that \(p_{g}\) ideals exist in \(A\). Furthermore, they also proved that if \(I,J\) are two \(\mathfrak{m}\)-primary \(p_{g}\)-ideals then \(IJ\) is a \(p_{g}\)-ideal. Furthermore \(I\) is stable and so the Rees algebra \(\mathcal{R}(I)\) is a Cohen-Macaulay normal domain. They also proved that if \(A\) is also a rational singularity then any \(\mathfrak{m}\)-primary integrally closed ideal is a \(p_{g}\)-ideal. In a later paper [9] they showed that if \(\mathcal{R}(I)\) is a Cohen-Macaulay normal domain then \(I\) is a \(p_{g}\)-ideal.
Motivated by this result we made the following definition in [10]:
**Definition 1.1**.: Let \((A,\mathfrak{m})\) be a normal domain of dimension two. An \(\mathfrak{m}\)-primary ideal \(I\) is said to be \(p_{g}\)-ideal in \(A\) if the Rees algebra \(\mathcal{R}(I)=A[It]\) is a normal Cohen-Macaulay domain.
We note that if \(I\) is a \(p_{g}\)-ideal then all powers of \(I\) are integrally closed. Furthermore if the residue field of \(A\) is infinite then \(I\) is stable (i.e., reduction number of \(I\) is \(\leq 1\)), see [3, Theorem 1]. From the definition it does not follow that if \(I,J\) are \(p_{g}\)-ideals then the product \(IJ\) is also a \(p_{g}\) ideal. However if \(A\) is also analytically unramified with an infinite residue field then, by a result of Rees, the product of two \(p_{g}\)-ideals is \(p_{g}\), see [11, 2.6] (also see [10, 1.2]). _We do not know whether every normal domain of dimension two has a \(p_{g}\)-ideal._
In [10] we proved that if \((A,\mathfrak{m})\) is an excellent two dimensional normal domain containing a field \(k\cong A/\mathfrak{m}\) of characteristic zero, then there exist \(p_{g}\)-ideals in \(A\). The technique in [10] fails if \(k\) has positive characteristic. In this paper we prove
**Theorem 1.2**.: _Let \((A,\mathfrak{m})\) be an excellent two dimensional normal domain containing a perfect field \(k\cong A/\mathfrak{m}\) of characteristic \(p>0\). Then there exist \(p_{g}\)-ideals in \(A\)._
To prove Theorem 1.2 we build on the techniques developed in [10]. The main new technique is a pair of spectral sequences discovered by Ellingsrud-Skjelbred; see [1, section 2].
We now describe in brief the contents of this paper. In section two we discuss some preliminaries that we need. In section three we discuss the two spectral sequences discovered by Ellingsrud-Skjelbred and give an application that we need. In section four we give a proof of Theorem 1.2.
## 2. Preliminaries
Throughout this section \((A,\mathfrak{m})\) is a Noetherian local ring of dimension two containing a perfect field \(k\cong A/\mathfrak{m}\).
Most of the following was proved in [10].
**Lemma 2.1**.: _Let \((A,\mathfrak{m})\) be a Noetherian local ring containing a perfect field \(k\cong A/\mathfrak{m}\). Let \(\ell\) be a finite extension of \(k\). Set \(B=A\otimes_{k}\ell\). Then we have the following_
1. \(B\) _is a finite flat_ \(A\)_-module._
2. \(B\) _is a Noetherian ring._
3. \(B\) _is local with maximal ideal_ \(\mathfrak{m}B\) _and residue field isomorphic to_ \(\ell\)_._
4. \(B\) _contains_ \(\ell\)_._
5. \(A\) _is Cohen-Macaulay (Gorenstein, regular) if and only if_ \(B\) _is Cohen-Macaulay (Gorenstein, regular)._
6. _If_ \(A\) _is excellent then so is_ \(B\)_._
7. _If_ \(A\) _is normal then so is_ \(B\)_._
8. _If_ \(A\) _is excellent normal and_ \(I\) _is an integrally closed ideal in_ \(A\) _then_ \(IB\) _is an integrally closed ideal in_ \(B\)_._
9. _If_ \(\ell\) _is a Galois extension of_ \(k\) _with Galois group_ \(G\) _then_ \(G\) _acts on_ \(B\) _(via_ \(\sigma(a\otimes t)=a\otimes\sigma(t)\)_). Furthermore_ \(B^{G}=A\)_._
10. _If_ \(A\) _is Cohen-Macaulay of dimension two, the natural map_ \(H^{2}_{\mathfrak{m}}(A)\to H^{2}_{\mathfrak{m}}(B)\) _is an inclusion._
Proof.: For (1)-(8) see [10, 2.1].
(9) It is clear that \(G\) acts on \(B\) (via the action described) and \(A\subseteq B^{G}\). By the normal basis theorem, cf. [5, Chapter 6, Theorem 13.1], there exists \(x\in\ell\) such that \(\{\sigma(x)\colon\sigma\in G\}\) is a basis of \(\ell\) over \(k\). A basis of \(B\) as an \(A\)-module is \(\{1\otimes\sigma(x)\colon\sigma\in G\}\).
Let \(\xi\in B^{G}\). Let \(\xi=\sum_{\sigma}a_{\sigma}(1\otimes\sigma(x))\). Let \(e\) be the identity in \(G\). Let \(\tau\in G\). Then notice
\[\xi=\tau^{-1}\xi=\sum_{\sigma}a_{\sigma}(1\otimes\tau^{-1}\sigma(x))\]
Comparing terms we get \(a_{e}=a_{\tau}\) for all \(\tau\in G\). So
\[\xi =a_{e}\left(\sum_{\sigma}1\otimes\sigma(x)\right),\] \[=a_{e}(1\otimes\sum_{\sigma}\sigma(x)),\] \[=a_{e}(r\otimes 1)\quad\text{where }r=\sum_{\sigma}\sigma(x)\in k,\] \[=a_{e}r(1\otimes 1)\in A.\]
The result follows.
(10) We have an exact sequence of finite dimensional \(k\) vector spaces
\[0\to k\to\ell\to V\to 0.\]
So we have an exact sequence of \(A\)-modules
\[0\to A\to B\to A\otimes_{k}V\to 0.\]
We note that both \(B\) and \(A\otimes_{k}V\) are free \(A\)-modules. As \(H^{i}_{\mathfrak{m}}(A)=0\) for \(i<2\) the result follows.
A construction: Fix an algebraic closure \(\overline{k}\) of \(k\). We investigate properties of \(A\otimes_{k}\overline{k}\).
Let
\[\mathcal{C}_{k}=\{E\mid E\text{ is a finite extension of }k\text{ in }\overline{k}\}.\]
We note that \(\mathcal{C}_{k}\) is a directed system of fields with \(\lim_{E\in\mathcal{C}_{k}}E=\overline{k}\). For \(E\in\mathcal{C}_{k}\) set \(A^{E}=A\otimes_{k}E\). Then by 2.1, \(A^{E}\) is a finite flat extension of \(A\). Also \(A^{E}\) is local
with maximal ideal \(\mathfrak{m}^{E}=\mathfrak{m}A^{E}\). Clearly \(\{A^{E}\}_{E\in\mathcal{C}_{k}}\) forms a directed system of local rings and we have \(\lim_{E\in\mathcal{C}_{k}}A^{E}=A\otimes_{k}\overline{k}\). By [2, Chap. 0. (10.3.13)] it follows that \(A\otimes_{k}\overline{k}\) is a Noetherian local ring (say with maximal ideal \(\mathfrak{m}^{\overline{k}}\)). Note that we may consider \(A^{E}\) as a subring of \(A\otimes_{k}\overline{k}\). We have
\[A\otimes_{k}\overline{k}=\bigcup_{E\in\mathcal{C}_{k}}A^{E}\quad\text{and} \quad\mathfrak{m}^{\overline{k}}=\bigcup_{E\in\mathcal{C}_{k}}\mathfrak{m}^{E}.\]
It follows that \(\mathfrak{m}(A\otimes_{k}\overline{k})=\mathfrak{m}^{\overline{k}}\). It is also clear that \(A\otimes_{k}\overline{k}\) contains \(\overline{k}\) and its residue field is isomorphic to \(\overline{k}\). The extension \(A\to A\otimes_{k}\overline{k}\) is flat with fiber \(\cong\overline{k}\). In particular \(\dim A\otimes_{k}\overline{k}\) is two.
**2.4**.: Let \(F\in\mathcal{C}_{k}\). Set
\[\mathcal{C}_{F}=\{E\mid E\in\mathcal{C}_{k},E\supseteq F\}.\]
Then \(\mathcal{C}_{F}\) is cofinal in \(\mathcal{C}_{k}\). So we have \(\lim_{E\in\mathcal{C}_{F}}A^{E}=A\otimes_{k}\overline{k}\). Also note that if \(E\in\mathcal{C}_{F}\) then
\[A^{E}=A\otimes_{k}E=A\otimes_{k}F\otimes_{F}E=A^{F}\otimes_{F}E.\]
It also follows that \(\mathfrak{m}^{E}=\mathfrak{m}^{F}A^{E}\).
For a proof of the following result see [10, 3.3].
**Lemma 2.5**.: _If \(A\) is excellent then so is \(A\otimes_{k}\overline{k}\)._
The main properties of \(A\otimes_{k}\overline{k}\) that we need are summarised in the following result, which is Theorem 3.4 in [10].
**Theorem 2.6**.: _(with hypotheses as above) Set \(T=A\otimes_{k}\overline{k}\) and \(\mathfrak{n}=\mathfrak{m}^{\overline{k}}\). We have_
1. _A is Cohen-Macaulay (Gorenstein, regular) if and only if_ \(T\) _is Cohen-Macaulay (Gorenstein, regular)._
2. _If_ \(A\) _is a normal domain if and only if_ \(T\) _is a normal domain._
3. _Assume_ \(A\) _is an excellent normal domain. Then we have_ 1. \(I\) _is integrally closed in_ \(A\) _if and only if_ \(IT\) _is integrally closed in_ \(T\)__ 2. \(I\) _is a_ \(p_{g}\) _ideal in_ \(A\) _if and only if_ \(IT\) _is a_ \(p_{g}\) _ideal in_ \(T\)_._
## 3. Ellingsrud-Skjelbred spectral sequences and an application
In this section we describe the Ellingsrud-Skjelbred spectral sequences (we follow the exposition given in [7, 8.6]). We also give an application which is crucial for us.
**3.1**.: Let \(S\) be a commutative Noetherian ring. For a ring \(A\), let \(Mod(A)\) denote the category of left \(A\)-modules.
(1) Let \(G\) be a finite group. Let \(S[G]\) be the group ring and let \(Mod(S[G])\) be the category of left \(S[G]\)-modules. Let \((-)^{G}\) be the functor of \(G\)-fixed points. Let \(H^{n}(G,-)\) be the \(n^{th}\) right derived functor of \((-)^{G}\).
(2) Let \(\mathfrak{a}\) be an ideal in \(S\). Let \(\Gamma_{\mathfrak{a}}(-)\) be the torsion functor associated to \(\mathfrak{a}\). Let \(H^{n}_{\mathfrak{a}}(-)\) be the \(n^{th}\) right derived functor of \(\Gamma_{\mathfrak{a}}(-)\). Usually \(H^{n}_{\mathfrak{a}}(-)\) is called the \(n^{th}\) local cohomology functor of \(S\) with respect to \(\mathfrak{a}\).
(3) If \(M\in Mod(S[G])\) then note \(\Gamma_{\mathfrak{a}}(M)\in Mod(S[G])\).
**3.2**.: Ellingsrud-Skjelbred spectral sequences are constructed as follows: Consider the following sequence of functors
\[(i)\quad Mod(S[G])\xrightarrow{(-)^{G}}Mod(S)\xrightarrow{\Gamma_{\mathfrak{a} }}Mod(S),\]
\[(ii)\quad Mod(S[G])\xrightarrow{\Gamma_{\mathfrak{a}}}Mod(S[G])\xrightarrow{(- )^{G}}Mod(S)\]
We then notice
1. The above compositions are equal.
2. It is possible to apply the Grothendieck spectral sequence of a composite of functors to both (i) and (ii) above; see [7, 8.6.2].
Following Ellingsrud-Skjelbred we let \(H^{n}_{\mathfrak{a}}(G,-)\) denote the \(n^{th}\) right derived functor of this composite functor. So by (i) and (ii) we have two first quadrant spectral sequences for each \(S[G]\)-module \(M\)
\[(\alpha)\colon\qquad E^{p,q}_{2}=H^{p}_{\mathfrak{a}}(H^{q}(G,M))\Longrightarrow H^{p+q}_{\mathfrak{a}}(G,M),\text{ and}\]
\[(\beta)\colon\qquad\mathcal{E}^{p,q}_{2}=H^{p}(G,H^{q}_{\mathfrak{a}}(M))\Longrightarrow H^{p+q}_{\mathfrak{a}}(G,M).\]
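For later use we also record the standard exact sequence of low-degree terms that any first-quadrant spectral sequence provides; applied to \((\alpha)\) it reads

\[0\to H^{1}_{\mathfrak{a}}(M^{G})\to H^{1}_{\mathfrak{a}}(G,M)\to H^{0}_{\mathfrak{a}}(H^{1}(G,M))\to H^{2}_{\mathfrak{a}}(M^{G})\to H^{2}_{\mathfrak{a}}(G,M).\]

This is essentially the form in which \((\alpha)\) enters the proof of Theorem 3.5 below.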
**Remark 3.3**.: (1) If \(S\) is an \(\mathbb{N}\)-graded ring then \(S[G]\) is also \(\mathbb{N}\)-graded (with \(\deg\sigma=0\) for all \(\sigma\in G\)).
(2) If \(M\) is a finitely generated graded left \(S[G]\)-module then the \(H^{n}(G,M)\) are finitely generated graded \(S\)-modules. This can be easily seen by taking a graded free resolution of \(S\) consisting of finitely generated free graded \(S[G]\)-modules.
(3) Ellingsrud-Skjelbred spectral sequences have an obvious graded analogue.
### Application
**3.4**.: Setup: Let \((A,\mathfrak{m})\) be a two dimensional Cohen-Macaulay local ring containing a field \(k\cong A/\mathfrak{m}\). Let \(\ell\) be a finite Galois extension of \(k\) with Galois group \(G\). Set \(B=A\otimes_{k}\ell\) and let \(\mathfrak{n}\) be the maximal ideal of \(B\). Let \(G\) act on \(B\) (as described in 2.1(9)). Note \(B^{G}=A\); see 2.1(9). Let \(I\) be an \(\mathfrak{n}\)-primary ideal of \(B\) which is \(G\)-invariant (i.e., \(\sigma(I)=I\) for each \(\sigma\in G\)). Let \(\mathcal{R}(I)=B[It]\) be the Rees algebra of \(I\). Then note we have a natural action of \(G\) on \(\mathcal{R}(I)\). Let \(\mathcal{C}=\mathcal{R}(I)^{G}\). By a result of E. Noether, \(\mathcal{C}\) is a graded finitely generated \(A=\mathcal{C}_{0}\)-algebra. Furthermore \(\mathcal{R}(I)\) is a finite \(\mathcal{C}\)-module. We prove
**Theorem 3.5**.: _(with hypotheses as in 3.4.) Let \(\mathfrak{M}\) be the graded maximal ideal of \(\mathcal{C}\). If \(\mathcal{R}(I)\) is Cohen-Macaulay then_
1. \(H^{0}_{\mathfrak{M}}(\mathcal{C})=H^{1}_{\mathfrak{M}}(\mathcal{C})=0\)_._
2. \(H^{2}_{\mathfrak{M}}(\mathcal{C})\) _has finite length and_ \(H^{2}_{\mathfrak{M}}(\mathcal{C})_{0}=0\)_._
3. _For some_ \(r>0\) _there exists an_ \(\mathfrak{m}\)_-primary ideal_ \(J\) _in_ \(A\) _such that_ \(A[Jt]=\mathcal{C}^{<r>}\) _is Cohen-Macaulay._
Proof.: We first consider the Ellingsrud-Skjelbred spectral sequence \((\beta)\) with \(S=\mathcal{C}\) and \(M=\mathcal{R}(I)\). We note that as \(\mathcal{R}(I)\) is a finite \(\mathcal{C}\)-module we get that \(\sqrt{\mathfrak{M}\mathcal{R}(I)}\) is the graded maximal ideal of \(\mathcal{R}(I)\). As \(\mathcal{R}(I)\) is Cohen-Macaulay of dimension \(3\) we get that \(H^{i}_{\mathfrak{M}}(\mathcal{R}(I))=0\) for \(i\leq 2\). Furthermore by the Grothendieck vanishing theorem we get \(H^{i}_{\mathfrak{M}}(\mathcal{R}(I))=0\) for \(i>3\). Thus \((\beta)\) collapses at the second stage. In particular we have \(H^{r}_{\mathfrak{M}}(G,\mathcal{R}(I))=0\) for \(r=0,1,2\).
Next we consider the Ellingsrud-Skjelbred spectral sequence \((\alpha)\) with \(S=\mathcal{C}\) and \(M=\mathcal{R}(I)\).
(1) We have \(E^{0,0}_{2}=H^{0}_{\mathfrak{M}}(\mathcal{C})\). Furthermore it is clear that \(E^{0,0}_{2}=E^{0,0}_{\infty}\). As \(E^{0,0}_{\infty}\) is a sub-quotient of \(H^{0}_{\mathfrak{M}}(G,\mathcal{R}(I))=0\) we get \(H^{0}_{\mathfrak{M}}(\mathcal{C})=0\).
We also have \(E^{1,0}_{2}=H^{1}_{\mathfrak{M}}(\mathcal{C})\). Furthermore it is clear that \(E^{1,0}_{2}=E^{1,0}_{\infty}\). As \(E^{1,0}_{\infty}\) is a sub-quotient of \(H^{1}_{\mathfrak{M}}(G,\mathcal{R}(I))=0\) we get \(H^{1}_{\mathfrak{M}}(\mathcal{C})=0\).
(2) We have \(E^{0,1}_{2}=H^{0}_{\mathfrak{M}}(H^{1}(G,\mathcal{R}(I)))\) and \(E^{2,0}_{2}=H^{2}_{\mathfrak{M}}(\mathcal{C})\). Note we have an exact sequence
\[0\to E^{0,1}_{3}\to H^{0}_{\mathfrak{M}}(H^{1}(G,\mathcal{R}(I)))\to H^{2}_{\mathfrak{M}}(\mathcal{C})\to E^{2,0}_{3}\to 0.\]
Furthermore it is clear that \(E^{0,1}_{3}=E^{0,1}_{\infty}\) and \(E^{2,0}_{3}=E^{2,0}_{\infty}\). Furthermore \(E^{0,1}_{\infty}\) and \(E^{2,0}_{\infty}\) are sub-quotients of \(H^{1}_{\mathfrak{M}}(G,\mathcal{R}(I))\) and \(H^{2}_{\mathfrak{M}}(G,\mathcal{R}(I))\), which are zero. It follows that we have a graded isomorphism
\[H^{0}_{\mathfrak{M}}(H^{1}(G,\mathcal{R}(I)))\cong H^{2}_{\mathfrak{M}}(\mathcal{C}).\]
2(a) As \(H^{1}(G,\mathcal{R}(I))\) is a finitely generated \(\mathcal{C}\)-module we get that \(H^{2}_{\mathfrak{M}}(\mathcal{C})\) has finite length.
2(b) We note that we have a graded inclusion
\[H^{0}_{\mathfrak{M}}(H^{1}(G,\mathcal{R}(I)))\subseteq H^{0}_{\mathfrak{m}\mathcal{C}}(H^{1}(G,\mathcal{R}(I))).\]
Thus it suffices to prove \(H^{0}_{\mathfrak{m}\mathcal{C}}(H^{1}(G,\mathcal{R}(I)))_{0}=0\). Let \(\mathbb{F}\) be a graded free resolution of \(\mathcal{C}\) by finitely generated graded free \(\mathcal{C}[G]\)-modules with \(\mathbb{F}_{0}=\mathcal{C}[G]\). Then \(H^{1}(G,\mathcal{R}(I))\) is the first cohomology module of the complex \(\mathbb{W}=\operatorname{Hom}_{\mathcal{C}[G]}(\mathbb{F},\mathcal{R}(I))\). Let \(B^{1}\) and \(Z^{1}\) be the module of first co-boundaries and first co-cycles of \(\mathbb{W}\).
Note \(Z^{1}\) is a submodule of \(\operatorname{Hom}_{\mathcal{C}[G]}(\mathbb{F}_{1},\mathcal{R}(I))\) which in turn is a submodule of \(\operatorname{Hom}_{\mathcal{C}}(\mathbb{F}_{1},\mathcal{R}(I))\cong\mathcal{R}(I)^{s}\) for some \(s\). In particular we have \(H^{0}_{\mathfrak{m}\mathcal{C}}(Z^{1})=0\).
We also have a graded exact sequence \(0\to\mathcal{C}\to\mathcal{R}(I)\to B^{1}\to 0\). Note as \(\mathfrak{m}\) is an \(A\)-ideal (and as \(\mathcal{C}_{0}=A\) and \(\mathcal{R}(I)_{0}=B\)) we get an exact sequence
\[H^{1}_{\mathfrak{m}}(B)\to H^{1}_{\mathfrak{m}}(B^{1})_{0}\to H^{2}_{ \mathfrak{m}}(A)\to H^{2}_{\mathfrak{m}}(B).\]
Note as \(B\) is finite Cohen-Macaulay \(A\)-module of dimension \(2\) we get \(H^{1}_{\mathfrak{m}}(B)=0\). Also by 2.1(10) the map \(H^{2}_{\mathfrak{m}}(A)\to H^{2}_{\mathfrak{m}}(B)\) is an inclusion. Thus \(H^{1}_{\mathfrak{m}}(B^{1})_{0}=0\).
We have an exact sequence \(0\to B^{1}\to Z^{1}\to H^{1}(G,\mathcal{R}(I))\to 0\). Taking cohomology we get \(H^{0}_{\mathfrak{m}\mathcal{C}}(H^{1}(G,\mathcal{R}(I)))_{0}=0\), since \(H^{0}_{\mathfrak{m}\mathcal{C}}(Z^{1})=H^{1}_{\mathfrak{m}\mathcal{C}}(B^{1})_{ 0}=0\).
(3) We note that \(\mathcal{C}_{n}=I^{n}\cap A\). As \(B\) is a finite \(A\)-module we get that \(\mathfrak{m}B\) is \(\mathfrak{n}\)-primary. As \(I^{n}\) is \(\mathfrak{n}\)-primary we get that \(I^{n}\) will contain some power of \(\mathfrak{m}B\). As
\(B\) is a flat \(A\)-module, we get \(\mathfrak{m}^{s}B\cap A=\mathfrak{m}^{s}\) for all \(s\geq 1\). Thus \(\mathcal{C}_{n}\) are \(\mathfrak{m}\)-primary ideals of \(A\). As \(\mathcal{C}\) is Noetherian it follows that some Veronese \(\mathcal{C}^{<m>}\) is standard graded. It follows that \(\mathcal{C}^{<m>}=A[\mathcal{C}_{m}t]\).
Local cohomology commutes with the Veronese functor. By (1) and (2) it follows that \(H^{i}_{\mathfrak{M}^{<s>}}(\mathcal{C}^{<s>})=0\) for all \(s\geq s_{0}\) and \(i=0,1,2\). We take \(r=s_{0}m\). Then note that \(\mathcal{C}^{<r>}=A[\mathcal{C}_{r}t]\) is Cohen-Macaulay. Furthermore as discussed above \(\mathcal{C}_{r}\) is \(\mathfrak{m}\)-primary.
## 4. Proof of Theorem 1.2
In this section we give
Proof of Theorem 1.2.: Let \(T=A\otimes_{k}\overline{k}\). Let \(\mathfrak{n}\) be the maximal ideal of \(T\). We note that \(T\) is an excellent normal domain containing \(\overline{k}\cong T/\mathfrak{n}\) (see 2.3, 2.5 and 2.6(2)). By [8, 4.1] there exists a \(p_{g}\) ideal \(J\) in \(T\). By 2.3 we have \(T=\bigcup_{E\in\mathcal{C}_{k}}A^{E}\). So there exists \(F\in\mathcal{C}_{k}\) such that \(A^{F}\) contains a set of minimal generators of \(J\). We may further assume (by enlarging) that \(F\) is Galois over \(k\). Thus there exists an ideal \(W\) in \(A^{F}\) with \(WT=J\). By 2.6(3)(b) we get that \(W\) is a \(p_{g}\) ideal in \(A^{F}\). Let \(G\) be the Galois group of \(F\) over \(k\). Then \(G\) acts on \(A^{F}\) (via \(\sigma(a\otimes f)=a\otimes\sigma(f)\)). By 2.1(9) we get \((A^{F})^{G}=A\). We also note that we have a natural \(G\)-action on \(A^{F}[t]\) (fixing \(t\)) and clearly its invariant ring is \(A[t]\). Let \(\sigma\in G\). Its action on \(A^{F}[t]\) induces an isomorphism between the Rees algebras \(\mathcal{R}(W)\) and \(\mathcal{R}(\sigma(W))\). So \(\sigma(W)\) is a \(p_{g}\) ideal in \(A^{F}\). As the product of \(p_{g}\) ideals is \(p_{g}\) we get that \(K=\prod_{\sigma\in G}\sigma(W)\) is a \(p_{g}\) ideal in \(A^{F}\). Note \(K\) is \(G\)-invariant. So the \(G\)-action on \(A^{F}[t]\) restricts to a \(G\)-action on \(\mathcal{R}(K)\). Set \(\mathcal{C}=\mathcal{R}(K)^{G}\). We note that \(\mathcal{C}_{n}=K^{n}\cap A\) is an \(\mathfrak{m}\)-primary integrally closed ideal for all \(n\geq 1\). By Theorem 3.5 some Veronese subring \(\mathcal{C}^{<r>}=A[Jt]\) of \(\mathcal{C}\) (with \(J=\mathcal{C}_{r}\)) is Cohen-Macaulay. As \(J^{n}\) is integrally closed \(\mathfrak{m}\)-primary for all \(n\) we get that \(A[Jt]\) is a Cohen-Macaulay normal domain. Thus \(J\) is a \(p_{g}\) ideal in \(A\).
|
2308.12089 | Modified cosmology from quantum deformed entropy | In Ref. [S. Jalalzadeh, Phys. Lett. B 829 (2022) 137058], Jalalzadeh
established that the thermodynamical entropy of a quantum-deformed black hole
with horizon area $A$ can be written as $S_q=\pi\sin\left(\frac{A}{8G\mathcal
N} \right)/\sin\left(\frac{\pi}{2\mathcal N} \right)$, where $\mathcal
N=L_q^2/L_\text{P}^2$, $L_\text{P}$ being the Planck length and $L_q$ denoting,
generically, the q-deformed cosmic event horizon distance. Motivated by
this, we now extend the framework constructed in [S. Jalalzadeh, Phys. Lett. B
829 (2022) 137058] towards the Friedmann and Raychaudhuri equations describing
spatially homogeneous and isotropic universe dynamics. Our procedure in this
paper involves a twofold assumption. On the one hand, we take the entropy
associated with the apparent horizon of the Robertson-Walker universe in the
form of the aforementioned expression. On the other hand, we assume that the
unified first law of thermodynamics, $dE=TdS+WdV$, holds on the apparent
horizon. Subsequently, we find a novel modified cosmological scenario
characterized by quantum-deformed (q-deformed) Friedmann and Raychaudhuri
equations containing additional components that generate an effective dark
energy sector. Our results indicate an effective dark energy component, which
can explain the Universe's late-time acceleration. Moreover, the Universe
follows the standard thermal history, with a transition redshift from
deceleration to acceleration at $z_\text{tran}=0.5$. More precisely, according
to our model, at a redshift of $z = 0.377$, the effective dark energy dominates
with a de Sitter universe in the long run. We include the evolution of
luminosity distance, $\mu$, the Hubble parameter, $H(z)$, and the deceleration
parameter, $q(z)$, versus redshift. Finally, we have conducted a comparative
analysis of our proposed model with others involving non-extensive entropies. | S. Jalalzadeh, H. Moradpour, P. V. Moniz | 2023-08-23T12:20:41Z | http://arxiv.org/abs/2308.12089v1 | # Modified cosmology from quantum deformed entropy
###### Abstract
In Ref. [1], Jalalzadeh established that the thermodynamical entropy of a quantum-deformed black hole with horizon area \(A\) can be written as \(S_{q}=\pi\sin\left(\frac{A}{8G\mathcal{N}}\right)/\sin\left(\frac{\pi}{2\mathcal{N}}\right)\), where \(\mathcal{N}=L_{q}^{2}/L_{\rm P}^{2}\), \(L_{\rm P}\) being the Planck length and \(L_{q}\) denoting, generically, the q-deformed cosmic event horizon distance. Motivated by this, we now extend the framework constructed in [1] towards the Friedmann and Raychaudhuri equations describing spatially homogeneous and isotropic universe dynamics. Our procedure in this paper involves a twofold assumption. On the one hand, we take the entropy associated with the apparent horizon of the Robertson-Walker universe in the form of the aforementioned expression. On the other hand, we assume that the unified first law of thermodynamics, \(dE=TdS+WdV\), holds on the apparent horizon. Subsequently, we find a novel modified cosmological scenario characterized by quantum-deformed (q-deformed) Friedmann and Raychaudhuri equations containing additional components that generate an effective dark energy sector. Our results indicate an effective dark energy component, which can explain the Universe's late-time acceleration. Moreover, the Universe follows the standard thermal history, with a transition redshift from deceleration to acceleration at \(z_{\rm tran}=0.5\). More precisely, according to our model, at a redshift of \(z=0.377\), the effective dark energy dominates with a de Sitter universe in the long run. We include the evolution of luminosity distance, \(\mu\), the Hubble parameter, \(H(z)\), and the deceleration parameter, \(q(z)\), versus redshift. Finally, we have conducted a comparative analysis of our proposed model with others involving non-extensive entropies.
Footnote †: journal: Physics of the Dark Universe
## 1 Introduction
Entropy is a fundamental quantity and a far-reaching concept. It is desirable that a generalized definition of entropy apply to all physical systems; otherwise, it may mean that we do not grasp what physical entropy is. In fact, as quantum mechanics, quantum field theory, and quantum gravity have developed, entropy has not been cast as a universal property or a generic form, but instead, it has been presented within many contexts, depending on the particular physical system under investigation. For instance, one of the key findings in theoretical physics is that a black hole (BH) is associated with blackbody radiation, which has a distinct temperature and Bekenstein-Hawking entropy [2; 3]. Unlike classical thermodynamics, where the entropy is directly related to the system's volume and represents an extensive quantity, the Bekenstein-Hawking entropy is proportional to the horizon's area.
Interestingly enough, it has also been shown that the field equations of general relativity can be obtained as a thermodynamic equation of state [4], a result also confirmed in modified gravity [5]. Parallel to this achievement signalling a deep connection between gravity and thermodynamics and hence statistical mechanics, it has also been addressed that quantum aspects of gravity may trigger gravity to show intrinsic non-extensive features [6; 7]. Indeed, the long-range nature of gravity is another similar reason to study cosmology in the framework of _generalized statistics_ and corresponding thermodynamics [8; 9; 10; 11; 12; 13]. Employing the statistics mentioned earlier may also provide an origin for dark matter and MOND theory [14; 15]. The applications are not limited to these areas, and it seems there is hope to solve problems of BH physics, astrophysics, and high-energy physics using the generalized statistics framework [16; 17; 18; 19; 20; 21; 22].
Recent literature has proposed several definitions of entropy based on non-additive statistics, such as the Tsallis [23; 24], Renyi [25], and Barrow [26] entropies, in addition to the Bekenstein-Hawking one, due to the elusive nature of entropy. Moreover, the authors of Refs. [27; 28] proposed a fractional-fractal entropy that could encode the random fractal features of a BH horizon surface that resulted from fractional quantum gravity effects. The Sharma-Mittal entropy [10], the Kaniadakis entropy [29; 30; 31], the entropy in the setting of Loop Quantum Gravity [32; 33], the entropy in quantum cosmology [34; 35; 36; 37] and the entropy in fractional quantum gravity-cosmology [27; 28; 38] constitute some more well-known entropies.
Furthermore, a novel entropy formula for a BH was recently proposed by one of us [1], based on the quantum deformation (or q-deformation) approach to quantum gravity. Very briefly, let us explain this entropy.
According to various quantum gravity proposals, the BH horizon area, \(A_{n}\), may be quantized, and the appropriate eigenvalues
are given by [39]
\[A_{n}=\gamma L_{\rm P}^{2}n,\quad n=1,2,3,...\,, \tag{1}\]
where \(\gamma\) is a model-dependent dimensionless constant of order one, and \(L_{\rm P}=\sqrt{G}\) is the Planck length. Since Bekenstein's groundbreaking work, the literature has been enriched with a multitude of contributions that reinforce the conjecture of the area spectrum (1). These contributions are diverse and multifaceted, encompassing considerations such as information theory (as outlined in [40; 41]), arguments from string theory [42], or the periodicity of time [37; 43; 44]. Furthermore, contributions derived from the quantization of a dust collapse using Hamiltonian methods [45; 46] can also be mentioned. However, a BH should be considered as physically embedded in the Universe, which is characterized by its own event horizon of area \(A_{\rm U}\). As a result, the Schwarzschild event horizon of a BH cannot be larger than the cosmic event horizon of the Universe, \(A_{\rm U}\). Any quantization method with a spectrum given by Eq. (1) must consistently accommodate this requirement: the spectrum has to be adjusted so that it is bounded from above. Quantum deformation of a model (using quantum groups) is one such method, allowing the dimension of the Hilbert space to be rendered finite, provided that the deformation parameter is a root of unity [47; 48; 49; 50; 51; 52; 53].
In this work, we propose to extend the quantum-deformed (q-deformed) entropy idea, applying the first law of thermodynamics to the Universe's horizon, adapting to a cosmological scenario the thermodynamical entropy of a quantum-deformed BH with horizon area \(A\) presented in [1]. This allows us to construct a q-deformed modified version of the Friedmann and Raychaudhuri equations. In particular, there will be newly added components whose dynamical implications motivate our investigation. To make clear our reasoning, let us add the following. The quantum-deformed black hole is a model constructed from the quantum Heisenberg-Weyl group. Quantum groups provide us with more complex symmetries than the classical Lie algebras, which are included in the former as a specific case. This indicates that quantum groups may be appropriate for describing symmetries of physical systems that lie beyond the scope of Lie algebras. Moreover, q-deformed models have a significant advantage because their corresponding Hilbert space is finite-dimensional when \(q\) is a root of unity [54]. This implies that using quantum groups with a deformation parameter of the root of unity is useful for constructing models with a finite number of states. These models can be used to explore applications in quantum gravity and quantum cosmology that adhere to the holographic principle and UV/IR mixing to solve the CC problem [36].
Last but not least, within modified gravity settings, it may be deemed necessary to elucidate the principal rationale behind the development of alternative models for the standard model of cosmology despite its achievement and conformity with observational data. The standard model of cosmology, though successful in its own right, is plagued by a fundamental and unresolved difficulty known as the cosmological constant (CC) problem. This enigma has long baffled cosmologists and remains a significant obstacle in advancing the field.
Numerous theoretical physicists expressed their reluctance to acknowledge the CC as a feasible justification for the accelerated expansion of the universe due to the fact that the anticipated value of CC from particle physics is \(\rho_{\Lambda}\simeq M_{\rm P}^{4}\simeq(10^{18}\ {\rm GeV})^{4}\), a value that differs significantly from the astronomical limit for CC, which is \(\rho_{\Lambda}\simeq(10^{-3}\ {\rm eV})^{4}\), roughly \(10^{123}\) times less than expected. The CC is regarded as the zero-point energy with a UV cutoff scale, such as the Planck scale or the supersymmetry breaking scale, from an effective field theory (EFT) perspective, whereas from a cosmological perspective, it is an IR scale problem that affects the entire universe's large-scale structure. As a result, the CC issue appears to contravene our preconceived notion of separating UV and IR scales, which is the foundation of EFT. The CC can be interpreted as both the zero point energy and the scale of the observed Universe, which contradicts the concept of local quantum fields. This suggests a mixing between the local UV and global IR physics. Some physicists argue that the CC problem is essentially a quantum gravity and quantum cosmology problem [36; 55; 56; 57]. Therefore, a candidate theory for quantum gravity must provide a classical continuum spacetime geometry at macroscopic scales with a global IR cutoff (CC) while also incorporating quantum corrections at the local UV scale. This approach would allow for a better understanding of the complex nature of the CC problem. It is commonly assumed that there exists a solution to the problematic "old" CC problem. This solution would result in the vacuum energy being exactly zero and radiatively stable.
Under these circumstances, it is evident that any suggested substitute for the standard model of cosmology must avoid the CC problem. For example, consider a replacement model with several free parameters. By adjusting the free parameters of the model, we may achieve an even more precise fit than the standard model of cosmology. However, the issue arises when we attempt to explain these free parameters' origin and physics in our model, which will involve a comparable problem. In section 5 of the article, we will provide an example to revisit this case.
Our paper structure is as follows. In the next section, we summarize with minimal detail q-deformed BH entropy. Section 3 examines the applicability of the previously mentioned procedure in cosmology. Consequently, this enables us to advance a new modified scenario built from the q-deformed entropy. Section 4 examines the cosmological consequences of the additional components in the q-deformed Friedmann and Raychaudhuri equations, concentrating on the dark energy density and equation-of-state (EoS) parameters. In section 5, a comparative analysis has been conducted between our proposed model and others previously established in the field of entropic cosmology, with a particular emphasis on examining the similarities and differences between ours and the Barrow model in [58]. Finally, in Section 6, we will examine our findings.
## 2 Entropy of a q-deformed Schwarzschild BH
Let us briefly go through a few significant results about the q-deformation quantization and entropy of the Schwarzschild
BH with mass \(M\). Following Louko in Ref. [59], we start with the reduced action of a Schwarzschild BH given by
\[S=\int\Big{\{}P_{M}\dot{M}-H(M)\Big{\}}dt, \tag{2}\]
where \(H(M):=M\) is the reduced Hamiltonian, and \(P_{M}\) is the canonical conjugate momentum of the BH mass \(M\). The solutions of the field equations are \(M=\text{const.}\) and \(P_{M}=-t\). The constancy of mass \(M\) follows from Birkhoff's theorem, which states that the mass is the only time-independent and coordinate-invariant solution. Furthermore, \(P_{M}\) represents the asymptotic time coordinate at the spacelike slice [60]. Thus, \(M\) contains all the pertinent information regarding the local geometry of the classical solutions. On the other hand, the conjugate momentum, \(P_{M}\), is equivalent to the disparity of the asymptotic Killing times between the left and right infinities on a constant \(t\) hypersurface [59], following the convention where the Killing time at the right (left) infinity increases towards the future (past). As a result, it does not contain any information regarding the local geometry. However, it instead contains information regarding securing the spacelike hypersurfaces at the two infinities [59].
Moreover, it is customary to restrict the mass manually and the corresponding momenta to the range \(|P_{M}|<\pi M/M_{\text{P}}^{2}\), where \(M_{\text{P}}=1/\sqrt{G}\) is the Planck mass. This implies that the Minkowski time on the asymptotic right-hand side of each classical solution only lies within an interval of length \(2\pi M/M_{\text{P}}^{2}\), centered around a value that is diagonally opposite to the non-evolving left end of the hypersurfaces in the Kruskal diagram. With respect to the time parameter \(t\), each classical solution is exclusively defined for an interval of \(-\pi M/M_{\text{P}}^{2}<t-t_{0}<\pi M/M_{\text{P}}^{2}\), where \(t=t_{0}\) represents the hypersurface whose two asymptotic ends are diagonally opposite. As previously stated, in light of Euclidean quantum gravity [61] (where the situation is similar to finite temperature quantum field theory, when the time is Euclideanized [62]), we posit that the conjugate momentum, \(P_{M}\), serves as a temporal measure and, therefore, must exhibit periodicity with a period inverse to the Hawking temperature \(T_{\text{H}}=M_{\text{P}}^{2}/8\pi M\)[63; 64]. This guarantees the absence of a conical singularity in the two-dimensional Euclidean section close to the black hole horizon. However, it is important to note that the aforementioned identification results in a physical phase space that constitutes a wedge extracted from the complete \((M,P_{M})\) plane. The \(M\) axis and the \(P_{M}=1/T_{\text{H}}\) line bound this wedge. Therefore,
\[P_{M}\sim P_{M}+\frac{1}{T_{\text{H}}}. \tag{3}\]
The above boundary condition verifies that there is no conical singularity in the \(2D\) Euclidean section.
Also, it indicates that the phase space is a wedge cut out from the full \((M,P_{M})\) phase space, bounded by the mass axis and the line \(P_{M}=1/T_{\text{H}}\)[65]. Hence, according to the references [59; 66], one could make the following canonical transformation \((M,P_{M})\rightarrow(x,p)\), which simultaneously opens up the phase space and also incorporates the periodicity condition (3)
\[x=\sqrt{\frac{A}{4\pi G}}\cos(2\pi P_{M}T_{\text{H}}),\ \ p=\sqrt{\frac{A}{4\pi G }}\sin(2\pi P_{M}T_{\text{H}}), \tag{4}\]
where \(A=16\pi M^{2}/M_{\text{P}}^{4}\) is the BH horizon area. From the above canonical transformations, one immediately finds the horizon area in terms of \((x,p)\)
\[A=4\pi L_{\text{P}}^{2}\Big{(}x^{2}+p^{2}\Big{)}, \tag{5}\]
where \(L_{\text{P}}=1/M_{\text{P}}\) is the Planck length.
Let us define the ladder operators, \(\{a_{-},a_{+}\}\), by
\[a_{\pm}=\frac{1}{\sqrt{2}}\Big{(}\mp\frac{d}{dx}+x\Big{)}. \tag{6}\]
The pairs of operators \(a_{\pm}\) act on states as the following form
\[a_{+}|n\rangle=\sqrt{n+1}|n+1\rangle,\quad a_{-}|n\rangle=\sqrt{n}|n-1\rangle. \tag{7}\]
This gives us the possibility to rewrite the area operator (5) of the BH in terms of ladder operators
\[A=4\pi L_{\text{P}}^{2}\Big{(}a_{+}a_{-}+a_{-}a_{+}\Big{)}. \tag{8}\]
Therefore, the area of the event horizon and the mass spectrum [59] are
\[A_{n}=8\pi L_{\text{P}}^{2}\Big{(}n+\frac{1}{2}\Big{)},\quad M_{n}=\frac{M_{ \text{P}}}{\sqrt{2}}\sqrt{n+\frac{1}{2}}, \tag{9}\]
where \(n\) is an integer. Expressions (9) give the well-known result: Hawking radiation takes place when the BH jumps from a higher state \(n+1\) to a lower state \(n\), in which the difference in quanta is radiated away. Also, they show that the BH does not evaporate completely, but a Planck-size remnant is left over at the end of the evaporation process.
In 1974, Hawking [67] showed that due to quantum fluctuations, BHs emit blackbody radiation, and the corresponding entropy is one-fourth of the event horizon area, namely \(A=16\pi G^{2}M^{2}\). Following Refs. [68] and [69], let us assume that Hawking radiation of a massive BH, i.e., \(M\gg M_{\text{P}}\) and \(n\gg 1\), is emitted when the BH system spontaneously jumps from the state \(n+1\) into the closest state level, i.e., \(n\), as described by (9). If we denote the frequency of emitted radiation as \(\omega_{0}\), then
\[\omega_{0}=M_{n+1}-M_{n}\simeq\frac{M_{\text{P}}}{2\sqrt{2n}}\simeq\frac{M_{ \text{P}}^{2}}{4M}. \tag{10}\]
This agrees with the classical BH oscillation frequencies, which scale as \(1/M\). We thus expect a BH to radiate with a characteristic temperature \(T\propto M_{\rm P}^{2}/M\), matching the Hawking temperature.
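The estimate (10) can be checked directly. The following minimal numerical sketch works in Planck units (\(M_{\rm P}=1\)) and uses an illustrative value of \(n\); it compares the exact level spacing of the spectrum (9) with the approximation \(\omega_{0}\simeq M_{\rm P}^{2}/4M\).

```python
import numpy as np

# Planck units, M_P = 1.  Mass spectrum of Eq. (9): M_n = sqrt((n + 1/2)/2).
def mass(n):
    return np.sqrt((n + 0.5) / 2.0)

n = 10**6                                 # a macroscopic black hole, n >> 1
omega0_exact = mass(n + 1) - mass(n)      # exact spacing between adjacent levels
omega0_estimate = 1.0 / (4.0 * mass(n))   # the approximation of Eq. (10)
print(omega0_exact, omega0_estimate)      # agree to about one part in 10^6
```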
The BH entropy can be expressed in terms of the following adiabatic invariant
\[S_{\text{BH}}=8\pi\int_{M_{\text{P}}}^{M}\frac{dM}{\omega_{0}}=\frac{A}{4G}=4 \pi M^{2}G, \tag{11}\]
where \(A=4\pi R_{\text{S}}^{2}\) is the BH horizon area, and \(R_{\text{S}}=2MG\) is the Schwarzschild radius. Because this spectrum is equally spaced, the possible values for the area of a massive BH are equally spaced.
One can obtain the quantum deformed (q-deformed) extension of the BH horizon area (8) by replacing the ordinary ladder operators (6) with their q-extended counterparts, for which the Heisenberg-Weyl algebra turns into the quantum Heisenberg-Weyl algebra, \(U_{q}(h_{4})\). The quantum Heisenberg-Weyl algebra is the associative unital \(\mathbb{C}(q)\)-algebra [70] with generators \(\{a_{+},a_{-},q^{\pm N/2}\}\) satisfying the following q-deformed commutation relations [70]
\[\begin{split}& a_{-}a_{+}-q^{\frac{1}{2}}a_{+}a_{-}=q^{\frac{N}{2}},\ \ \ [N,a_{\pm}]=\pm a_{\pm},\\ & a_{\pm}^{\dagger}=a_{\mp},\ \ N^{\dagger}=N,\end{split} \tag{12}\]
where \(q\) is a primitive root of unity and is given by
\[q=\exp\left(\frac{2\pi i}{\mathcal{N}}\right), \tag{13}\]
where \(\mathcal{N}\) is the q-deformation parameter. The q-deformation is thought to be related to a fundamental dimensional constant. Furthermore, the deformation parameter, \(q\), must be a dimensionless function of such constant as well as of any specific system-characterizing features. The natural length scale of quantum gravity is the Planck length. Thus, the q-deformation parameter should be a function of the gravitational constant, \(G\), or the Planck length squared [71]\(\mathcal{N}=\mathcal{N}(L_{\rm P}^{2})\). In addition, one may expect that at the classical gravity limit, \(L_{\rm P}\to 0\), \(\mathcal{N}\) tends to infinity, and the theory turns into a classical gravity. This means that the q-deformation is a pure quantum gravity effect, and \(\mathcal{N}\propto 1/L_{\rm P}^{2}\). Regarding \(\mathcal{N}\) being a dimensionless parameter, we need another length scale in which the ratio of the Planck length and the new length gives us the q-deformation parameter, i.e., \(\mathcal{N}=L_{q}^{2}/L_{\rm P}^{2}\). In section 4, we will show that this assumption leads us to an asymptotically de Sitter cosmological model, where \(L_{q}\) plays the role of de Sitter radius. Moreover, we further take this new length scale as related to the "cosmological" constant, as consistently induced by the q-deformation of the model. Hence, we consider \(L_{q}\equiv\sqrt{3/\Lambda_{q}}\), where \(\Lambda_{q}\) is q-cosmological constant, and \(\mathcal{N}=L_{q}^{2}/L_{\rm P}^{2}\) which leads to
\[q=\exp\left(2\pi i\frac{L_{\rm P}^{2}}{L_{q}^{2}}\right). \tag{14}\]
Note that in our approach, \(L_{q}\), similar to the Planck length, is a fundamental dimensional constant of quantum gravity theory 1. One can obtain the classical gravity limit of the theory by \(L_{q}\rightarrow\infty\) or a vanishing q-cosmological constant. The above form of the deformation parameter realizes a holographic picture of quantum mechanics [78]. Concretely, as shown in [78], the Hilbert space of q-deformed neutral hydrogen gas in de Sitter space satisfies the strong holographic bound with the above deformation parameter.
Footnote 1: The CC, like the gravitational constant, is also used as a coupling constant in loop quantum gravity (LQG) and spinfoam frameworks [72; 73; 74; 75; 76]. In LQG, a q-deformation has been derived as a mechanism to apply the theory’s dynamics using \(\Lambda\) and the deformation parameter, \(q\), which is then given by (14) [77].
The above quantum deformation of the BH gives us the following eigenvalues for the surface area and the mass of the BH [1]
\[A_{n}=4\pi L_{\rm P}^{2}\frac{\sin\left(\frac{\pi}{\mathcal{N}}(n+\frac{1}{2}) \right)}{\sin\left(\frac{\pi}{2\mathcal{N}}\right)}, \tag{15}\]
\[M_{n}=\frac{M_{\rm P}}{2}\sqrt{\frac{\sin\left(\frac{\pi}{\mathcal{N}}(n+\frac {1}{2})\right)}{\sin\left(\frac{\pi}{2\mathcal{N}}\right)}}, \tag{16}\]
where \(n=0,...,\mathcal{N}-1\). Note that for \(\mathcal{N}\rightarrow\infty\) (or equivalently, \(\Lambda_{q}\to 0\)) the earlier eigenvalues will reduce to (9). Also, there is a two-fold degeneracy in the horizon area eigenvalues and, consequently, in the BH's mass spectrum.
To summarize the consequences of the above eigenvalue relations, for simplicity, let us consider that \(\mathcal{N}\) is an odd natural number:
1) The area and the mass of the ground state \(n=0\), as well as the state \(n=\mathcal{N}-1\), are [1]
\[A_{0}=A_{\mathcal{N}-1}=4\pi L_{\rm P}^{2},\ M_{0}=M_{\mathcal{N}-1}=\frac{M_{ \rm P}}{2}. \tag{17}\]
These show that the ground state's area and mass are not deformed, and their values are the same as the non-deformed spectrum obtained in (9). Besides, the spectrum is bounded, and the most excited state, \(n=\mathcal{N}-1\), has the same mass and area as the ground state. In addition, for \(n\ll(\mathcal{N}-1)/2\) or \((\mathcal{N}-1)/2\ll n\leq\mathcal{N}-1\), Eqs. (15) and (16) will reduce to the non-deformed spectrum obtained in (9).
2) The above surface area spectrum of the BH leads us to the following q-deformed entropy [1]
\[S_{q}=\pi\frac{\sin\left(\frac{\pi}{\mathcal{N}}(n+\frac{1}{2})\right)}{\sin \left(\frac{\pi}{2\mathcal{N}}\right)}=\pi\frac{\sin\left(\frac{\pi R_{S}^{2}} {2G\mathcal{N}}\right)}{\sin\left(\frac{\pi}{2\mathcal{N}}\right)},\ \ \ \ \frac{\pi R_{S}^{2}}{2G \mathcal{N}}\leq\frac{\pi}{2}, \tag{18}\]
where \(R_{S}\) is the Schwarzschild radius of the BH.
Keeping one eye on Eq. (6), the commutation relation of the q-deformed \(x\) and \(p\) is
\[[x,p]|n\rangle=i\frac{\cos\left(\frac{\pi G\Lambda_{q}}{3}(n+\frac{1}{2}) \right)}{\cos\left(\frac{\pi G\Lambda_{q}}{6}\right)}|n\rangle. \tag{19}\]
According to Eq. (18), the q-entropy of a BH reaches its greatest value for \(n=(\mathcal{N}-1)/2\). In this case, regarding the second equality in Eq. (18), the Schwarzschild horizon radius is \(R_{S}^{2}=G\mathcal{N}=L_{q}^{2}\). On the other hand, Eq. (19) shows that for \(n=(\mathcal{N}-1)/2\), (or when the BH radius equals the de Sitter radius) \(x\) and \(p\) commute. As a result, the classical state is the one with the most entropy or maximum radius. Note that the effective Schwarzschild horizon radius is different from \(R_{S}\) and is given by
\[R_{\rm eff}=L_{\rm P}\sqrt{\frac{\sin\left(\frac{\pi R_{S}^{2}}{2\mathcal{N}L_{ \rm P}^{2}}\right)}{\sin\left(\frac{\pi}{2\mathcal{N}}\right)}}. \tag{20}\]
## 3 q-deformed Friedmann and Raychaudhuri equations
Our starting point is the Friedmann-Lemaitre-Robertson-Walker (FLRW) line element in comoving coordinates
\[ds^{2}=h_{ab}dx^{a}dx^{b}+R^{2}d\Omega_{(2)}^{2},\ \ \ a=0,1, \tag{21}\]
where \(d\Omega_{(2)}^{2}\) is the line element of the standard 2-sphere, \(R(r,t)=ra(t)\) is the areal radius, and \(h_{ab}=\text{diag}(-1,\frac{a(t)^{2}}{1-kr^{2}})\). Moreover, \(x^{0}=t,x^{1}=r\), and as usual, the open, flat, and closed universes correspond to \(k=-1,0,1\), respectively.
Several authors [79; 80; 81; 82; 83; 84] have extended the thermodynamic formulae of the de Sitter event (and apparent) horizon to the non-static apparent horizon of a generic FLRW space, which differs from the event horizon. For the sake of clarity, let us emphasize that it is widely proposed that, in dynamical spacetimes, the apparent horizon represents a causal horizon associated with gravitational temperature and entropy (and surface gravity). If this is correct, the same might be said about cosmic horizons. It was suggested in Refs. [85; 86; 80; 82] that the FLRW thermodynamical event horizon needs to be better defined (except for the case of de Sitter space). The FLRW apparent horizon's Hawking radiation was calculated by the authors of [87; 88]. It was rederived in references [89; 90] by applying the Hamilton-Jacobi method [91; 92; 93] to the Parikh-Wilczek approach, which was initially developed for BH horizons [94].
A substantial amount of research has been conducted on the bridging of geometry and thermodynamics within FLRW spaces (see, for example, [95; 96] and references therein). The thermodynamical properties of the FLRW apparent horizon are described in Ref. [97]. Allow us to point to the Kodama vector, Kodama-Hayward surface gravity, and corresponding Hawking temperature calculations. The Kodama-Hayward temperature of the FLRW apparent horizon is given by
\[T_{\text{AH}}=\frac{\kappa_{\text{kodama}}}{2\pi}=-\frac{1}{2\pi R_{\text{AH} }}\left(1-\frac{\dot{R}_{\text{AH}}}{2HR_{\text{AH}}}\right), \tag{22}\]
where \(H=\dot{a}/a\) is the Hubble parameter, \(\kappa_{\text{Kodama}}\) is the Kodama surface gravity of the apparent horizon, and \(R_{\text{AH}}\) is the radius of the apparent horizon given by
\[R_{\text{AH}}=\frac{1}{\sqrt{H^{2}+\frac{k}{a^{2}}}}. \tag{23}\]
Let us assume that the FLRW universe is sourced by a perfect fluid with energy-momentum tensor
\[T_{\mu\nu}=(\rho+p)u_{\mu}u_{\nu}+g_{\mu\nu}p, \tag{24}\]
where \(\rho\), \(p\), and \(u_{\mu}\) are the fluid's total energy density, pressure, and 4-velocity field, respectively. The perfect fluid could be a combination of non-interacting dust (cold dark and baryonic matters), \(\rho_{\text{c}}\), and radiation, \(\rho_{\text{rad}}\)2
Footnote 2: We use a subscript \(x\) as one of “c” for the dust of baryons plus dark matter, “rad” for radiation (photons plus relativistic neutrinos), “m” for baryons plus dark matter plus radiation, and “DE” for the effective dark energy.
\[\begin{split}\rho&=\rho_{\text{c}}+\rho_{\text{rad} },\\ p&=p_{\text{rad}},\ \ \ \ p_{\text{c}}=0,\end{split} \tag{25}\]
Regarding these considerations, the unified first law of thermodynamics on the apparent horizon [83; 84] is given by
\[T_{\text{AH}}\dot{S}_{\text{AH}}=\dot{M}_{\text{AH}}+\frac{p-\rho}{2}\dot{V}_ {\text{AH}}, \tag{26}\]
where an overdot means time derivative, \(S_{\text{AH}}\) is the entropy of the apparent horizon, and
\[V_{\text{AH}}=\frac{4\pi}{3}R_{\text{AH}}^{3},\ \ \ M_{\text{AH}}=\rho V_{ \text{AH}}, \tag{27}\]
are the areal volume and the Misner-Sharp-Hernandez mass contained inside the apparent horizon, respectively.
Inserting the Kodama-Hayward temperature (22), the areal volume, and the Misner-Sharp-Hernandez mass defined in (27) into the unified first law of thermodynamics (26) gives us
\[\dot{S}_{\text{AH}}=8\pi^{2}HR_{\text{AH}}^{4}(\rho+p). \tag{28}\]
If we postulate that the covariant conservation equation is satisfied by the energy-momentum tensor of the universe's perfect fluid composition (24), then we find
\[H\left(\rho_{i}+p_{i}\right)=-\frac{\dot{\rho}_{i}}{3},\ \ \ i=\text{c},\text{rad}. \tag{29}\]
Inserting this relation in the r.h.s of Eq. (28) simplifies it into
\[\frac{1}{8\pi^{2}R_{\text{AH}}^{4}}\dot{S}_{\text{AH}}=-\frac{1}{3}\dot{\rho}. \tag{30}\]
Now, our assumption is to take the entropy associated with the apparent horizon in the form of q-deformed entropy (18) and replace the BH horizon radius, \(R_{S}\), with the apparent horizon radius \(R_{\text{AH}}\). This gives us the q-deformed entropy of the apparent horizon
\[S_{q}=\pi\frac{\sin\left(\frac{\gamma R_{\text{AH}}^{2}}{G}\right)}{\sin(\gamma )},\ \ \ \ 0\leq\frac{\gamma R_{\text{AH}}^{2}}{G}\leq\frac{\pi}{2}, \tag{31}\]
where \(\gamma=\pi/(2\mathcal{N})=\frac{\pi}{2}\left(\frac{L_{\rm P}}{L_{q}}\right)^{2}=\frac{\pi}{6}G\Lambda_{q}\). Let us first discuss some of the outcomes of this expression for the entropy before delving further.
1. Eq. (31) is non-classical by definition. According to the noncommutative perspective of quantization [1; 98; 99; 100; 101], \(\hbar\) and \(\Lambda_{q}\) are both quantization parameters, and the classical limit of the model can be realized by establishing the \(\hbar\to 0\) limit. On the other hand, \(\Lambda_{q}\to 0\) gives us quantum gravity without a q-cosmological constant.
2. Our cosmological model did not account for a traditional CC, but our subsequently implemented quantum deformation led to the emergence of an effective one, strongly linked to the natural number \(\mathcal{N}\) defined by (13). This constant directly results from the assumption of a finite number of states in Hilbert spaces [36].
3. Regarding our discussion on the entropy of q-deformed BHs in the previous section, the interval \(0\leq\frac{\gamma R_{\rm AH}^{2}}{G}\leq\pi\) is divided into \(0\leq\frac{\gamma R_{\rm AH}^{2}}{G}\leq\frac{\pi}{2}\) and \(\frac{\pi}{2}\leq\frac{\gamma R_{\rm AH}^{2}}{G}\leq\pi\). While the first interval, \(0\leq\frac{\gamma R_{\rm AH}^{2}}{G}\leq\frac{\pi}{2}\), represents an expanding universe, the second interval realizes a contracting universe. These universes coexist simultaneously at the q-deformed quantum cosmology level and have an entangled quantum state [1]. Upon observation, the state collapses into an expanding or a contracting universe. Here, in accordance with the cosmological observations, we assume the state has collapsed into the expanding universe.
4. The apparent horizon radius, \(R_{\rm AH}\), is quantized according to Eq. (18). As a result, \(R_{\rm AH}=L_{\rm P}\) gives the smallest value of the apparent horizon radius, whereas \(L_{q}\) is the largest scale allowed, both within our knowledge of physics models and to ensure a finite Hilbert space. However, because we are interested in the late-time evolution of the universe, for simplicity we assumed the apparent horizon radius was zero at the Big Bang.
Inserting the q-deformed entropy (31) for FLRW spaces into (30) gives us
\[\cos\Bigl{(}\frac{\gamma}{G}R_{\rm AH}^{2}\Bigr{)}\frac{\dot{R}_{\rm AH}}{R_ {\rm AH}^{3}}=-\frac{4\pi G}{3}\frac{\sin(\gamma)}{\gamma}\dot{\rho}. \tag{32}\]
Integration of the above differential equation yields
\[\frac{\cos\bigl{(}\frac{\gamma}{G}R_{\rm AH}^{2}\bigr{)}}{R_{\rm AH }^{2}}+\frac{\gamma}{G}\left\{\mathrm{Si}\left(\frac{\gamma}{G}R_{\rm AH}^{2} \right)-\mathrm{Si}\left(\frac{\pi}{2}\right)\right\}=\\ \frac{\sin(\gamma)}{\gamma}\frac{8\pi G}{3}\rho, \tag{33}\]
where
\[\mathrm{Si}(x)=\int_{0}^{x}\frac{\sin(y)}{y}dy, \tag{34}\]
is the integral sine function. Note that, regarding \(0\leq\frac{\gamma}{G}R_{\rm AH}^{2}\leq\pi/2\), the constant of integration is chosen in such a way that the above equation becomes trivial for \(\frac{\gamma}{G}R_{\rm AH}^{2}=\frac{\pi}{2}\). This means that, as expected, the FLRW space is asymptotically de Sitter, i.e., \(1/R_{\rm AH}^{2}=\frac{\Lambda_{q}}{3}\).
One can simplify Eq. (33) by considering the value of \(\gamma=\frac{\pi}{2}\left(\frac{L_{\rm P}}{L_{q}}\right)^{2}\simeq 10^{-123}\). This shows that \(\sin(\gamma)/\gamma\simeq 1\). Hence, (33) reduces to
\[\frac{\cos\bigl{(}\frac{\pi\Lambda_{q}}{6}R_{\rm AH}^{2}\bigr{)}}{R _{\rm AH}^{2}} =\frac{8\pi G}{3}\rho\] \[+\frac{\pi\Lambda_{q}}{6}\left\{\mathrm{Si}\left(\frac{\pi}{2} \right)-\mathrm{Si}\left(\frac{\pi\Lambda_{q}}{6}R_{\rm AH}^{2}\right)\right\}. \tag{35}\]
Equation (35) is the q-deformed Friedmann equation based on the q-deformed entropy. Therefore, assuming the unified first law of thermodynamics applied at the apparent horizon of an FLRW universe, plus supposing that the apparent horizon area includes quantum deformed characteristics, we derive the corresponding modified Friedmann equation of an FLRW universe with any spatial curvature.
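Since Eq. (35) is a transcendental relation between the apparent horizon radius and the energy density, it may be useful to sketch how it can be inverted numerically. The snippet below uses SciPy's sine integral and a simple bracketing root finder; the unit choice \(G=1\) and the value \(\Lambda_{q}=3\) (so that \(L_{q}=1\)) are purely illustrative.

```python
import numpy as np
from scipy.special import sici
from scipy.optimize import brentq

G = 1.0          # illustrative units
Lambda_q = 3.0   # q-cosmological constant; de Sitter radius L_q = sqrt(3/Lambda_q) = 1

def Si(x):
    return sici(x)[0]

def apparent_horizon_radius(rho):
    """Invert the q-deformed Friedmann equation (35) for R_AH at a given density rho.

    Written in terms of u = (pi*Lambda_q/6) * R_AH**2, which Eq. (35) confines to (0, pi/2].
    """
    def f(u):
        R2 = 6.0 * u / (np.pi * Lambda_q)
        return (np.cos(u) / R2
                - 8.0 * np.pi * G / 3.0 * rho
                - np.pi * Lambda_q / 6.0 * (Si(np.pi / 2) - Si(u)))
    u = brentq(f, 1e-12, np.pi / 2)
    return np.sqrt(6.0 * u / (np.pi * Lambda_q))

print(apparent_horizon_radius(0.01))  # ~0.86: below L_q; R_AH -> L_q = 1 only as rho -> 0
print(apparent_horizon_radius(10.0))  # ~0.11: matter dominated, close to the GR value
```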
The Raychaudhuri equation, often known as the second Friedmann equation, may be obtained directly using Eq. (28) and the Friedmann equation (35). Inserting the q-entropy (31) into (28) and using
\[\dot{R}_{\rm AH}=-R_{\rm AH}^{3}\left(\frac{\ddot{a}}{a}-\frac{1}{R_{\rm AH}^ {2}}\right)H, \tag{36}\]
we obtain
\[-\frac{\ddot{a}}{a}+\frac{1}{R_{\rm AH}^{2}}=\frac{4\pi G}{\cos\bigl{(}\frac{ \gamma}{G}R_{\rm AH}^{2}\bigr{)}}\frac{\sin(\gamma)}{\gamma}(\rho+p). \tag{37}\]
This equation, combined with the Friedmann equation (35), gives the q-deformed Raychaudhuri equation
\[\cos\biggl{(}\frac{\pi\Lambda_{q}}{6}R_{\rm AH}^{2}\biggr{)}\frac {\ddot{a}}{a}=-\frac{4\pi G}{3}(\rho+3p)\\ +\frac{\pi\Lambda_{q}}{6}\left\{\mathrm{Si}\left(\frac{\pi}{2} \right)-\mathrm{Si}\left(\frac{\pi\Lambda_{q}}{6}R_{\rm AH}^{2}\right)\right\}. \tag{38}\]
Note that in the classical limit of quantum geometry, \(\Lambda_{q}\to 0\) (or \(\mathcal{N}\rightarrow\infty\)), Eqs. (35) and (38) reduce to the standard Friedmann and Raychaudhuri equations
\[H^{2}+\frac{k}{a^{2}}=\frac{8\pi G}{3}\rho,\] \[\frac{\ddot{a}}{a}=-\frac{4\pi G}{3}(\rho+3p), \tag{39}\] \[\rho=\rho_{\rm c}+\rho_{\rm rad},\ \ \ p=p_{\rm rad},\]
where \(\rho_{\rm c}\) is the energy density of cold matter, and \(\rho_{\rm rad}\) and \(p_{\rm rad}\) are the energy density and the pressure of the radiation.
## 4 q-deformed cosmology
In the remainder of this paper, for convenience, we focus on the spatially flat universe, namely \(k=0\). Then \(1/R_{\rm AH}=H\), and one can rewrite the q-deformed Friedmann and Raychaudhuri equations (35) and (38) as
\[H^{2}=\frac{8\pi G}{3}(\rho+\rho_{\rm DE}), \tag{40}\]
\[\dot{H}=-4\pi G(\rho+p+\rho_{\rm DE}+p_{\rm DE}), \tag{41}\]
where we have introduced an effective q-dark energy sector, with energy density, \(\rho_{\rm DE}\), and pressure, \(p_{\rm DE}\), respectively, with the following form
\[\rho_{\rm DE}=\frac{3}{4\pi G}\sin^{2}\left(\frac{\pi\Lambda_{q} }{12H^{2}}\right)H^{2}+\\ \frac{\Lambda_{q}}{16G}\left\{\mathrm{Si}\left(\frac{\pi}{2} \right)-\mathrm{Si}\left(\frac{\pi\Lambda_{q}}{6H^{2}}\right)\right\}, \tag{42}\]
\[p_{\rm DE}=-\frac{1}{4\pi G}\sin^{2}\left(\frac{\pi\Lambda_{q}}{12H^{2}}\right)\left[2\dot{H}+3H^{2}\right]\\ -\frac{\Lambda_{q}}{16G}\left[\mathrm{Si}\left(\frac{\pi}{2}\right)-\mathrm{Si}\left(\frac{\pi\Lambda_{q}}{6H^{2}}\right)\right]. \tag{43}\]
These give us the EoS parameter for the q-dark energy sector
\[\omega_{\rm DE}=-1-\frac{8\sin^{2}\left(\frac{\pi\Lambda_{q}}{12H^{2}}\right)\dot{H}}{12\sin^{2}\left(\frac{\pi\Lambda_{q}}{12H^{2}}\right)H^{2}+\Lambda_{q}\pi\left[\mathrm{Si}\left(\frac{\pi}{2}\right)-\mathrm{Si}\left(\frac{\pi\Lambda_{q}}{6H^{2}}\right)\right]}. \tag{44}\]
The equations are more easily expressed in terms of dimensionless variables. We define the density parameters of the matter (radiation plus cold dark and baryonic matter), of the q-deformation parameter \(\Lambda_{q}\), and of the effective q-dark energy as
\[\Omega^{\rm(m)}=\frac{8\pi G\rho_{\rm m}}{3H^{2}},\ \ \ \Omega^{\rm(q)}=\frac{ \Lambda_{q}}{3H^{2}},\ \ \ \Omega^{\rm(DE)}=\frac{8\pi G\rho_{\rm DE}}{3H^{2}}, \tag{45}\]
then, the continuity equation (29) and q-deformed Raychaudhuri equations are expressed as
\[\frac{\cos\left(\frac{\pi\Omega^{\rm(q)}}{2}\right)}{3\Omega^{\rm(i)}}\frac{ d\Omega^{\rm(i)}}{d\bar{N}}=\sum_{j}(1+\omega_{j})\Omega^{\rm(j)}-(1+\omega_{i}),\ \ \ i=\mathrm{c},\mathrm{rad}, \tag{46}\]
and
\[\frac{\cos\left(\frac{\pi\Omega^{\rm(q)}}{2}\right)}{H}\frac{dH}{d\bar{N}}=- \frac{3}{2}\sum_{j}(1+\omega_{j})\Omega^{\rm(j)}, \tag{47}\]
where \(\bar{N}=\ln(a/a_{0})\) is the e-folding factor. The equivalent expression of the above equation can be obtained by using the definition of the q-cosmological constant's density parameter
\[\frac{\cos\left(\frac{\pi\Omega^{\rm(q)}}{2}\right)}{\Omega^{\rm(q)}}\frac{ d\Omega^{\rm(q)}}{d\bar{N}}=3\sum_{j}(1+\omega_{j})\Omega^{\rm(j)}. \tag{48}\]
In addition, the q-deformed Friedmann (40) takes the following form
\[\Omega^{\rm(DE)}=1-\Omega^{\rm(m)}=1-\cos\left(\frac{\pi\Omega^{ \rm(q)}}{2}\right)+\\ \frac{\pi\Omega^{\rm(q)}}{2}\left\{\mathrm{Si}\left(\frac{\pi}{2 }\right)-\mathrm{Si}\left(\frac{\pi\Omega^{\rm(q)}}{2}\right)\right\},\ \ \ i=\mathrm{c},\mathrm{rad}. \tag{49}\]
This allows us to eliminate \(\Omega^{\rm(q)}\) (and \(\Omega^{\rm(DE)}\)) in terms of \(\Omega^{\rm(m)}\).
Eqs. (46) and (47) allow us to calculate the density parameters of the radiation, the cold matter, and the q-deformation
\[\Omega^{\rm(rad)}=\Omega_{0}^{\rm(rad)}\left(\frac{H_{0}}{H} \right)^{2}(1+z)^{4},\] \[\Omega^{\rm(c)}=\Omega_{0}^{\rm(c)}\left(\frac{H_{0}}{H}\right)^{ 2}(1+z)^{3}, \tag{50}\] \[\Omega^{\rm(q)}=\Omega_{0}^{\rm(q)}\left(\frac{H_{0}}{H}\right)^ {2},\]
where \(z=a_{0}/a-1\) is the redshift, and also \(\Omega_{0}^{\rm(rad)}\), \(\Omega_{0}^{\rm(c)}\), and \(\Omega_{0}^{\rm(q)}\) are the density parameters of the radiation, the cold matter, and q-deformation parameter at the present epoch. Inserting these into the Hamiltonian constraint (49) gives us
\[\cos\left(\frac{\pi\Omega_{0}^{\rm(q)}}{2E^{2}}\right)E^{2}=\frac {\pi\Omega_{0}^{\rm(q)}}{2}\Big{\{}\,\mathrm{Si}\left(\frac{\pi}{2}\right)-\\ \mathrm{Si}\left(\frac{\pi\Omega_{0}^{\rm(q)}}{2E^{2}}\right) \Big{\}}+\Omega_{0}^{\rm(rad)}(1+z)^{4}+\Omega_{0}^{\rm(c)}(1+z)^{3}, \tag{51}\]
where \(E=\frac{H}{H_{0}}\).
Evaluating (51) at the present epoch gives us the relation between the density parameters, which we now write as
\[\cos\left(\frac{\pi\Omega_{0}^{\rm(q)}}{2}\right)-\frac{\pi\Omega_ {0}^{\rm(q)}}{2}\Big{\{}\,\mathrm{Si}\left(\frac{\pi}{2}\right)-\\ \mathrm{Si}\left(\frac{\pi\Omega_{0}^{\rm(q)}}{2}\right)\Big{\}}= \Omega_{0}^{\rm(m)}. \tag{52}\]
It is beneficial to delve deeper into this equation. Fig. 1 illustrates the profound influence of the q-deformation parameter on our universe. A significant value of \(\Omega_{0}^{\rm(q)}\) (which corresponds to a smaller \(L_{q}\), i.e., closer to the current Hubble distance) decreases the relevance of the baryonic and dark matter. Conversely, a vanishing q-deformation parameter elevates cold matter to the position of the primary driving force behind the Universe's evolution. Consequently, even a very small q-deformation can significantly impact cosmology at late times.
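For concreteness, the constraint (52) can also be checked numerically. The following minimal Python sketch is an illustrative aside, not part of the original derivation; the helper name `omega_m_from_omega_q`, the use of SciPy, and the input value \(\Omega_{0}^{(\rm m)}=0.3\) are our own assumptions.

```python
import numpy as np
from scipy.special import sici      # sici(x) returns (Si(x), Ci(x))
from scipy.optimize import brentq

def Si(x):
    return sici(x)[0]

def omega_m_from_omega_q(omega_q):
    # Left-hand side of Eq. (52) evaluated at a trial Omega_0^(q)
    x = 0.5 * np.pi * omega_q
    return np.cos(x) - x * (Si(0.5 * np.pi) - Si(x))

omega_m0 = 0.3                      # assumed present-day matter density parameter
omega_q0 = brentq(lambda oq: omega_m_from_omega_q(oq) - omega_m0, 1e-8, 1.0)
print(f"Omega_0^(q) = {omega_q0:.3f}")   # close to the fitted value 0.43 quoted in Eq. (59)
```

Since the left-hand side of (52) decreases monotonically from 1 to 0 as \(\Omega_{0}^{(q)}\) runs from 0 to 1, the bracketing root finder locates the unique solution.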
Figure 1: Plot of the density parameter of the matter (radiation, plus cold matter) versus the density parameter of the q-cosmological constant. We plot this figure using Eq. (51).
It is clear that the above expression is not analytically solvable. Hence, we use the global series \(\cos(x)+x\,\text{Si}(x)=\sum_{n=0}^{\infty}\frac{(-1)^{n+1}x^{2n}}{(2n)!(2n-1)}\) to simplify Eq. (51) and obtain \(\Omega_{0}^{(q)}\) in terms of the density parameter of the matter at the present epoch. Using this series expansion, Eq. (52) up to \(\mathcal{O}((\Omega^{(q)})^{4})\) simplifies to
\[1+\frac{1}{2}\left(\frac{\pi\Omega_{0}^{(q)}}{2}\right)^{2}-\text{Si}(\frac{ \pi}{2})\left(\frac{\pi\Omega_{0}^{(q)}}{2}\right)-\Omega_{0}^{(\text{m})}=0. \tag{53}\]
It is essential to note that we have not used any approximations up to this point. However, the q-deformed Friedmann equation (40) and Raychaudhuri equation (41) do not have analytical solutions, which makes it complicated to calculate various quantities, like the age of the universe or the distance modulus. Although numerical methods can be used to perform these calculations without approximation, our primary goal in this paper is to enable readers to follow the calculations easily. In the early universe and during the radiation-dominated epoch, Eqs. (40) and (41) are equivalent to the Friedmann and Raychaudhuri equations of the \(\Lambda\)CDM model to an excellent approximation; please see Fig. 2. The differences between the standard model and our model only become noticeable at small redshifts.
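As an illustration of such a numerical treatment, the sketch below solves the exact constraint (51) for \(E=H/H_{0}\) at a given redshift by bracketed root finding, with no series approximation; the parameter values are the best-fit ones quoted later in Eq. (59), and the function name `E_of_z` and the bracketing choices are ours.

```python
import numpy as np
from scipy.special import sici
from scipy.optimize import brentq

def Si(x):
    return sici(x)[0]

def E_of_z(z, omega_q0=0.43, omega_c0=0.3, omega_rad0=0.0):
    """Solve Eq. (51) for E = H/H0 at redshift z (flat universe)."""
    matter = omega_rad0 * (1.0 + z)**4 + omega_c0 * (1.0 + z)**3

    def constraint(E):
        x = 0.5 * np.pi * omega_q0 / E**2
        return np.cos(x) * E**2 - 0.5 * np.pi * omega_q0 * (Si(0.5 * np.pi) - Si(x)) - matter

    # E^2 = Omega_0^(q) is the de Sitter bound (x = pi/2), below which no physical
    # solution exists; E^2 < matter + 2 bounds the root from above for these parameters.
    return brentq(constraint, np.sqrt(omega_q0), np.sqrt(matter + 2.0))

print(E_of_z(0.0))   # approximately 1, by construction of Eq. (52)
print(E_of_z(1.0))
```

The analytic approximations developed next avoid this root finding altogether.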
Applying the above procedure to Eq. (51) results in3
Footnote 3: It should be noted that the approximation (54) is inappropriate for negative redshifts, and therefore the higher order of approximation must be used.
\[\Omega^{(q)}=2\Omega_{0}^{(q)}\left\{\beta\Omega_{0}^{(q)}+f(z)+ \right.\\ \left.\sqrt{\left(\beta\Omega_{0}^{(q)}+f(z)\right)^{2}-4\left( \beta\Omega_{0}^{(q)}+\Omega_{0}^{(\text{m})}-1\right)}\right\}^{-1}, \tag{54}\]
where \(\beta=\frac{\pi}{2}\,\text{Si}(\frac{\pi}{2})\), and
\[f(z)=\sum_{j}\Omega_{0}^{(\rm j)}(1+z)^{3(\omega_{j}+1)},\ \ j=\text{c},\text{rad}. \tag{55}\]
Also, in the first approximation, one can simplify Eq. (51) to find an explicit expression of the Hubble parameter
\[\left(\frac{H}{H_{0}}\right)^{2}=\left\{f(z)+\right.\\ \left.\frac{\pi}{2}\,\text{Si}(\frac{\pi}{2})\Omega_{0}^{(q)} \right\}\left\{1-\frac{\pi^{2}\left(\Omega_{0}^{(q)}\right)^{2}}{8\left(f(z) +\frac{\pi}{2}\,\text{Si}(\frac{\pi}{2})\Omega_{0}^{(q)}\right)^{2}}\right\}, \tag{56}\]
where the higher-order terms are omitted. This equation is tantamount to the Friedmann equation of the standard flat \(\Lambda\)CDM cosmology, supplemented by the correction contained in the last factor. In addition, the induced effective density parameter of the CC at the present epoch is \(\frac{\pi}{2}\,\text{Si}(\frac{\pi}{2})\Omega_{0}^{(q)}\).
To analyze cosmological models, datasets obtained from SNIa observations are particularly valuable as they serve as primary evidence for the Universe's accelerated expansion. To achieve optimal outcomes with SNIa data, we commence with the observed distance modulus produced by SNIa detections and compare it against the theoretical value. For this study, we utilize the Pantheon sample, an up-to-date SNIa dataset that encompasses 1048 distance moduli \(\mu\) at various redshifts within the \(0.01<z<2.26\) range [102]. By utilizing the luminosity distance, one can determine the distance modulus, which is presented as
\[\mu(z)=25+5\log_{10}\left((1+z)\int_{0}^{z}\frac{dz^{\prime}}{H(z^{\prime})} \right). \tag{57}\]
Cosmological observations show that the energy density parameter of the radiation is negligible at the present epoch, \(\Omega_{0}^{(\text{rad})}\simeq 8.7\times 10^{-5}\). Hence, for simplicity, we assume that the matter content of the model is only cold matter. Thus, one can safely ignore the radiation density parameter in the field equations, i.e., \(\Omega^{(\text{m})}=\Omega^{(\text{c})}\). As a result of this assumption, \(f(z)\) simplifies into
\[f(z)=\Omega_{0}^{(\text{c})}(1+z)^{3}. \tag{58}\]
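To make the comparison with the Pantheon data concrete, the following sketch evaluates the theoretical distance modulus by combining the approximate Hubble rate of Eq. (56) with the quadrature in Eq. (57); it assumes the fitted values quoted in Eq. (59) and \(H_{0}=67.27\) km s\(^{-1}\) Mpc\(^{-1}\), and restores the explicit factor of \(c\) so that the luminosity distance comes out in Mpc. The helper names are ours.

```python
import numpy as np
from scipy.special import sici
from scipy.integrate import quad

H0 = 67.27                 # km s^-1 Mpc^-1 (Planck 2018)
C_KM_S = 299792.458        # speed of light in km s^-1
OMEGA_Q0, OMEGA_C0 = 0.43, 0.3
BETA = 0.5 * np.pi * sici(0.5 * np.pi)[0]      # (pi/2) Si(pi/2)

def E_approx(z):
    """H/H0 from the first-order expression, Eq. (56), with f(z) = Omega_0^(c) (1+z)^3."""
    g = OMEGA_C0 * (1.0 + z)**3 + BETA * OMEGA_Q0
    return np.sqrt(g * (1.0 - (np.pi * OMEGA_Q0)**2 / (8.0 * g**2)))

def distance_modulus(z):
    """Distance modulus of Eq. (57), with the comoving integral converted to Mpc via c/H0."""
    integral, _ = quad(lambda zp: 1.0 / E_approx(zp), 0.0, z)
    d_lum = (1.0 + z) * (C_KM_S / H0) * integral
    return 25.0 + 5.0 * np.log10(d_lum)

for z in (0.01, 0.1, 1.0, 2.26):
    print(f"z = {z:5.2f}   mu = {distance_modulus(z):6.2f}")
```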
Figure 3 displays the Pantheon Survey as the standard Hubble diagram of SNIa (absolute magnitude \(M_{0}=-19.5\)). By utilizing this dataset, we are able to determine the optimal values for the density parameter of cold matter \(\Omega_{0}^{(\text{c})}\) and \(\Omega_{0}^{(q)}\)
\[\Omega_{0}^{(q)}=0.43,\ \ \ \Omega_{0}^{(\text{c})}=0.3, \tag{59}\]
where the value of \(H_{0}\) is set to 67.27 (Km/s)/Mpc based on the Planck 2018 results [104]. Furthermore, in this figure, our model has been juxtaposed against the standard flat cosmology. It is evident from the figure that the two models display a remarkable degree of conformity with each other.
Figure 2: Plot of \(H/H_{0}\) (radiation\(+\)cold matter) versus the scale factor in flat \(\Lambda\)CDM and q-deformed cosmology. The black (dashed) line shows the evolution of the Hubble parameter in q-deformed (\(\Lambda\)CDM) cosmology. We plot these figures using Eq. (51) for q-deformed cosmology and the standard expression for \(H/H_{0}\) in the standard model of cosmology, respectively. We used \(\Omega_{0}^{(q)}=0.43\), \(\Omega_{0}^{(\rm c)}=0.3\) in q-cosmology and \(\Omega_{0}^{(\Lambda)}=0.4\) in the \(\Lambda\)CDM model.
In addition, using the approximation \(\Omega^{\rm(m)}=\Omega^{\rm(c)}\), the energy density parameter of q-dark energy and its EoS parameter, defined by Eqs. (45) and (44), simplify to
\[\begin{split}\Omega^{\rm(DE)}&=1-\frac{\Omega^{(q )}}{\Omega^{(q)}_{0}}f(z),\\ \omega_{\rm DE}&=-1+\frac{\pi\left(\Omega^{(q)} \right)^{2}f(z)}{4\,\mathrm{Si}(\frac{\pi}{2})\Omega^{(q)}_{0}}\left\{1+\frac{ \pi\Omega^{(q)}}{4\,\mathrm{Si}(\frac{\pi}{2})}\right\}.\end{split} \tag{60}\]
By substituting the values of \(\Omega^{\rm(c)}_{0}\) and \(\Omega^{(q)}_{0}\) into the above equations, we find the energy density parameter and EoS parameter of q-dark energy at the present epoch
\[\Omega^{\rm(DE)}_{0}=0.7,\;\;\;\omega_{\rm DE0}=-0.91. \tag{61}\]
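These present-epoch numbers follow directly from Eq. (60) evaluated at \(z=0\), where \(\Omega^{(q)}=\Omega_{0}^{(q)}\) and \(f(0)=\Omega_{0}^{(\rm c)}\); a brief numerical check, assuming the fitted values of Eq. (59):

```python
import numpy as np
from scipy.special import sici

omega_q0, omega_c0 = 0.43, 0.3
si_half_pi = sici(0.5 * np.pi)[0]          # Si(pi/2)

# Eq. (60) at z = 0: Omega^(q) -> Omega_0^(q), f(0) -> Omega_0^(c)
omega_de0 = 1.0 - omega_c0
w_de0 = -1.0 + (np.pi * omega_q0**2 * omega_c0) / (4.0 * si_half_pi * omega_q0) \
        * (1.0 + np.pi * omega_q0 / (4.0 * si_half_pi))

print(omega_de0, round(w_de0, 2))          # 0.7 and roughly -0.91, cf. Eq. (61)
```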
Furthermore, we were able to achieve a satisfactory level of agreement for the Hubble parameter by utilizing the fitting parameters (59). By referring to the data presented in Table 1 of [105], we plotted the Hubble parameter \(H\) versus the redshift \(z\) in Fig. 4. Our model demonstrated a high level of consistency with the standard model of cosmology, as evidenced by the figure. It is worth noting that all entropic-force models tend to fit the modulus distance data and Hubble parameter points in a similar fashion (cf. [106; 107; 108; 109; 110]). While more comprehensive data analysis (e.g., using a covariance matrix) is necessary to authenticate these models further, such work falls outside the scope of this article. Our present aim was to simply use the data to compare and contrast the effectiveness of various entropic force models.
In summary, using quantum-deformed entropy, we could derive analytical results for the observable parameters of the effective dark energy sector, namely the density parameter and the EoS parameter of the dark energy in the proposed q-deformed cosmological scenario. We now look at the cosmological ramifications in more detail. To investigate the features of the model, let us use the present-time best-fit value of the density parameter of cold matter.
By using the values obtained in (59), one can determine the q-deformation parameter, \(\mathcal{N}\), and the q-deformed entropy of the apparent horizon defined by Eq. (31). Using the definitions of \(\mathcal{N}\) and \(\Omega^{(q)}_{0}\) in Eqs. (13) and (45), we obtain
\[\mathcal{N}=\frac{\Omega^{(q)}_{0}}{H_{0}^{2}L_{\rm P}^{2}}=1.70\times 10^{124}. \tag{62}\]
It is noteworthy to acknowledge the estimates made in Refs. [36; 111], which, through general arguments in the context of quantum cosmology, approximated the orders \(\mathcal{N}\sim 10^{120}-10^{123}\), respectively. Furthermore, Eq. (62) shows that the argument of the trigonometric functions in the deformation parameter, \(q=\exp(2\pi i/\mathcal{N})=\cos(2\pi/\mathcal{N})+i\sin(2\pi/\mathcal{N})\), is exceedingly minute. However, as can be observed from Eq. (60) (also shown in Fig. 5), its impact plays a crucial role in late-time cosmology.
Figure 3: Evolution of the distance modulus, \(\mu(z)\), versus redshift \(z\). The black line denotes the best fit for the redshift evolution of the distance modulus obtained from Eqs. (57) and (51). We used \(\Omega^{(q)}_{0}=0.43\), \(\Omega^{(\rm c)}_{0}=0.3\) and \(H_{0}=67.27\)\(\mathrm{Km\cdot sec^{-1}\cdot Mpc^{-1}}\), and the absolute B-band magnitude of a fiducial SNIa is \(M_{0}=-19.5\)[103]. The red line (dots) denotes the redshift evolution of the distance modulus in the standard flat \(\Lambda\)CDM model with \(\Omega^{(\Lambda)}_{0}=0.7\). The distance modulus measurements we use are taken from Ref. [102].
Figure 4: The evolution of \(H(z)\) (in units \(\mathrm{Km\cdot sec^{-1}\cdot Mpc^{-1}}\)) versus redshift \(z\) with error bars. The blue line denotes the dynamics of the Hubble parameter obtained from Eq. (51). The values of \(\Omega^{(q)}_{0}\), \(\Omega^{(\rm c)}_{0}\) and \(H_{0}\) are the same as in Eq. (59), and we assumed (according to the standard flat \(\Lambda\)CDM model) \(\Omega^{(\Lambda)}_{0}=0.7\). The dashed line denotes the evolution of the Hubble parameter in the standard flat \(\Lambda\)CDM model with the same values of \(H_{0}\), \(\Omega^{(\rm c)}_{0}\) and \(\Omega^{(\Lambda)}_{0}\). The Hubble parameter measurements we use are taken from Table 1 of Ref. [105].
Additionally, utilizing the aforementioned value of \(\mathcal{N}\) in Eq. (31) allows us to obtain the q-deformed entropy of the apparent horizon
\[S_{q}(t_{0})=2.298\times 10^{124}. \tag{63}\]
Thus, the minuteness of the deformation (equivalently, the largeness of \(\mathcal{N}\)) implies an exceedingly high value of the entropy of the apparent horizon. Please note that in order for the universe to be of significant size, the argument of the trigonometric functions in \(q\) has to be very small. Furthermore, \(\mathcal{N}\) is representative of the entropy of the apparent horizon, as illustrated by Eq. (63). According to the holographic bound [112], the entropy budget of the observable universe must not surpass that of the apparent horizon. The total entropy of the observable universe is roughly \(S_{obs}\simeq 10^{102}-10^{103}\)[113], with supermassive black holes occupying a dominating role at the cores of galaxies. This also offers a hint regarding the minimum value of \(\mathcal{N}\), which is greater than \(10^{123}\), as confirmed by our result in Eq. (63).
Fig. 5 depicts the evolution with redshift of the effective dark energy density parameter, as found in Eq. (60), and of the cold matter density parameter. As this figure indicates, at redshift \(z=0.377\) the two density parameters are equal, and the universe is subsequently dominated by dark energy. Finally, at redshift \(z=-1\), the contribution of the cold matter is insignificant, resulting in a de Sitter universe. This demonstrates that the model is a plausible cosmological model, since it accommodates an early matter-dominated phase in which structure could form and a recent acceleration phase corresponding to observations.
In addition, in Fig. 6, we show the corresponding behavior of the effective dark energy EoS parameter as a result of (60). This figure shows that the EoS parameter begins at \(\omega_{\rm DE}=-1\) at high redshifts, achieves a maximum of \(\omega_{\rm DE}=-0.7243\) at redshift \(z=0.2024\), and returns to \(\omega_{\rm DE}=-1\) at subsequent redshifts.
Finally, we show the deceleration parameter in Fig. 7 defined by
\[\mathfrak{q}=-1-\frac{\dot{H}}{H^{2}}=-1+\frac{3f(z)\Omega^{(q)}}{2\Omega_{0}^ {(q)}}\left\{1+\frac{\pi\Omega^{(q)}}{4}\right\}. \tag{64}\]
For the reader's convenience, we have extended evolution into the far future, i.e., \(z\to-1\). This graph shows the transition from deceleration to acceleration at \(z_{\rm tran}=0.5\), consistent with the cosmological observations [114; 115]. Also, Eq. (64) gives \(\mathfrak{q}_{0}=-0.3973\) for the deceleration parameter today, which is consistent with model-independent observations [114].
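The quoted numbers can be reproduced from Eqs. (54) and (64): the short sketch below (assuming the fitted values of Eq. (59) and cold matter only, Eq. (58); the helper names are ours) evaluates \(\mathfrak{q}_{0}\) and locates the transition redshift by root finding.

```python
import numpy as np
from scipy.special import sici
from scipy.optimize import brentq

OMEGA_Q0, OMEGA_C0 = 0.43, 0.3
BETA = 0.5 * np.pi * sici(0.5 * np.pi)[0]       # beta = (pi/2) Si(pi/2)

def f(z):
    return OMEGA_C0 * (1.0 + z)**3              # Eq. (58)

def omega_q(z):
    """Approximate Omega^(q)(z) from Eq. (54) (valid for z >= 0, cf. footnote 3)."""
    s = BETA * OMEGA_Q0 + f(z)
    root = np.sqrt(s**2 - 4.0 * (BETA * OMEGA_Q0 + OMEGA_C0 - 1.0))
    return 2.0 * OMEGA_Q0 / (s + root)

def q_decel(z):
    """Deceleration parameter of Eq. (64)."""
    oq = omega_q(z)
    return -1.0 + 1.5 * f(z) * oq / OMEGA_Q0 * (1.0 + 0.25 * np.pi * oq)

print(q_decel(0.0))                  # about -0.40, cf. q_0 = -0.3973
print(brentq(q_decel, 0.0, 2.0))     # transition redshift near z_tran = 0.5
```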
We close this section by calculating the Universe's age according to the scenario. Figure 8 illustrates the Universe's age as a function of the density parameter of the cold matter. There are two different estimates for the age of the Universe. Based on early observations of the Universe, the first estimate suggests that the Universe is 13.787 billion years old according to the \(\Lambda\)CDM model as of 2021 [104]. The second estimate is based on observations of the current universe and suggests that the Universe is younger [116; 117]. Recent studies have lowered the uncertainty of the first type of measurement to 20 million years. This was achieved through various studies that produced similar results, including analyses of the microwave background radiation by the Planck satellite, the WMAP Probe, and other space missions [118].
Inserting the Hubble parameter obtained in (56) into the expression for the Universe's age,
\[t_{0}=\int_{0}^{\infty}\frac{dz}{(1+z)H(z)}, \tag{65}\]
we find,
\[t_{0}=\frac{0.9479}{H_{0}}=13.779\;\text{Gyrs}. \tag{66}\]
The above value coincides with the value corresponding to the standard \(\Lambda\)CDM scenario, namely \(13.787\pm 0.020\;\text{Gyrs}\) [104].
Figure 5: The evolution of the effective dark energy density parameter \(\Omega^{(\rm DE)}\) (solid line) and the matter density parameter \(\Omega^{(\rm m)}\) (dashed), respectively, as a function of the redshift \(z\).
Figure 6: The evolution of the effective (q-deformed) dark energy EoS parameter, \(\omega_{\rm DE}\).
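As a cross-check of the age quoted in Eq. (66), one can also evaluate the integral (65) numerically, letting the exact constraint (51) supply \(H(z)\). The sketch below is illustrative: it assumes the fitted values of Eq. (59), cold matter only, and \(H_{0}=67.27\) km s\(^{-1}\) Mpc\(^{-1}\), and should land close to the quoted 13.78 Gyrs.

```python
import numpy as np
from scipy.special import sici
from scipy.optimize import brentq
from scipy.integrate import quad

OMEGA_Q0, OMEGA_C0 = 0.43, 0.3
H0 = 67.27                                     # km s^-1 Mpc^-1
KM_PER_MPC = 3.0857e19
GYR_IN_S = 3.156e16
HUBBLE_TIME_GYR = KM_PER_MPC / H0 / GYR_IN_S   # 1/H0 expressed in Gyr

def Si(x):
    return sici(x)[0]

def E_of_z(z):
    """E = H/H0 from the exact constraint, Eq. (51), cold matter only."""
    matter = OMEGA_C0 * (1.0 + z)**3
    def constraint(E):
        x = 0.5 * np.pi * OMEGA_Q0 / E**2
        return np.cos(x) * E**2 - 0.5 * np.pi * OMEGA_Q0 * (Si(0.5 * np.pi) - Si(x)) - matter
    return brentq(constraint, np.sqrt(OMEGA_Q0), np.sqrt(matter + 2.0))

# Eq. (65): t0 = (1/H0) * int_0^inf dz / ((1+z) E(z))
dimensionless_age, _ = quad(lambda z: 1.0 / ((1.0 + z) * E_of_z(z)), 0.0, np.inf)
print(dimensionless_age * HUBBLE_TIME_GYR, "Gyr")
```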
## 5 Comparison with other models
The conventional Friedmann and Raychaudhuri equations involving ordinary matter fields are inadequate for characterizing the dark energy epoch of the present universe. In this context, the following remarks are in order. Concretely, Jacobson [4] successfully obtained the Einstein field equations from the entropy and horizon area proportionality by presuming the heat flow across the horizon, and Padmanabhan [119] derived the Friedmann and Raychaudhuri equations using the holographic equipartition law, which indicates that the difference between the degrees of freedom at the surface and in the bulk of a region of space causes cosmic space expansion. Verlinde [120] has also proposed a new concept that defines gravity as an entropic force arising in a system due to the statistical tendency to increase its entropy4. Consequently, alternative forms of entropy, especially non-extensive entropies distinct from the Bekenstein-Hawking variety, have been advocated to induce accelerated expansion, as now found in the literature for thermodynamic interpretations of modern cosmology. In this regard, various non-extensive entropies have been proposed, such as Tsallis [23], Renyi [25], Sharma-Mittal [121], Kaniadakis [122], Loop Quantum Gravity [32; 33], Tsallis-Citro [24], Tsallis-Zamora [109], Barrow [26], and fractional [27; 28] entropies.
Footnote 4: This theory is a significant breakthrough in the field of physics and a promising avenue for future research. All the innovative ideas above offer fresh insights into the quantum gravity puzzle and could even shed light on the thermodynamic origins of space-time. These breakthroughs in the field may significantly enhance our understanding of the universe.
Tsallis entropy is a generalized form of Gibbs/Shannon entropy applicable in cases where entropy's additive and extensive properties do not hold. Renyi entropy, on the other hand, is a measure of the entanglement of quantum systems and is commonly used in the context of BHs and cosmological horizons. Tsallis-Citro entropy, which has been motivated by the need to make BH entropy extensive, is similar in nature. Regarding mathematical equivalence, Barrow and fractional entropies are comparable to Tsallis-Citro entropy. However, they are primarily driven by the fractal structure of the horizon caused by quantum fluctuations. The Barrow entropy, proposed as a toy model for the potential effects of quantum gravitational spacetime foam, was not supported by any concrete evidence. Barrow argued that quantum-gravitational effects could introduce intricate, fractal features on the BH structure, similar to the illustrations of the Covid-19 virus. In contrast, the fractal entropy of black holes in fractional quantum gravity has been proven to be real, and it has been shown that the structure of the BH surface horizon is a _random fractal surface_ distinct from Barrow's inspiration. Furthermore, the Sharma-Mittal entropy can extend the Tsallis and Renyi entropies, whereas the Kaniadakis entropy is derived from the principles of special relativity.
Figure 7: Evolution of the deceleration parameter \(\mathfrak{q}(z)\) versus redshift. The black line exhibits the deceleration parameter of the q-deformed cosmology, with \(\Omega_{0}^{(\text{q})}=0.43\), \(\Omega_{0}^{(\text{c})}=0.3\) and \(z_{\text{tran}}=0.5\). The dashed line denotes the deceleration parameter of the standard flat \(\Lambda\)CDM model with \(\Omega_{0}^{(\Lambda)}=0.7\) (with \(z_{\text{tran}}=0.67\)).
Figure 8: The age of the Universe as a function of \(\Omega_{0}^{(\text{c})}\). Regarding the Planck 2018 collaboration observations, we assumed \(H_{0}=67.27\;\mathrm{Km\cdot sec^{-1}\cdot Mpc^{-1}}\).
Referring to the findings presented in Ref. [123], it can be observed that a comprehensive overview of the majority of these entropies can be effectively encapsulated in the subsequent form
\[S_{g}(\alpha_{+},\alpha_{-},\beta,\gamma)=\\ \frac{1}{\gamma}\left[\left(1+\frac{\alpha_{+}}{\beta}\;S_{\rm BH} \right)^{\beta}-\left(1+\frac{\alpha_{-}}{\beta}\;S_{\rm BH}\right)^{-\beta} \right], \tag{67}\]
where \(\alpha_{+}\), \(\alpha_{-}\), \(\beta\), and \(\gamma\) are real and positive parameters, and \(S_{\rm BH}\) is the Bekenstein-Hawking entropy. As the authors of [123] showed, by a suitable choice of these parameters, the generalized entropy (67) reduces to the
\[S_{g}=S_{\rm BH}^{\beta},\qquad\qquad\qquad\qquad(\text{Tsallis-Barrow- Fractional}), \tag{68}\] \[(\alpha_{+}\rightarrow\infty,\alpha_{-}=0,\gamma=(\alpha_{+}/ \beta)^{\beta}),\] \[S_{g}=\frac{1}{\alpha}\ln\left(1+\alpha\;S_{\rm BH}\right), \qquad\qquad\qquad(\text{Renyi})\] \[(\alpha_{-}=0,\;\beta\to 0,\;\frac{\alpha_{+}}{\beta} \rightarrow\text{finite}),\] \[S_{g}=\frac{1}{\gamma}\left[\left(1+\frac{\alpha_{+}}{\beta}\;S \right)^{\beta}-1\right],\;\;\;(\text{Sharma--Mittal}),\] \[(\alpha_{-}=0),\] \[S_{g}=\frac{1}{K}\sinh\left(KS\right),\qquad\qquad\qquad\qquad \qquad(\text{Kaniadakis}),\] \[(\beta\rightarrow\infty,\alpha_{+}=\alpha_{-}=\frac{\gamma}{2}=K),\] \[S_{g}=\frac{1}{(1-q)}\left[\mathrm{e}^{(1-q)S}-1\right],\;\;\; \;(\text{ Loop Quantum Gravity})\] \[(\alpha_{-}=0,\beta\rightarrow\infty,\gamma=\alpha_{+}=(1-q)),\]
entropies. The entropy function described in Eq. (67) and the q-deformed entropy (31) share several similar properties. Firstly, both of these functions satisfy the generalized third law of thermodynamics. Secondly, they both exhibit a monotonically increasing behavior concerning the Bekenstein-Hawking entropy. Finally, these functions converge to the Bekenstein-Hawking entropy under certain parameter limits. As previously stated in the introduction, one of the main challenges such models face is the presence of free parameters with varying origins. Upon reviewing the generalized entropy introduced in (67), it is apparent that the existing parameters have distinct origins. Even if the model were to provide a highly accurate representation of observational data, comprehending the physics behind these parameters would prove to be more challenging than the initial issue of the cosmological constant problem in the standard model of cosmology.
If we use the generalized entropy (67) in the procedure of section 3, the resulting Friedmann equation will be [123]
\[\frac{G\beta H^{4}}{\pi\gamma}\Bigg{\{}\frac{1}{(2+\beta)}\left( \frac{GH^{2}\beta}{\pi\alpha_{-}}\right)^{\beta}\;\;_{2}F_{1}\Big{(}1+\beta,2+ \beta,3+\beta,\\ -\frac{GH^{2}\beta}{\pi\alpha_{-}}\Big{)}+\frac{1}{(2-\beta)} \left(\frac{GH^{2}\beta}{\pi\alpha_{+}}\right)^{-\beta}\;\;_{2}F_{1}\Big{(}1- \beta,2-\beta,\\ 3-\beta,-\frac{GH^{2}\beta}{\pi\alpha_{+}}\Big{)}\Bigg{\}}= \frac{8\pi G\rho}{3}+\frac{\Lambda}{3}, \tag{69}\]
where \(\;{}_{2}F_{1}(a,b,c,d)\) denotes the hypergeometric function. Note that the "cosmological constant", \(\Lambda\), appears in the above generalized Friedmann equation as an integration constant. This is where our model truly distinguishes itself, as it separates from the generalized entropy and its various subset entropies as defined in equation (67). Let us go through this in further depth. The CC seen in the previous equation is a constant of integration that can be chosen arbitrarily. In contrast, the constant of integration in the Friedmann equation (33) is fixed to a specific value. Understanding this fixed value is simple. As the universe approaches very late times and the scale factor tends towards infinity, the energy density of matter fields on the right-hand side of the equation vanishes. Additionally, the first term on the left-hand side of (33), which is \(\frac{\cos\left(\frac{\gamma}{G}R_{\rm AH}^{2}\right)}{R_{\rm AH}^{2}}\), vanishes as well. This boundary condition fixes the value of the constant of integration to \(\frac{\pi\Lambda_{q}}{6}\,{\rm Si}\left(\frac{\pi}{2}\right)\).
The presence of the CC in the generalized Friedmann equation leads to the CC problem. The challenge is explaining why the measured CC is not precisely zero but has a very small nonzero value. In contrast, the q-deformed Friedmann equation (33) does not suffer from this problem.
We can use Barrow's entropy model as an illustrative example to provide a more precise reference point. In the context of cosmology, various perspectives have been explored regarding the impact of Barrow entropy on the universe's evolution. One such view involves modifying the area law, which results in a novel holographic dark energy model based on Barrow entropy [58; 124]. Another cosmological scenario that incorporates Barrow entropy was presented in Ref. [125], where it was demonstrated that new additional terms emerge in the Friedmann and Raychaudhuri equations, forming an effective dark energy sector. While Ref. [125] argued that the modified cosmological equations rooted in Barrow entropy (68) could account for the thermal history of the universe, spanning from early deceleration to later acceleration during the dark-energy epoch that follows the cold matter dominated epoch, regardless of the presence of CC, \(\Lambda\), it seems that this conclusion is only accurate if there is a CC present [125; 126]. To illustrate this important point, let us rewrite the Friedmann and Raychaudhuri equations with Barrow's entropy [125; 126]
\[H^{2}=\frac{8\pi G}{3}(\rho+\rho_{\rm DE}), \tag{70}\] \[\dot{H}=-4\pi G(\rho+p+\rho_{\rm DE}+p_{\rm DE}),\]
where \(\rho\) and \(p\) are the energy density and pressure of the ordinary matter, and
\[\rho_{\rm DE} =\frac{3}{8\pi G}\Big{\{}\frac{\Lambda}{3}+H^{2}\Big{[}1-\frac{\Delta+2}{2-\Delta}\Big{(}\frac{\pi}{G}\Big{)}^{\frac{\Delta}{2}}H^{-\Delta}\Big{]}\Big{\}}, \tag{71}\] \[p_{\rm DE} =-\frac{1}{8\pi G}\Big{\{}\Lambda+2\dot{H}\Big{[}1-\Big{(}1+\frac{\Delta}{2}\Big{)}\left(\frac{\pi}{G}\right)^{\frac{\Delta}{2}}H^{-\Delta}\Big{]}\] \[+3H^{2}\Big{[}1-\frac{\Delta+2}{2-\Delta}\Big{(}\frac{\pi}{G}\Big{)}^{\frac{\Delta}{2}}H^{-\Delta}\Big{]}\Big{\}},\]
where \(\Delta=2(\beta-1)\), \((0\leq\Delta\leq 1)\) is the Barrow's deformation parameter. For \(\rho=p=0\) and \(\Lambda=0\), the solution of the above
field equations is \(H=0\). On the other hand, for \(\rho=p=0\) and \(\Lambda\neq 0\) we obtain
\[H=\left(\frac{\Lambda}{3}\right)^{\frac{1}{2-\Delta}}. \tag{72}\]
These findings validate that the accelerated phase can solely occur when the CC is present. However, we now have two unknown parameters in the theory, \(\Lambda\) and \(\Delta\). As previously discussed, the presence of a constant of integration, \(\Lambda\), reintroduces the cosmological constant problem and warrants an explanation. Furthermore, it is imperative to provide a sound justification for the particular numerical value of \(\Delta\) that is deemed valid based on cosmological observations.
On the other hand, the q-deformed Friedmann and Raychaudhuri equations (35) and (38) for \(\rho=p=0\) reduce to
\[\cos\!\left(\frac{\pi\Lambda_{q}}{6H^{2}}\right)\!H^{2}=\frac{\pi\Lambda_{q}}{ 6}\left\{\mathrm{Si}\left(\frac{\pi}{2}\right)-\mathrm{Si}\left(\frac{\pi \Lambda_{q}}{6H^{2}}\right)\right\}, \tag{73}\]
and
\[\cos\!\left(\frac{\pi\Lambda_{q}}{6H^{2}}\right)\!\frac{\ddot{a}}{a}=\frac{ \pi\Lambda_{q}}{6}\left\{\mathrm{Si}\left(\frac{\pi}{2}\right)-\mathrm{Si} \left(\frac{\pi\Lambda_{q}}{6H^{2}}\right)\right\}. \tag{74}\]
The solution of these equations is a de Sitter spacetime, \(H^{2}=\Lambda_{q}/3\), in which \(\Lambda_{q}\) plays the role of the CC. As previously described, our initial cosmological model did not include a conventional CC. However, our subsequent implementation of quantum deformation resulted in the emergence of an effective CC, which is strongly linked to the natural number \(\mathcal{N}\) defined by equation (13). This constant is directly derived from the assumption of a finite number of states in Hilbert spaces, as stated in [36].
## 6 Conclusions
In this paper, we developed a novel cosmological scenario assuming that thermodynamics is intertwined with gravity. It is known, for instance, that one may start with the first law of thermodynamics, apply it to the Universe horizon, and end up with the Friedmann and Raychaudhuri equations. This method utilizes the Bekenstein-Hawking entropy in the case of GR or the modified entropy expression in the case of modified gravity.
We examined the compatibility of the apparent horizon's entropy with thermodynamic laws on the assumption that it is the q-deformed entropy of the black hole-white hole pair. To achieve this, we first presupposed that the apparent horizon is subject to the unified first law of thermodynamics and that its entropy has the form (18). Then we demonstrated how the unified first law of thermodynamics on the apparent horizon might be expressed as modified Friedmann equations of an FLRW universe with arbitrary spatial curvature.
Our research shows that the q-deformed Friedmann and Raychaudhuri equations lead to the existence of an effective dark energy component, which can explain the Universe's late-time acceleration. According to our model, the density parameters of cold matter and the effective dark energy are equal at a redshift of \(z=0.377\), after which the effective dark energy dominates the universe. Additionally, our model predicts a de Sitter universe in the long run. The model predicts that the effective dark energy equation of state (EoS) parameter changes over time. Initially, it starts at \(\omega_{\mathrm{DE}}=-1\) during high redshifts. Then, it reaches a peak at \(\omega_{\mathrm{DE}}=-0.7243\) at redshift \(z=0.2024\). After that, it returns to its initial value of \(\omega_{\mathrm{DE}}=-1\) during subsequent redshifts. Furthermore, we find a transition from deceleration to acceleration at \(z_{\mathrm{tran}}=0.5\), consistent with the cosmological observations. The obtained age of the Universe is \(13.779\) Gyrs, which coincides with the value corresponding to the standard \(\Lambda\)CDM scenario, namely \(13.787\pm 0.020\) Gyrs. These findings suggest that our model is a viable cosmological model, as it accommodates an early phase of matter domination that allowed the formation of structures and a recent acceleration phase that aligns with observations. Concretely, we support this claim upon using suitable calculations and corresponding plots, in particular of \(H/H_{0}\) (radiation plus cold matter) versus the scale factor, either in flat \(\Lambda\)CDM or our q-deformed cosmology, and the evolution of luminosity distance, \(\mu\), versus redshift \(z\), the evolution of \(H(z)\) versus redshift \(z\) (including error bars) and the evolution of the deceleration parameter \(q(z)\) versus redshift (see figures in section 4). Additionally, we have conducted a thorough comparative analysis of our proposed model with others involving non-extensive entropies.
In conclusion, the q-deformed horizon entropy cosmology scenario demonstrates intriguing phenomenology and is consistent with cosmological observations. As a result, it may be an intriguing option for the description of Nature.
Last but not least, let us add that it is imperative to acknowledge that the current models possess both advantageous and disadvantageous features and are not without flaws. For example, in entropic cosmology, the form of the driving entropic force terms is determined by the definition of entropy. In the original entropic force models, the Bekenstein entropy and the Hawking temperature have been used to get the entropic force terms. However, such models fail to account for both the Universe's acceleration and deceleration, and it has been shown in [127; 128] that they neither account for cosmic fluctuations nor are compatible with the formation of structures. As we have thoroughly discussed in section 5, using Barrow entropy-based models, with the addition of the cosmological constant as an integration constant, can effectively elucidate the shift from a decelerated universe to an accelerated universe. Nevertheless, including the cosmological constant in the model leads to a resurgence of the cosmological constant problem and the incorporation of obscure parameters into the cosmological model, further complicating the matter. Therefore, exploring alternative models that can account for these phenomena is necessary and provides a more accurate description of the Universe's behavior. Ultimately, this will lead to a better understanding of our Universe and its complex dynamics.
## Acknowledgements
S.J. acknowledges financial support from the National Council for Scientific and Technological Development-CNPq, Grant
no. 308131/2022-3. P.V.M. acknowledges the FCT grants UID-B-MAT/00212/2020 at CMA-UBI as well as the COST Action CA18108 (Quantum gravity phenomenology in the multi-messenger approach).
|
2303.16823 | Combined proper orthogonal decompositions of orthogonal subspaces | We present a method for combining proper orthogonal decomposition (POD) bases
optimized with respect to different norms into a single complete basis. We
produce a basis combining decompositions optimized with respect to turbulent
kinetic energy (TKE) and dissipation rate. The method consists of projecting a
data set into the subspace spanned by the lowest several TKE optimized POD
modes, followed by decomposing the complementary component of the data set
using dissipation optimized POD velocity modes. The method can be fine-tuned by
varying the number of TKE optimized modes, and may be generalized to
accommodate any combination of decompositions. We show that the combined basis
reduces the degree of non-orthogonality compared to dissipation optimized
velocity modes. The convergence rate of the combined modal reconstruction of
the TKE production is shown to exceed that of the energy and dissipation based
decompositions. This is achieved by utilizing the different spatial focuses of
TKE and dissipation optimized decompositions. | Peder J. Olesen, Azur Hodžić, Clara M. Velte | 2023-03-29T16:16:15Z | http://arxiv.org/abs/2303.16823v1 | # Combined proper orthogonal decompositions
###### Abstract
We present a method for combining proper orthogonal decomposition (POD) bases optimized with respect to different norms into a single complete basis. We produce a basis combining decompositions optimized with respect to turbulent kinetic energy (TKE) and dissipation rate. The method consists of projecting a data set into the subspace spanned by the lowest several TKE optimized POD modes, followed by decomposing the complementary component of the data set using dissipation optimized POD velocity modes. The method can be fine-tuned by varying the number of TKE optimized modes, and may be generalized to accommodate any combination of decompositions. We show that the combined basis reduces the degree of non-orthogonality compared to dissipation optimized velocity modes. The convergence rate of the combined modal reconstruction of the TKE production is shown to exceed that of the energy and dissipation based decompositions. This is achieved by utilizing the different spatial focuses of TKE and dissipation optimized decompositions.
## 1 Introduction
Reduced order models (ROMs) of turbulent flows approximate the high-dimensional flow dynamics by projecting them into a lower dimensional subspace spanned by a truncated modal basis. Ideally, the full basis should (1) be complete, in the sense that the full dynamics are recovered when all modes are included, and (2) ensure that the truncated model preserves critical aspects of the dynamics such that the approximation remains meaningful.
Proper orthogonal decomposition (POD) provides a complete modal basis optimized for representing the underlying data set, minimizing the error as measured by a given norm. While this basis satisfies (1) by construction, it is not _a priori_ given that (2) holds (Holmes _et al._, 2012). In canonical implementations of POD and POD-based ROMs the norm is chosen to optimally represent the mean turbulent kinetic energy (TKE) (Lumley, 1967). Such ROMs prioritize TKE-rich large-scale structures, at the expense of small-scale dissipative structures encoded in truncated modes, potentially leading to inaccuracies and instabilities.
Among the general approaches for enhancing ROM stability and accuracy are _closure models_, in which the effects of unresolved modes are modelled by introducing artificial dissipation, and _modification of the optimization problem itself_ to better capture all relevant scales in the resulting POD modes (Bergmann _et al._, 2009). Aubry _et al._ (1988) applied the former approach through an effective viscosity model; it has since been suggested that viscosity specific to modes or to pairs of interacting modes might be used instead of global viscosity corrections (Rempfer & Fasel, 1994; Rempfer, 1996). More recently, Wang _et al._ (2012) investigated a number of different closure models.
The second approach involves modifying the POD procedure itself. Christensen _et al._ (1999) demonstrated a procedure with some similarities to what is shown in the present work, forming a partial basis using a decomposition of predefined states and a complementary basis using a decomposition of the data component orthogonal to the first basis. Iollo _et al._ (2000) formulated a Sobolev norm minimizing combined error in velocity and gradient fields. Kostas _et al._ (2005) used enstrophy-optimized vorticity modes to analyse the velocity and vorticity fields for a backward-facing step flow. Similarly, Lee & Dowell (2020) used enstrophy-optimized modes for the expansion of gradient terms to supplement the classical TKE-optimized POD basis. Olesen _et al._ (2023) presented a related dissipation optimized decomposition, and demonstrated a method for spanning the velocity field using such modes. The resulting velocity basis is complete (satisfying (1)), though it might suffer from similar issues as energy-optimized POD bases regarding (2), due to poor representation of energetic large-scale structures. Directly combining the two bases would generally compromise (1), producing either an incomplete or an overcomplete set of modes. In the present work we lay out a generalizable method for combining energy and dissipation optimized POD bases in such a way as to ensure exact completeness of the resulting basis, while also allowing for an adjustable balancing of the two optimizations.
The remainder of this paper is laid out as follows. The basic POD formalism is summarized in Section 2, and the combined POD formalism is laid out in Section 3. Basic POD results are given in Section 4. Convergence of reconstructed TKE, TKE production, and dissipation rate are investigated in Section 5. A conclusion is presented in Section 6.
## 2 Single-basis POD formalisms
This section summarizes the basic formalism for the TKE and dissipation rate optimized PODs (e-POD and d-POD, respectively), including the computation of d-POD velocity modes. The formalism largely follows that presented in Olesen _et al._ (2023).
### Energy-optimized POD
We consider an ensemble of flow realisations in the form of velocity fluctuation snapshots \(\mathcal{U}=\{\mathbf{u}_{m}\}_{m=1}^{M}\subset\mathcal{H}^{\mathrm{e}}\) defined on the domain \(\Omega^{\mathrm{e}}\), where \(\mathcal{H}^{\mathrm{e}}\) is the Hilbert space defined by
\[\mathcal{H}^{\mathrm{e}}:=\left\{\mathbf{\alpha}:\Omega^{\mathrm{e}}\to\mathbb{R} ^{3}\left|\mathbf{\alpha}\in C^{1}\,,\,\sum_{i=1}^{3}\int_{\Omega^{\mathrm{e}}} \left|\mathbf{\alpha}^{i}\right|^{2}\,\mathrm{d}x<\infty\right\}\right.\,. \tag{1}\]
While not necessary for performing e-POD in itself, the requirement that \(\mathbf{\alpha}\) be differentiable (\(\mathbf{\alpha}\in C^{1}\)) is included here as it will be needed later when computing strain rate tensors.
This Hilbert space is equipped with the inner product \((\cdot,\cdot)_{\mathcal{H}^{\mathrm{e}}}\) and the norm \(\left\|\cdot\right\|_{\mathcal{H}^{\mathrm{e}}}\):
\[(\mathbf{\alpha},\mathbf{\beta})_{\mathcal{H}^{\mathrm{e}}}=\sum_{i=1}^{3}\int_{ \Omega^{\mathrm{e}}}\alpha^{i}\beta^{i}\,\mathrm{d}x\,,\quad\|\mathbf{\alpha}\|_{ \mathcal{H}^{\mathrm{e}}}=\sqrt{(\mathbf{\alpha},\mathbf{\alpha})_{\mathcal{H}^{ \mathrm{e}}}}\,,\quad\mathbf{\alpha},\mathbf{\beta}\in\mathcal{H}^{\mathrm{e}}\,. \tag{2}\]
We define the e-POD operator \(R^{\mathrm{e}}:\mathcal{H}^{\mathrm{e}}\to\mathcal{H}^{\mathrm{e}}\) associated with the ensemble \(\mathcal{U}\) by its action on \(\mathbf{\alpha}\in\mathcal{H}^{\mathrm{e}}\),
\[R^{\mathrm{e}}\alpha=\left\langle\left\{(\mathbf{u}_{m},\mathbf{\alpha})_{\mathcal{H }^{\mathrm{e}}}\,\mathbf{u}_{m}\right\}_{m=1}^{M}\right\rangle\,, \tag{3}\]
where \(\langle\cdot\rangle\) denotes the averaging operation. The e-POD operator has orthogonal eigenmodes \(\{\mathbf{\varphi}_{n}\}_{n=1}^{N}\) (e-POD modes) and real and non-negative eigenvalues \(\left\{\lambda_{n}^{\mathrm{e}}\right\}_{n=1}^{N}\) which are assumed to be indexed in descending order,
\[R^{\mathrm{e}}\mathbf{\varphi}_{n}=\lambda_{n}^{\mathrm{e}}\mathbf{\varphi}_{n}\,, \quad(\mathbf{\varphi}_{n},\mathbf{\varphi}_{n^{\prime}})_{\mathcal{H}^{\mathrm{e}}}= \delta_{nn^{\prime}}\,,\quad\lambda_{1}^{\mathrm{e}}\geq\lambda_{2}^{\mathrm{ e}}\geq\ldots\geq\lambda_{N}^{\mathrm{e}}\geq 0\,, \tag{4}\]
where \(\delta_{nn^{\prime}}\) is the Kronecker delta.
The e-POD modes form a complete orthogonal basis for \(\mathcal{U}\), allowing flow realisations to be expanded using uncorrelated coefficients \(\left\{a_{mn}\right\}_{n=1}^{N}\),
\[\mathbf{u}_{m}=\sum_{n=1}^{N}a_{mn}\mathbf{\varphi}_{n}\,,\quad a_{mn}=(\mathbf{\varphi}_{n},\mathbf{u}_{m})_{\mathcal{H}^{\rm e}}\,\quad\left\langle\left\{a_{mn}a_{mn^{\prime}}\right\}_{m=1}^{M} \right\rangle=\lambda_{n}^{\rm e}\delta_{nn^{\prime}}\,. \tag{5}\]
This expansion is optimal with respect to \(\left\|\cdot\right\|_{\mathcal{H}^{\rm e}}\), in the sense that truncating the expansion in (5) to \(\hat{N}\leq N\) terms minimizes the ensemble mean error as measured by \(\left\|\cdot\right\|_{\mathcal{H}^{\rm e}}\), compared to any other \(\hat{N}\)-term expansion. Since the square of this norm is proportional to TKE, the expansion provides the most efficient modal reconstruction of TKE possible. The lowest e-POD modes represent the flow structures carrying the most TKE in the mean, corresponding in general to structures residing on the largest scales where most of the TKE resides.
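In practice, for a finite ensemble the decomposition (3)–(5) is computed discretely; the following Python sketch is our own illustration, assuming uniform quadrature weights so that the discrete version of the inner product (2) reduces to a plain dot product over flattened snapshots, and it obtains the e-POD modes, eigenvalues and coefficients from a singular value decomposition of the snapshot matrix.

```python
import numpy as np

def epod(snapshots):
    """Energy-optimized POD of velocity fluctuation snapshots.

    snapshots: (M, K) array, one flattened snapshot per row (uniform weights assumed).
    Returns (modes, eigvals, coeffs): modes are rows of shape (N, K), N = min(M, K).
    """
    M = snapshots.shape[0]
    # Economy SVD of the weighted snapshot matrix; rows of Vt are the POD modes
    U, S, Vt = np.linalg.svd(snapshots / np.sqrt(M), full_matrices=False)
    eigvals = S**2                      # lambda_n^e of Eq. (4), in descending order
    coeffs = snapshots @ Vt.T           # a_mn = (phi_n, u_m), Eq. (5)
    return Vt, eigvals, coeffs

# Synthetic illustration: M = 200 snapshots with K = 900 degrees of freedom
rng = np.random.default_rng(0)
u = rng.standard_normal((200, 900))
modes, lam, a = epod(u)
# The mean-square coefficients reproduce the eigenvalues (uncorrelated coefficients, Eq. (5))
print(np.allclose((a**2).mean(axis=0), lam))
```

For the weighted inner products used in an actual flow discretisation, the snapshots would first be scaled by the square roots of the quadrature weights.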
### Dissipation optimized POD
Olesen _et al._ (2023) developed an analogous formalism leading to a d-POD basis that spans the corresponding ensemble of strain rate tensors (SRTs) \(\mathcal{S}=\left\{\mathbf{s}_{m}\right\}_{m=1}^{M}\subset\mathcal{H}^{\rm d}\), defined on the domain \(\Omega^{\rm d}\). This SRT basis can then be mapped to a dissipation optimized velocity basis using a spectral inverse SRT operator. The Hilbert space containing \(\mathcal{S}\) is
\[\mathcal{H}^{\rm d}:=\left\{\mathbf{\alpha}:\Omega^{\rm d}\to\mathbb{R}^{3\times 3 }\left|\sum_{i,j=1}^{3}\int_{\Omega^{\rm d}}\left|\alpha^{ij}\right|^{2}\,{\rm d }x<\infty\right\}\, \tag{6}\]
which is equipped with the inner product \((\cdot,\cdot)_{\mathcal{H}^{\rm d}}\) and norm \(\left\|\cdot\right\|_{\mathcal{H}^{\rm d}}\) given by
\[(\mathbf{\alpha},\mathbf{\beta})_{\mathcal{H}^{\rm d}}=\sum_{i,j=1}^{3}\int_{\Omega^ {\rm d}}\alpha^{ij}\beta^{ij}\,{\rm d}x\,,\quad\|\mathbf{\alpha}\|_{\mathcal{H}^{ \rm d}}=\sqrt{(\mathbf{\alpha},\mathbf{\alpha})_{\mathcal{H}^{\rm d}}}\,;\quad\mathbf{ \alpha},\mathbf{\beta}\in\mathcal{H}^{\rm d}\,. \tag{7}\]
The SRT snapshots forming the ensemble \(\mathcal{S}\) are derived from velocity fluctuation snapshots in \(\mathcal{U}\) using the SRT operator \(D:\,\mathcal{H}^{\rm e}\to\mathcal{H}^{\rm d}\),
\[\mathbf{s}_{m}=D\mathbf{u}_{m}\,,\quad(D\mathbf{\alpha})^{ij}=\frac{1}{2}\left(\nabla^{i }\alpha^{j}+\nabla^{j}\alpha^{i}\right)\,,\quad\mathbf{\alpha}\in\mathcal{H}^{\rm e }\,. \tag{8}\]
The d-POD operator \(R^{\rm d}\) and the corresponding eigenvalue problem are then formed in analogy with (3) and (4), replacing \(\mathcal{H}^{\rm e}\) with \(\mathcal{H}^{\rm d}\) and \(\mathcal{U}\) with \(\mathcal{S}\):
\[R^{\rm d}\mathbf{\alpha}=\left\langle\left\{(\mathbf{s}_{m},\mathbf{\alpha} )_{\mathcal{H}^{\rm d}}\,\mathbf{s}_{m}\right\}_{m=1}^{M}\right\rangle\,,\quad\mathbf{ \alpha}\in\mathcal{H}^{\rm d}\,; \tag{9a}\] \[R^{\rm d}\mathbf{\psi}_{n}=\lambda_{n}^{\rm d}\mathbf{\psi}_{n}\,,\quad( \mathbf{\psi}_{n},\mathbf{\psi}_{n^{\prime}})_{\mathcal{H}^{\rm d}}=\delta_{nn^{ \prime}}\,,\quad\lambda_{1}^{\rm d}\geq\lambda_{2}^{\rm d}\geq\ldots\geq \lambda_{N}^{\rm d}\geq 0\,. \tag{9b}\]
The eigenmodes \(\left\{\mathbf{\psi}_{n}\right\}_{n=1}^{N}\subset\mathcal{H}^{\rm d}\) form the d-POD basis. SRTs \(\mathbf{s}_{m}\in\mathcal{S}\) may now be expanded in this basis, again with uncorrelated coefficients \(\left\{b_{mn}\right\}_{n=1}^{N}\),
\[\mathbf{s}_{m}=\sum_{n=1}^{N}b_{mn}\mathbf{\psi}_{n}\,,\quad b_{mn}=(\mathbf{\psi}_{n},s_{ m})_{\mathcal{H}^{\rm d}}\,\quad\left\langle\left\{b_{mn}b_{mn^{\prime}}\right\}_{m=1}^{M}\right\rangle= \lambda_{n}^{\rm d}\delta_{nn^{\prime}}\,. \tag{10}\]
This expansion is optimal with respect to \(\left\|\cdot\right\|_{\mathcal{H}^{\rm d}}\) in a sense analogous to that discussed above for the e-POD. Since the square of this norm is proportional to the mean dissipation rate, the resulting basis gives a reconstruction of the SRT which is optimal with respect to the dissipation rate. While dissipation is associated with small scales in the flow, structures associated with d-POD modes have been found to span a range of scales throughout the d-POD spectrum, as discussed by Olesen _et al._ (2023).
Olesen _et al._ (2023) formulated a spectral inverse SRT operator \(D^{-1}:\mathcal{H}^{\rm d}\to\mathcal{H}^{\rm e}\), using the one-to-one correspondence given by (8) between fluctuation velocity snapshots \(\mathbf{u}_{m}\in\mathcal{U}\) and SRT snapshots \(\mathbf{s}_{m}\in\mathcal{S}\). We have for any \(\mathbf{\psi}_{n}\) with \(\lambda_{n}^{\rm d}\neq 0\)
\[D^{-1}\mathbf{\psi}_{n}=\frac{1}{\lambda_{n}^{\rm d}}\left\langle\left\{(\mathbf{s}_{m},\mathbf{\psi}_{n})_{\mathcal{H}^{\rm d}}\,\mathbf{u}_{m}\right\}_{m=1}^{M}\right\rangle. \tag{11}\]
This operation produces a velocity field corresponding to each d-POD mode with \(\lambda_{n}^{\rm d}\neq 0\), and the resulting set of d-POD velocity fields \(\left\{D^{-1}\mathbf{\psi}_{n}\right\}_{\lambda_{n}^{\rm d}\neq 0}\subset\mathcal{H}^{\rm e}\) forms a complete basis for \(\mathcal{U}\). The optimality with respect to dissipation is inherited by this basis, meaning that any velocity-derived term can be expanded in a dissipation-optimized manner.
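The same machinery carries over to the dissipation-based construction; the sketch below is again our own illustration, with uniform weights and with the SRT snapshots assumed to be available alongside the velocity snapshots. It computes the d-POD modes of Eq. (9) and maps each mode with non-zero eigenvalue to a velocity field through the discrete analogue of Eq. (11).

```python
import numpy as np

def dpod_velocity_basis(u_snapshots, s_snapshots, tol=1e-12):
    """d-POD of SRT snapshots plus the velocity modes D^{-1} psi_n of Eq. (11).

    u_snapshots: (M, Ku) velocity fluctuation snapshots.
    s_snapshots: (M, Ks) strain rate tensor snapshots obtained from them via Eq. (8).
    Uniform quadrature weights are assumed for both inner products.
    """
    M = s_snapshots.shape[0]
    _, S, Vt = np.linalg.svd(s_snapshots / np.sqrt(M), full_matrices=False)
    lam_d = S**2                                  # d-POD eigenvalues, Eq. (9b)
    keep = lam_d > tol                            # retain only modes with non-zero eigenvalue
    psi, lam_d = Vt[keep], lam_d[keep]
    # Eq. (11): D^{-1} psi_n = (1 / lambda_n^d) < (s_m, psi_n) u_m >
    b = s_snapshots @ psi.T                       # (s_m, psi_n), shape (M, N_d)
    velocity_modes = (b.T @ u_snapshots) / (M * lam_d[:, None])
    return psi, lam_d, velocity_modes

# Synthetic illustration (in an application the SRTs would be computed with Eq. (8))
rng = np.random.default_rng(1)
u = rng.standard_normal((150, 600))
s = rng.standard_normal((150, 1200))
psi, lam_d, vmodes = dpod_velocity_basis(u, s)
print(psi.shape, vmodes.shape)
```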
## 3 Combined POD bases
The formalism laid out in Section 2 results in two distinct velocity bases for \(\mathcal{U}\). The e-POD basis is optimal with respect to TKE, and provides an efficient representation of large-scale turbulent structures in the flow. It possesses all of the attractive properties associated with POD, including modal orthogonality and uncorrelated coefficients (5). For a channel flow, the e-POD basis is particularly efficient for reconstructing flow features located in the TKE-rich bulk region, while it is less efficient for features in the near-wall region (Olesen _et al._, 2023). The d-POD velocity basis, on the other hand, gives a dissipation rate optimized reconstruction. It reconstructs near-wall features in a turbulent channel flow more efficiently than does e-POD, including the TKE density in this region. Like the case of e-POD, the d-POD velocity basis is characterized by uncorrelated expansion coefficients, but it is in general not orthogonal with respect to \((\cdot,\cdot)_{\mathcal{H}}\).
In this section we present a method for combining the two velocity bases so as to balance the representation of energetic and dissipative flow features. The idea is to project the flow onto the lowest \(N^{\prime}\) e-POD modes, perform a complementary d-POD on the unresolved part of the flow, and map the d-POD modes to velocity modes using (11). A complete basis for the velocity data set is formed by combining the \(N^{\prime}\) e-POD modes with the complementary d-POD velocity basis.
First, e-POD is performed on \(\mathcal{U}\) as shown in Section 2.1. We then project \(\mathcal{U}\) into the subspace orthogonal to the first \(N^{\prime}\leq N\) e-POD modes, resulting in the projected data set \(\mathcal{U}^{\perp N^{\prime}}=\left\{\mathbf{u}_{m}^{\perp N^{\prime}}\right\}_{ m=1}^{M}\) given by
\[\mathbf{u}_{m}^{\perp N^{\prime}}=\mathbf{u}_{m}-\sum_{n=1}^{N^{\prime}}a_{mn}\mathbf{ \varphi}_{n}\,. \tag{12}\]
This operation represents a projection into a space of lower dimension, and thus a reduction of the rank of the data set by \(N^{\prime}\). SRTs are computed from \(\mathcal{U}^{\perp N^{\prime}}\) using (8) to produce \(\mathcal{S}^{\perp N^{\prime}}=\left\{\mathbf{s}_{m}^{\perp N^{\prime}}\right\}_{m=1}^{M}\). We apply d-POD as described in Section 2.2 to this data set, resulting in complementary d-POD modes \(\{\mathbf{\psi}_{n}^{\perp N^{\prime}}\}_{n=1}^{N}\) and eigenvalues \(\{\lambda_{n}^{\mathrm{d},\perp N^{\prime}}\}_{n=1}^{N}\). Due to the reduction of rank from the projection (12), \(\lambda_{n}^{\mathrm{d},\perp N^{\prime}}=0\) for \(n>N-N^{\prime}\). The complementary modes are converted to velocity fields using (11), yielding a basis \(\{D^{-1}\mathbf{\psi}_{n}^{\perp N^{\prime}}\}_{\lambda_{n}^{\perp N^{\prime}}\neq 0}\) for \(\mathcal{U}^{\perp N^{\prime}}\),
\[\mathbf{u}_{m}^{\perp N^{\prime}}=\sum_{n\mid\lambda_{n}^{\perp N^{\prime}}\neq 0}b_{mn}^{\perp N^{\prime}}D^{-1}\mathbf{\psi}_{n}^{\perp N^{\prime}}\,,\quad b_{mn}^{\perp N^{\prime}}=\left(\mathbf{\psi}_{n}^{\perp N^{\prime}},\mathbf{s}_{m}^{\perp N^{\prime}}\right)_{\mathcal{H}^{\rm d}}\,. \tag{13}\]
Combining the complementary d-POD velocity basis with the e-POD modes subtracted in (12) produces a complete basis \(\mathcal{B}=\left\{\mathbf{\varphi}_{n}\right\}_{n=1}^{N^{\prime}}\cup\left\{D^{-1}\mathbf{\psi}_{n}^{\perp N^{\prime}}\right\}_{\lambda_{n}^{\perp N^{\prime}}\neq 0}\) for \(\mathcal{U}\), the coefficients of which can be shown to be uncorrelated:
\[\mathbf{u}_{m}=\sum_{n=1}^{N^{\prime}}a_{mn}\mathbf{\varphi}_{n}+\sum_{n\mid\lambda_{n}^{\perp N^{\prime}}\neq 0}b_{mn}^{\perp N^{\prime}}D^{-1}\mathbf{\psi}_{n}^{\perp N^{\prime}}\,;\quad\left\langle\left\{a_{mn}b_{mn}^{\perp N^{\prime}}\right\}_{m=1}^{M}\right\rangle=0\,,\quad n\leq N^{\prime}\,. \tag{14}\]
The uncorrelatedness of coefficients implies that cross terms vanish in the expansion of second-order mean quantities.
The formalism presented here provides a gradual transition between full e-POD (\(N^{\prime}=N\)) and full d-POD bases (\(N^{\prime}=0\)). While the method is in principle straightforward, it hinges entirely on the use of (11) to map SRT modes to velocity modes. By adapting (11) the method can be generalized, such that any combination of decompositions may in principle be smoothly linked using the ideas presented in this section.
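A compact sketch of the combination procedure of Eqs. (12)–(14), again under the uniform-weight assumption and with the discrete SRT operator of Eq. (8) represented by a matrix `D` (here random, purely for illustration): project out the leading \(N^{\prime}\) e-POD modes, take the SRTs of the residual, perform the complementary d-POD, and map its modes to velocity fields via Eq. (11).

```python
import numpy as np

def combined_pod(u_snapshots, D, n_prime, tol=1e-10):
    """Combined e-POD / complementary d-POD basis following Eqs. (12)-(14)."""
    M = u_snapshots.shape[0]

    # e-POD of the full ensemble (SVD form of the method of snapshots)
    _, S_e, Vt_e = np.linalg.svd(u_snapshots / np.sqrt(M), full_matrices=False)
    phi = Vt_e[:n_prime]                     # leading e-POD modes
    a = u_snapshots @ phi.T                  # e-POD coefficients, Eq. (5)

    # Eq. (12): component orthogonal to the first n_prime e-POD modes
    u_perp = u_snapshots - a @ phi

    # Complementary d-POD of the residual SRTs
    s_perp = u_perp @ D.T                    # Eq. (8) applied to the residual
    _, S_d, Vt_d = np.linalg.svd(s_perp / np.sqrt(M), full_matrices=False)
    lam_d = S_d**2
    keep = lam_d > tol * lam_d[0]            # keep non-zero eigenvalues only
    psi_perp, lam_d = Vt_d[keep], lam_d[keep]

    # Eq. (11) applied to the complementary modes gives the velocity basis of Eq. (13)
    b = s_perp @ psi_perp.T
    dvel = (b.T @ u_perp) / (M * lam_d[:, None])
    return phi, S_e[:n_prime]**2, dvel, lam_d

rng = np.random.default_rng(2)
u = rng.standard_normal((100, 300))          # synthetic snapshots
D = rng.standard_normal((450, 300))          # stand-in for the discrete SRT operator
phi, lam_e, dvel, lam_d = combined_pod(u, D, n_prime=10)
print(phi.shape, dvel.shape, len(lam_e) + len(lam_d))   # N' + (rank - N') modes in total
```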
## 4 Basic POD results
In Sections 4 and 5 we apply the combined POD to the turbulent channel data set described in Olesen _et al._ (2023). The data set consists of \(N=1078\) velocity fluctuation snapshots of a channel cross section obtained from a direct numerical simulation. The rank of the data set is \(1077\), since the subtraction of the mean field reduces the rank by one; thus, all but one eigenvalue is non-zero. For further details on the simulation and the data set we refer to the above referenced paper.
### POD spectra
The e-POD spectrum is shown in figure 1\(a\), and the d-POD spectra resulting from applying the procedure to the data set with \(N^{\prime}\in\{0,10,50,100,200\}\) are shown in figure 1\(b\). The latter spectra are shown with indices shifted by \(N^{\prime}\) to align the spectra at high mode numbers. Compared to the base d-POD spectrum (\(N^{\prime}=0\)) each of the complementary d-POD spectra (\(N^{\prime}>0\)) is lifted for small \(n\) before collapsing with increasing \(n\), indicating low-dimensional dissipative structures unresolved by the e-POD sub-basis being effectively resolved by relatively few complementary d-POD modes.
### Non-orthogonality of complementary d-POD velocity modes
When building a ROM from a non-orthogonal basis, the projected system of equations is complicated by the inclusion of cross-terms. It is therefore of interest to minimize the degree of non-orthogonality when possible. While e-POD modes are orthogonal with respect to the inner product \((\cdot,\cdot)_{\mathcal{H}^{e}}\), and d-POD modes are orthogonal with respect to \((\cdot,\cdot)_{\mathcal{H}^{d}}\), d-POD velocity modes are generally not orthogonal to each other with respect to \((\cdot,\cdot)_{\mathcal{H}^{e}}\). Here we quantify the degree of non-orthogonality. We define the normalized velocity mode overlap \(e_{nn^{\prime}}^{\perp N^{\prime}}\) as
\[e_{nn^{\prime}}^{\perp N^{\prime}}=\frac{\left|\left(D^{-1}\mathbf{\psi}_{n}^{ \perp N^{\prime}},D^{-1}\mathbf{\psi}_{n^{\prime}}^{\perp N^{\prime}}\right)_{ \mathcal{H}^{e}}\right|}{\left\|D^{-1}\mathbf{\psi}_{n}^{\perp N^{\prime}}\right| _{\mathcal{H}^{e}}\left\|D^{-1}\mathbf{\psi}_{n^{\prime}}^{\perp N^{\prime}} \right\|_{\mathcal{H}^{e}}}\,. \tag{15}\]
Table 1 summarizes the maximum and mean off-diagonal overlaps for pairs formed from the lowest 500 d-POD velocity modes for each \(N^{\prime}\in\{0,10,50,100,200\}\). The maximum magnitude of overlaps decreases when increasing \(N^{\prime}\) up to 100, and increases slightly at \(N^{\prime}=200\). Meanwhile, the mean magnitude decreases uniformly across the values of \(N^{\prime}\) considered. The overall trend is thus that larger e-POD subspace dimensionality leads to smaller overlaps among complementary d-POD velocity modes.
For an expansion including a given number of modes of a combined decomposition, the degree of non-orthogonality compared to the expansion with the same number of full d-POD modes is ameliorated on two accounts. First, e-POD modes are orthogonal to each other as well as to complementary d-POD modes, providing orthogonality in the e-POD diagonal block as well as the off-diagonal blocks of the overlap matrix. Second, as shown here, the magnitudes of the overlaps in the remaining complementary d-POD diagonal block are smaller. Both effects are generally enhanced when increasing the e-POD subspace dimensionality, but must be balanced against the advantages of mixing the bases, which are shown in Section 5.
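As a sketch of how the summary statistics in Table 1 can be computed, the snippet below evaluates the normalized overlap of (15) for the leading velocity modes. A plain Euclidean inner product is used as a stand-in for \((\cdot,\cdot)_{\mathcal{H}^{e}}\), and the function name and calling convention are illustrative assumptions.

```python
import numpy as np

def overlap_stats(Psi_vel, n_modes=500):
    """Normalized overlaps e_{nn'} between d-POD velocity modes (cf. eq. (15)).

    Psi_vel : (d, K) matrix whose columns are the (complementary) d-POD
              velocity modes D^{-1} psi_n.
    Returns the maximum and mean off-diagonal overlap magnitude.
    """
    V = Psi_vel[:, :n_modes]
    norms = np.linalg.norm(V, axis=0)
    E = np.abs(V.T @ V) / np.outer(norms, norms)   # |(v_n, v_n')| / (||v_n|| ||v_n'||)
    off = ~np.eye(E.shape[0], dtype=bool)          # mask out the unit diagonal
    return E[off].max(), E[off].mean()
```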
\begin{table}
\begin{tabular}{c|c c c c c} \(N^{\prime}\) & 0 & 10 & 50 & 100 & 200 \\ \hline \(\max_{n\neq n^{\prime}}e_{nn^{\prime}}^{\perp N^{\prime}}\) & 23.8 & 13 & 9.1 & 7.8 & 8.3 \\ \(\left\langle\left\{e_{nn^{\prime}}^{\perp N^{\prime}}\right\}_{n\neq n^{\prime}}\right\rangle\) & 3.3 & 2.3 & 1.7 & 1.4 & 1.2 \\ \end{tabular}
\end{table}
Table 1: Maximum and mean off-diagonal mode overlap magnitudes \(e_{nn^{\prime}}^{\perp N^{\prime}}\) computed from pairs formed by the lowest 500 full (\(N^{\prime}=0\)) or complementary (\(N^{\prime}>0\)) d-POD modes.
Figure 1: (\(a\)): e-POD spectrum normalized by \(\sum_{n}\lambda_{n}^{e}\). (\(b\)): Complementary d-POD spectra using e-POD subspace dimensions \(N^{\prime}\in\{0,10,50,100,200\}\), with \(N^{\prime}=0\) corresponding to the full d-POD spectrum. Each spectrum is shifted horizontally by \(N^{\prime}\) to align the far ends, and normalized by \(\sum_{n}\lambda_{n}^{d,\perp 0}\).
## 5 Reconstruction of TKE, TKE production, and dissipation rate
In this section we investigate the convergence of mean TKE \(\langle T\rangle\), TKE production \(\langle\mathcal{P}\rangle\), and dissipation rate \(\langle\varepsilon\rangle\) when expanded in the combined basis. Applying the expansion in (14) leads to the following expressions, where superscripts (1) and (2) denote the streamwise and transverse directions in the channel, respectively, and \(\nabla^{(2)}U^{(1)}\) is the transverse gradient of the streamwise mean velocity:
\[\left\langle T\right\rangle=\frac{1}{2}\left\langle\left\{\left|\mathbf{u}_{m}\right|^{2}\right\}_{m=1}^{M}\right\rangle=\frac{1}{2}\left(\sum_{n=1}^{N^{\prime}}\lambda_{n}^{e}\left|\mathbf{\varphi}_{n}\right|^{2}+\sum_{n^{\prime}\mid\lambda_{n^{\prime}}^{d,\perp N^{\prime}}\neq 0}\lambda_{n^{\prime}}^{d,\perp N^{\prime}}\left|D^{-1}\mathbf{\psi}_{n^{\prime}}^{\perp N^{\prime}}\right|^{2}\right), \tag{16a}\]
\[\left\langle\mathcal{P}\right\rangle=-\left\langle\left\{\mathbf{u}_{m}^{(1)}\mathbf{u}_{m}^{(2)}\right\}_{m=1}^{M}\right\rangle\nabla^{(2)}U^{(1)}=-\left(\sum_{n=1}^{N^{\prime}}\lambda_{n}^{e}\varphi_{n}^{(1)}\varphi_{n}^{(2)}+\sum_{n^{\prime}\mid\lambda_{n^{\prime}}^{d,\perp N^{\prime}}\neq 0}\lambda_{n^{\prime}}^{d,\perp N^{\prime}}\left(D^{-1}\mathbf{\psi}_{n^{\prime}}^{\perp N^{\prime}}\right)^{(1)}\left(D^{-1}\mathbf{\psi}_{n^{\prime}}^{\perp N^{\prime}}\right)^{(2)}\right)\nabla^{(2)}U^{(1)}\,, \tag{16b}\]
\[\left\langle\varepsilon\right\rangle=2\nu\left\langle\left\{\left|D\mathbf{u}_{m}\right|^{2}\right\}_{m=1}^{M}\right\rangle=2\nu\left(\sum_{n=1}^{N^{\prime}}\lambda_{n}^{e}\left|D\mathbf{\varphi}_{n}\right|^{2}+\sum_{n^{\prime}=1}^{N-N^{\prime}}\lambda_{n^{\prime}}^{d,\perp N^{\prime}}\left|\mathbf{\psi}_{n^{\prime}}^{\perp N^{\prime}}\right|^{2}\right)\,. \tag{16c}\]
Due to the geometry of the channel flow only one component enters the mean TKE production.
Reconstructions including \(\hat{N}\leq N^{\prime}\) modes are built from the first \(\hat{N}\) e-POD modes, and are identical to pure e-POD reconstructions, while those including \(N^{\prime}<\hat{N}\leq N\) include \(N^{\prime}\) e-POD modes and the first \(\hat{N}-N^{\prime}\) complementary d-POD velocity modes.
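A minimal sketch of such a truncated reconstruction is given below for the mean TKE of (16a). It assumes the modes and eigenvalues produced by the earlier sketch and takes the pointwise squared magnitudes per stacked degree of freedom, so it should be read as an outline under those assumptions rather than the authors' post-processing code.

```python
import numpy as np

def mean_tke_profile(Phi, Psi_vel, lam_e, lam_d, n_hat):
    """Truncated reconstruction of the mean TKE (cf. eq. (16a)).

    The first min(n_hat, N') terms come from e-POD modes; the remaining
    n_hat - N' terms from complementary d-POD velocity modes.
    """
    n_e = min(n_hat, Phi.shape[1])
    n_d = min(max(n_hat - Phi.shape[1], 0), Psi_vel.shape[1])
    tke = 0.5 * (np.abs(Phi[:, :n_e]) ** 2 @ lam_e[:n_e])
    if n_d > 0:
        tke += 0.5 * (np.abs(Psi_vel[:, :n_d]) ** 2 @ lam_d[:n_d])
    return tke   # one value per spatial degree of freedom
```

Summing (or integrating with the appropriate quadrature weights) the returned profile and normalizing by the fully converged value reproduces the convergence curves discussed in Section 5.1.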
### Convergence of integrated quantities
We reconstruct TKE using (16a) with \(N^{\prime}\in\{0,10,50,100,200,N\}\). Figure 2\(a\) shows the convergence of the integrated mean TKE profile \(\left\langle T\right\rangle_{\hat{N}}^{\perp N^{\prime}}\), normalized to the full mean TKE. We achieve the most efficient TKE reconstruction with the full e-POD (\(N^{\prime}=N\)), which by construction is the optimal basis for this purpose, while the full d-POD reconstruction (\(N^{\prime}=0\)) is least effective among those considered. We define the convergence lead over the full d-POD reconstruction for each of the remaining reconstructions as
\[\Delta\left\langle T\right\rangle_{\hat{N}}^{\perp N^{\prime}}=\left\langle T \right\rangle_{\hat{N}}^{\perp N^{\prime}}-\left\langle T\right\rangle_{\hat{ N}}^{\perp 0}\,,\quad N^{\prime}\in\{10,50,100,200,N\}\, \tag{17}\]
which is shown in figure 2\(d\). The combined bases (\(0<N^{\prime}<N\)) each follow the e-POD convergence for \(\hat{N}\leq N^{\prime}\), at which point they start to approach the d-POD curve, with the reconstructed TKE fraction remaining between that of e-POD and d-POD until \(\hat{N}=N\).
For the mean TKE production \(\left\langle\mathcal{P}\right\rangle_{\hat{N}}^{\perp N^{\prime}}\), shown in figure 2\(b\), there is little difference between the convergence using full e-POD and full d-POD, although the latter is consistently the least efficient. However, in this case, the combined basis reconstruction convergence curves are not confined to the region between the full e-POD and d-POD curves. Figure 2\(e\) shows \(\Delta\left\langle\mathcal{P}\right\rangle_{\hat{N}}^{\perp N^{\prime}}\), the production convergence lead over full d-POD defined similarly to \(\Delta\left\langle T\right\rangle_{\hat{N}}^{\perp N^{\prime}}\) in (17). The combined bases each follow e-POD convergence up to \(\hat{N}=N^{\prime}\) as before, after which they overtake the e-POD convergence. The convergence of \(\left\langle\mathcal{P}\right\rangle_{\hat{N}}^{\perp 10}\) falls behind that of \(\left\langle\mathcal{P}\right\rangle_{\hat{N}}^{\perp N}\) around \(\hat{N}\approx 230\), while the remaining ones maintain their leads up to the full reconstruction at \(\hat{N}=N\). The maximum leads over full d-POD convergence are achieved with \(N^{\prime}=50\) (at \(\hat{N}\approx 140\)) and \(N^{\prime}=100\) (at \(\hat{N}\approx 220\)). While the leads are minor in absolute terms (at least in the present flow) it is nevertheless interesting to note that combining e-POD and d-POD permits a more efficient reconstruction of TKE production than either of these decompositions alone.
The mean dissipation rate reconstruction, \(\left\langle\varepsilon\right\rangle_{\hat{N}}^{\perp N^{\prime}}\), is shown in figure 2\(c\). The optimal convergence is achieved with full d-POD (which is optimized for this task), while full e-POD is the least optimal among those considered. Again, the differences are modest; figure 2\(f\) shows the dissipation convergence lead over e-POD, \(\Delta\left\langle\varepsilon\right\rangle_{\hat{N}}^{\perp N^{\prime}}\), again defined similarly to \(\Delta\left\langle T\right\rangle_{\hat{N}}^{\perp N^{\prime}}\) in (17), for \(N^{\prime}\in\{0,10,50,100,200\}\). Again, the convergence of each combined basis follows that of e-POD up to \(\hat{N}=N^{\prime}\), and it remains below that of d-POD until the full reconstruction is achieved. The combined-basis convergence leads over full e-POD decrease uniformly with increasing \(N^{\prime}\), while the number of modes at which the maximum lead occurs increases with \(N^{\prime}\). The behaviours observed in figures 2\(d\) and 2\(f\) demonstrate that the combined bases bridge the gap between full e-POD and d-POD, while figure 2\(e\) shows that they provide improved convergence for TKE production.
### Convergence of reconstructed profiles
We reconstruct profiles of mean TKE, TKE production and dissipation rate using (16) for full e-POD, full d-POD, and for the combined basis with \(N^{\prime}=50\). These reconstructions are shown in figure 3. As shown by Olesen _et al._ (2023), the full e-POD emphasizes bulk structures in the turbulent channel flow, while the full d-POD instead emphasizes near-wall structures. This is reflected in the different spatial distributions seen for low values of \(\hat{N}\) in figures 3\(a\) and 3\(g\) for TKE, and in Figures 3\(c\) and 3\(i\) for dissipation rate.
The mean TKE profile reconstructed using \(N^{\prime}=50\), shown in figure 3\(d\), combines the different spatial emphases of e-POD and d-POD. The \(\hat{N}=100\) profile reconstruction, including \(N^{\prime}=50\) e-POD modes and \(\hat{N}-N^{\prime}=50\) complementary d-POD modes, includes a significant portion of both bulk TKE (compared to the corresponding d-POD profile) and near-wall TKE (compared to the e-POD profile).
The different spatial emphases are also seen for the production profiles in figures 3\(b\) and 3\(h\), where the e-POD production profile for \(\hat{N}=100\) exhibits a thick tail extending into the bulk, whereas the corresponding d-POD profile captures a comparatively large part of the peak at \(y^{\star}\approx 10\). These features are again combined for \(N^{\prime}=50\), for which the \(\hat{N}=100\) profile captures both. Since much of the production is localized at the transition between the dissipative near-wall region and the TKE-rich bulk, it benefits from combining the basic decompositions. This also explains the enhanced convergence of integrated TKE production using combined bases found in Section 5.1.
The mean dissipation rate is strongly localized in the near-wall region. Compared to the full d-POD reconstruction (figure 3\(i\)) this limits the advantage gained by enhancing the representation of the bulk region through inclusion of e-POD modes in the combined profile reconstruction (figure 3\(f\)).
## 6 Conclusion
We have presented a method for combining different POD bases into a single complete basis, allowing for combined optimization of the overall basis which is controllable through a single parameter. We show that the combined basis reduces the magnitude of modal overlaps between complementary dissipation rate optimized velocity modes, potentially reducing the importance of cross terms in modally projected equations.
We reconstruct mean profiles of TKE, TKE production, and dissipation rate using combined TKE and dissipation rate optimized PODs for different values of the weighting parameter. The full decompositions optimized for TKE and dissipation rate show the fastest convergence for their respective optimized quantities by construction, but using combined bases for reconstructing mean TKE production yields faster convergence globally than using either of the full POD bases. This is due to balancing of different spatial emphases of the respective full POD bases.
Figure 2: Convergence of integrated mean TKE (\(a\)), TKE production (\(b\)), and dissipation rate (\(c\)) using \(N^{\prime}\in\{0,10,50,100,200,N\}\); each is normalized by their fully converged value. Convergence lead over full d-POD for integrated mean TKE (\(d\)) and TKE production (\(e\)), and over full e-POD for integrated mean dissipation rate (\(f\)).
Combining e-POD and d-POD in this manner provides, in principle, a means to fine-tune the balance between the reconstruction of different quantities. The observed effects are likely to depend strongly on the investigated flow. In higher-Reynolds-number or less homogeneous flows we expect the quantities to be characterized by more well-separated scales and distinct structures, leading to a larger gap between the performance of e-POD and d-POD as well as a greater potential advantage from combining the two.
These features suggest that the method may lead to improved ROM performance compared to existing POD bases. It is of interest to test its capability in a full ROM, as well as its behaviour for different flows and with different combinations of POD bases.
**Acknowledgements.** The authors gratefully acknowledge the computational and data resources provided on the Sophia HPC Cluster at the Technical University of Denmark, DOI: 10.57940/FAFC-6M81.
**Funding.** PJO acknowledges financial support from the Poul Due Jensen Foundation: Financial support from the Poul Due Jensen Foundation (Grundfos Foundation) for this research is gratefully acknowledged. AH and CMV acknowledge financial support from the European Research Council: This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 803419).
**Declaration of interests.** The authors report no conflict of interest.
**Author ORCIDs.** P.J. Olesen: [https://orcid.org/0000-0003-3444-493X](https://orcid.org/0000-0003-3444-493X); A. Hodzic: [https://orcid.org/0000-0003-1307-5290](https://orcid.org/0000-0003-1307-5290); C.M. Velte: [https://orcid.org/0000-0002-8657-0383](https://orcid.org/0000-0002-8657-0383).
Figure 3: Modal reconstruction of TKE (left column), TKE production (middle column), and dissipation rate (right column), using full e-POD (\(a\)–\(c\)), the combined basis with \(N^{\prime}=50\) (\(d\)–\(f\)), and full d-POD (\(g\)–\(i\)). All quantities are normalized to dimensionless form using kinematic viscosity \(\nu\) and friction velocity \(u_{\tau}\). The colour of each profile indicates the number of modes \(\hat{N}\) entering the reconstruction, and dashed lines mark the profile for each additional 50 modes included. |
2306.02050 | Provable Dynamic Fusion for Low-Quality Multimodal Data | The inherent challenge of multimodal fusion is to precisely capture the
cross-modal correlation and flexibly conduct cross-modal interaction. To fully
release the value of each modality and mitigate the influence of low-quality
multimodal data, dynamic multimodal fusion emerges as a promising learning
paradigm. Despite its widespread use, theoretical justifications in this field
are still notably lacking. Can we design a provably robust multimodal fusion
method? This paper provides theoretical understandings to answer this question
under a most popular multimodal fusion framework from the generalization
perspective. We proceed to reveal that several uncertainty estimation solutions
are naturally available to achieve robust multimodal fusion. Then a novel
multimodal fusion framework termed Quality-aware Multimodal Fusion (QMF) is
proposed, which can improve the performance in terms of classification accuracy
and model robustness. Extensive experimental results on multiple benchmarks can
support our findings. | Qingyang Zhang, Haitao Wu, Changqing Zhang, Qinghua Hu, Huazhu Fu, Joey Tianyi Zhou, Xi Peng | 2023-06-03T08:32:35Z | http://arxiv.org/abs/2306.02050v2 | # Provable Dynamic Fusion for Low-Quality Multimodal Data
###### Abstract
The inherent challenge of multimodal fusion is to precisely capture the cross-modal correlation and flexibly conduct cross-modal interaction. To fully release the value of each modality and mitigate the influence of low-quality multimodal data, dynamic multimodal fusion emerges as a promising learning paradigm. Despite its widespread use, theoretical justifications in this field are still notably lacking. _Can we design a provably robust multimodal fusion method?_ This paper provides theoretical understandings to answer this question under a most popular multimodal fusion framework from the generalization perspective. We proceed to reveal that several uncertainty estimation solutions are naturally available to achieve robust multimodal fusion. Then a novel multimodal fusion framework termed Quality-aware Multimodal Fusion (QMF) is proposed, which can improve the performance in terms of classification accuracy and model robustness. Extensive experimental results on multiple benchmarks can support our findings.
2021b; Wang et al., 2020a), the framework we study is also abstracted from decision-level multimodal fusion, which is one of the most fundamental research topics in multimodal learning (Baltrusaitis et al., 2018). In particular, we devise a novel Quality-aware Multimodal Fusion (**QMF**) framework for multimodal learning. Key to our framework, we leverage energy-based uncertainty to characterize the quality of each modality. Our contributions can be summarised as follows:
* This paper provides a rigorous theoretical framework to understand the advantage and criterion of robust multimodal fusion, as shown in Figure 2. Firstly, we characterize the generalization error bound of decision-level multimodal fusion methods from a Rademacher complexity perspective. Then, we identify under what conditions dynamic fusion outperforms static fusion, i.e., when the fusion weights are negatively correlated with the unimodal generalization errors, dynamic fusion methods provably outperform static ones.
* Under the theoretical analysis, we proceed to reveal that the generalization ability of dynamic fusion coincides with the performance of uncertainty estimation. This directly implies a principle to design and evaluate new dynamic fusion algorithms.
* Directly motivated by the above analysis, we propose a novel dynamic multimodal fusion method termed Quality-aware Multimodal Fusion (**QMF**), which serves as a realization for provably better generalization ability. As shown in Figure 1, extensive experiments on commonly used benchmarks are carried out to empirically validate the theoretical observations.
## 2 Related works
### Multimodal Fusion
Multimodal fusion is one of the most original and fundamental topics in multimodal learning, which typically aims to integrate modality-wise features into a joint representation for downstream multimodal learning tasks. Multimodal fusion can be classified into early fusion, intermediate fusion and late fusion. Although studies in neuroscience and machine learning suggest that intermediate fusion could benefit representation learning (Schroeder and Foxe, 2005; Macaluso, 2006), late fusion is still the most widely used method for multimodal learning due to its interpretability and practical simplicity. By introducing modality-level dynamics based on various strategies, dynamic fusion practically improves overall performance. As a concrete example, the previous work (Guan et al., 2019) proposes a dynamic weighting mechanism to depict illumination conditions of scenes. By introducing dynamics, they can integrate reliable cues from multi-spectral data for around-the-clock applications (e.g., pedestrian detection in security surveillance and autonomous driving). Combined with additional dynamic mechanisms (e.g., a simple weighting strategy or Dempster-Shafer Evidence Theory (Shafer, 1976)), recent uncertainty-based multimodal fusion methods show remarkable advantages in various tasks, including clustering (Geng et al., 2021), classification (Han et al., 2021, 2022b; Tellamekala et al., 2022; Subedar et al., 2019; Chen et al., 2022a), regression (Ma et al., 2021), object detection (Zhang et al., 2019; Li et al., 2022b) and semantic segmentation (Tian et al., 2020; Chang et al., 2022).
### Uncertainty Estimation
Multimodal machine learning has achieved great success in various real-world applications. However, the reliability of current fusion methods is still notably underexplored, which limits their application in safety-critical fields (e.g., financial risk, medical diagnosis). The motivation of uncertainty estimation is to indicate whether the predictions given by machine learning models are prone to be wrong. Many uncertainty estimation methods have been proposed in the past decades, including Bayesian neural networks (BNNs) (Denker and LeCun, 1990; Mackay, 1992; Neal, 2012) and their variants (Gal and Ghahramani, 2016; Han et al., 2022), deep ensembles (Lakshminarayanan et al., 2017; Havasi et al., 2021), predictive confidence (Hendrycks and Gimpel, 2017), Dempster-Shafer theory (Han et al., 2021) and the energy score (Liu et al., 2020). Predictive confidence expects the predicted class probability to be consistent with the empirical accuracy, and is usually adopted in classification tasks. Dempster-Shafer theory (DST) is a generalization of Bayesian theory to subjective probabilities and a general framework for modeling epistemic uncertainty. The energy score emerges as a promising way to capture Out-of-Distribution (OOD) uncertainty, which arises when a machine learning model encounters an input that differs from its training data, and thus the output from the model is unreliable. Many recent studies have investigated the issue of OOD uncertainty (Ming et al., 2022; Chen et al., 2021; Meinke and Hein, 2019; Hendrycks et al., 2019). In this paper, we investigate predictive confidence, the Dempster-Shafer theory and the energy score due to their theoretical interpretability and effectiveness.
Figure 1: Visualization of the accuracy gap between multimodal learning methods (e.g., late fusion, align-based fusion, MMTM) and single-modal learning methods using the best modality on noisy multimodal data. Note that the performance of existing multimodal fusion methods degrades significantly compared with their best unimodal counterparts in the high-noise regime, while the proposed QMF consistently outperforms unimodal methods on low-quality data.
## 3 Theory
In this section, we first clarify the basic notations and the formal definition of multimodal fusion used in Section 3.1. Then we provide main theoretical results in Section 3.2 to rigorously demonstrate when and how dynamic fusion methods work from the perspective of generalization ability (Bartlett and Mendelson, 2002). Due to space constraints, we defer the full details to Appendix A and only present a brief summary of the proofs.
### Preliminaries
We begin by introducing the necessary notation for our theoretical framework. Consider a learning task on the data \((x,y)\in\mathcal{X}\times\mathcal{Y}\), where \(x=\{x^{(1)},\cdots,x^{(M)}\}\) has \(M\) modalities and \(y\in\mathcal{Y}\) denotes the data label. The multimodal training data is defined as \(D_{\text{train}}=\{x_{i},y_{i}\}_{i=1}^{N}\). Specifically, we use \(\mathcal{X}\), \(\mathcal{Y}\) and \(\mathcal{Z}\) to denote the input space, target space and latent space. Similar to the previous work in multimodal learning theory (Huang et al., 2021), we define \(h:\mathcal{X}\mapsto\mathcal{Z}\) as a multimodal fusion mapping from the input space to the latent space, and \(g:\mathcal{Z}\mapsto\mathcal{Y}\) as a task mapping. Our goal is to learn a reliable multimodal model \(f=g\circ h(x)\) that performs well on the unknown multimodal test dataset \(D_{\text{test}}\). \(D_{\text{train}}\) and \(D_{\text{test}}\) are both drawn from the joint distribution \(\mathcal{D}\) over \(\mathcal{X}\times\mathcal{Y}\). Here \(f=g\circ h(x)\) represents the composite function of \(h\) and \(g\).
### When and How Dynamic Multimodal Fusion Helps
For simplicity, we provide an analysis of the ensemble-like late fusion strategy using the logistic loss function in a two-class classification setting. Our analysis follows this roadmap: (1) we first characterize the generalization error bound of dynamic late fusion using Rademacher complexity (Bartlett and Mendelson, 2002) and then separate the bound into three components (Theorem 1); (2) based on the above separation, we further prove that dynamic fusion achieves better generalization ability under certain conditions (Theorem 2). We initiate our analysis with the basic setting as follows.
Figure 2: **Left:** The generalization error upper bound of a multimodal fusion method \(f\) can be characterized by its performance on each modality in terms of empirical loss, model complexity and uncertainty awareness. **Right:** Dynamic vs. static multimodal fusion hypothesis spaces, where the latter is a subset of the former. \(f_{\text{static}}\), \(f_{\text{dynamic}}\) are the hypotheses of static and dynamic fusion methods respectively and \(f^{*}\) is the true mapping. Informally, being closer to the true mapping leads to less error. Under certain conditions, dynamic multimodal fusion methods (e.g., the proposed QMF) can be well regularized and thus provably achieve better generalization ability.
**Basic setting.** In a scenario with \(M\) input modalities and two-class classification, we define \(f^{m}\) as the unimodal classifier on modality \(x^{(m)}\). The final prediction of a late-fusion multimodal method is calculated by weighting the decisions from different modalities: \(f(x)=\sum_{m=1}^{M}w^{m}\cdot f^{m}(x^{(m)})\), where \(f(x)\) denotes the final prediction. In contrast to static late fusion, the weights in dynamic multimodal fusion are generated dynamically and vary across samples. For clarity, we use subscripts to distinguish them, i.e., \(w^{m}_{\text{static}}\) refers to the ensemble weight of modality \(m\) in static late fusion and \(w^{m}_{\text{dynamic}}\) refers to the weight in dynamic fusion. Specifically, \(w^{m}_{\text{static}}\) is a constant and \(w^{m}_{\text{dynamic}}(\cdot)\) is a function of the input sample \(x\). The generalization error of the two-class multimodal classifier \(f\) is defined as:
\[\text{GError}(f)=\mathbb{E}_{(x,y)\sim\mathcal{D}}[\ell(f(x),y)], \tag{1}\]
where \(\mathcal{D}\) is the unknown joint distribution, and \(\ell\) is the logistic loss function. For convenience, we abbreviate the unimodal classifier loss \(\ell(f^{m}(x^{(m)}),y)\) as \(\ell^{m}\) and omit the inputs in the following analysis. Now we present our first main result regarding multimodal fusion.
**Theorem 1** (Generalization Bound of Multimodal Fusion).: Let \(D_{\text{train}}=\{x_{i},y_{i}\}_{i=1}^{N}\) be a training dataset of \(N\) samples, and let \(\hat{E}(f^{m})\) be the unimodal empirical error of \(f^{m}\) on \(D_{\text{train}}\). Then for any hypothesis \(f\) in \(\mathcal{H}\) (i.e., \(\mathcal{H}:\mathcal{X}\rightarrow\{-1,1\}\), \(f\in\mathcal{H}\)) and \(1>\delta>0\), with probability at least \(1-\delta\), it holds that
\[\text{GError}(f)\leq\underbrace{\sum_{m=1}^{M}\mathbb{E}(w^{m}) \hat{E}(f^{m})}_{\text{Term-L (average empirical loss)}}+\underbrace{\sum_{m=1}^{M}\mathbb{E}(w^{m}) \mathfrak{R}_{m}(f^{m})}_{\text{Term-C (average complexity)}}\] \[+\underbrace{\sum_{m=1}^{M}Cov(w^{m},l^{m})}_{\text{Term-Cov (covariance)}}+M\sqrt{\frac{ln(1/\delta)}{2N}}, \tag{2}\]
where \(\mathbb{E}(w^{m})\) is the expectation of the fusion weight over the joint distribution \(\mathcal{D}\), \(\mathfrak{R}_{m}(f^{m})\) is the Rademacher complexity, and \(Cov(w^{m},\ell^{m})\) is the covariance between the fusion weight and the loss.
Intuitively, Theorem 1 demonstrates that the generalization error of the multimodal classifier is bounded by the weighted average performance of all the unimodal classifiers in terms of empirical loss, model complexity and the covariance between fusion weight and unimodal loss. Having established the general error bound, our next goal is to verify when dynamic multimodal late fusion indeed achieves a tighter bound than that of static late fusion. Informally, in Eq. 2, Term-Cov measures the joint variability of \(w^{m}\) and \(\ell^{m}\). Remember that in static multimodal fusion \(w^{m}_{\text{static}}\) is a constant, which means Term-Cov \(=0\) for any static fusion method. Thus the generalization error bound of static fusion methods reduces to
\[\text{GError}(f_{\text{static}})\leq \underbrace{\sum_{m=1}^{M}w^{m}_{\text{static}}\hat{E}(f^{m})}_{ \text{Term-L (average empirical loss)}}\] \[+ \underbrace{\sum_{m=1}^{M}w^{m}_{\text{static}}\mathfrak{R}_{m}(f ^{m})}_{\text{Term-C (average complexity)}}+M\sqrt{\frac{ln(1/\delta)}{2N}}. \tag{3}\]
So when the summation of Term-L and Term-C is invariant or smaller in dynamic fusion and Term-Cov \(\leq 0\), we can ensure that dynamic fusion provably outperforms static fusion. This result is formally presented as follows.
**Theorem 2**.: Let \(\mathcal{O}(\text{GError}(f_{\text{dynamic}}))\), \(\mathcal{O}(\text{GError}(f_{\text{static}}))\) be the upper bounds of the generalization errors of the multimodal classifier using the dynamic and static fusion strategies, respectively. \(\hat{E}(f^{m})\) is the unimodal empirical error of \(f^{m}\) on \(D_{\text{train}}\) defined in Theorem 1. Then for any hypotheses \(f_{\text{dynamic}}\), \(f_{\text{static}}\) in \(\mathcal{H}:\mathcal{X}\rightarrow\{-1,1\}\) and \(1>\delta>0\), it holds that
\[\mathcal{O}(\text{GError}(f_{\text{dynamic}}))\leq\mathcal{O}(\text{GError} (f_{\text{static}})) \tag{4}\]
with probability at least \(1-\delta\), if we have
\[\mathbb{E}(w^{m}_{\text{dynamic}})=w^{m}_{\text{static}} \tag{5}\]
and
\[r(w^{m}_{\text{dynamic}},\ell(f^{m}))\leq 0 \tag{6}\]
for all input modalities, where \(r\) is the Pearson correlation coefficient which measures the correlation between fusion weights \(w^{m}_{\text{dynamic}}\) and unimodal loss \(\ell^{m}\).
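The two conditions of Theorem 2 can be checked empirically on held-out data. The sketch below does so for one modality using a finite-sample mean and the sample Pearson correlation as stand-ins for the population quantities; the function name, the tolerance, and the calling convention are illustrative assumptions.

```python
import numpy as np

def check_dynamic_conditions(weights, losses, w_static):
    """Empirical check of Eq. (5) and Eq. (6) for one modality.

    weights  : per-sample dynamic fusion weights w^m(x_i)
    losses   : per-sample unimodal losses l^m(x_i)
    w_static : the constant weight used by the static baseline
    """
    weights = np.asarray(weights, dtype=float)
    losses = np.asarray(losses, dtype=float)
    mean_ok = np.isclose(weights.mean(), w_static, rtol=1e-2)  # Eq. (5), up to sampling error
    r = np.corrcoef(weights, losses)[0, 1]                     # Pearson correlation of Eq. (6)
    return mean_ok, r <= 0, r
```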
**Remark.** Theoretically, optimizing efficiently over the same function class results in the same empirical loss. Suppose that for each modality \(m\), the unimodal classifiers \(f^{m}\) used in dynamic and static fusion are of the same architecture; then the intrinsic complexity \(\mathfrak{R}_{m}(f^{m})\) of the unimodal classifier and the empirical risk \(\hat{E}(f^{m})\) can be kept invariant. Thus, in this case, it holds that
\[\sum_{m=1}^{M}\mathbb{E}(w^{m}_{\text{dynamic}})\hat{E}(f^{m})\leq\sum_{m=1}^{M }w^{m}_{\text{static}}\hat{E}(f^{m}), \tag{7}\]
and
\[\sum_{m=1}^{M}\mathbb{E}(w^{m}_{\text{dynamic}})\mathfrak{R}_{m}(f^{m})\leq \sum_{m=1}^{M}w^{m}_{\text{static}}\mathfrak{R}_{m}(f^{m}), \tag{8}\]
if Eq. 5 is satisfied for any modality \(m\). According to Theorem 2, it is easy to derive the conclusion that the main
challenge of achieving reliable dynamic multimodal fusion is to learn a reasonable \(w^{m}_{\text{dynamic}}(x)\) for each modality that satisfies Eq. 5 and Eq. 6.
## 4 Method
Now we proceed to answer "How to realize robust dynamic fusion?". In this section, we theoretically identify the connection between dynamic multimodal fusion and uncertainty estimation. Then, a unified dynamic multimodal fusion framework termed Quality-aware Multimodal Fusion (QMF) is proposed. We next show how to realize this framework in decision-level late fusion and classification tasks to support our findings.
### Coincidence with Uncertainty Estimation
Firstly, we focus on how to satisfy Eq. 6. As we discuss in Section 2.2, the common motivation of various uncertainty estimation methods is to provide an indicator of whether the predictions given by models are prone to be wrong. This motivation is inherently close to obtaining weights that satisfy Eq. 6. We formulate this claim with the following assumption
**Assumption 1**.: _Given an effective uncertainty estimator \(u^{m}:\mathcal{X}\rightarrow\mathbb{R}\) on modality \(m\), the estimated uncertainty \(u^{m}(x)\) is positively correlated with its modal-specific loss \(\ell^{m}(x)\): \(r(u^{m},\ell^{m}(x))\geq 0\), where \(r\) is the Pearson correlation coefficient._
This insight offers an opportunity to explore novel dynamic fusion methods that provably outperform conventional static fusion methods. Similar to previous dynamic fusion methods (Blundell et al., 2015; Zhang et al., 2019; Han et al., 2022), we deploy a modality-level weighting strategy to introduce dynamics.
**Uncertainty-aware weighting.** The uncertainty-aware fusion weighting \(w^{m}:\mathcal{X}\rightarrow\mathbb{R}\) is a function that linearly and negatively relates to the corresponding uncertainty
\[w^{m}(x)=\alpha^{m}\ u^{m}(x)+\beta^{m}, \tag{9}\]
where \(\alpha^{m}<0\), \(\beta^{m}\geq 0\) are modality-specific hyper-parameters and \(u^{m}(x)\) is the uncertainty of modality \(m\). By tuning the hyper-parameters \(\alpha^{m},\beta^{m}\), we can ensure that the dynamic fusion weights satisfy Eq. 5 and Eq. 6 simultaneously. This is formally presented in the following lemma.
**Lemma 1** (Satisfiability).: With Assumption 1, for any \(w^{m}_{\text{static}}\in\mathbb{R}\), there always exists \(\beta^{m}\in\mathbb{R}\) such that
\[\mathbb{E}(w^{m}_{\text{dynamic}})=w^{m}_{\text{static}},r(w^{m}_{\text{ dynamic}},\ell(f^{m}))\leq 0. \tag{10}\]
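A minimal sketch of how such a \(\beta^{m}\) can be chosen in practice is given below; using the training-set mean of the uncertainties in place of the population expectation and fixing \(\alpha^{m}=-1\) are illustrative assumptions, not the authors' prescription.

```python
import numpy as np

def calibrate_weight_map(u_train, w_static, alpha=-1.0):
    """Chooses beta so that w = alpha*u + beta matches the static weight in
    expectation (Eq. (5)); with alpha < 0 and Assumption 1, the correlation
    condition of Eq. (6) follows."""
    beta = w_static - alpha * np.mean(u_train)
    return lambda u: alpha * u + beta   # the calibrated weight map w^m(.)
```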
Once we obtain the fusion weights, we can perform uncertainty-aware weighting fusion in decision-level according to the following rule
\[f(x)=\sum_{m=1}^{M}w^{m}(x)\cdot f^{m}(x), \tag{11}\]
where \(f^{m}(x)\) defined in Section 3.2 denotes unimodal prediction on modality \(m\).
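The fusion rule of Eqs. 9 and 11 is straightforward to implement. The PyTorch sketch below is an illustrative outline: the argument structure, variable names, and per-modality scalar hyper-parameters are assumptions, not the released interface of the QMF code.

```python
import torch

def qmf_fuse(unimodal_logits, uncertainties, alphas, betas):
    """Uncertainty-aware weighted late fusion (cf. Eqs. (9) and (11)).

    unimodal_logits : list of (batch, K) logit tensors, one per modality
    uncertainties   : list of (batch,) uncertainty scores u^m(x)
    alphas, betas   : per-modality scalars with alpha^m < 0, beta^m >= 0
    """
    fused = 0.0
    for logits, u, a, b in zip(unimodal_logits, uncertainties, alphas, betas):
        w = a * u + b                               # Eq. (9): weight decreases with uncertainty
        fused = fused + w.unsqueeze(1) * logits     # Eq. (11): weighted sum of unimodal decisions
    return fused
```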
### Enhance Correlation by Additional Regularization
With the above analysis, the core challenges of robust dynamic multimodal fusion presented in Section 3.2 have been reduced to obtaining an effective uncertainty estimator satisfying Assumption 1. In our implementation, we leverage the energy score (Liu et al., 2020), which is a widely accepted metric in the uncertainty learning literature. The energy score 1 bridges the gap between the Helmholtz free energy of a given data point and its density. For multimodal data, the density functions of different modalities can be estimated from the corresponding energy functions:
Footnote 1: While another line of previous works usually incorporates an auxiliary outlier dataset (e.g., randomly noised out-of-distribution data) during training for higher performance, for clarity and a strictly fair comparison we conduct our experiments without the help of additional data.
\[\log p(x^{(m)})=-\text{Energy}(x^{(m)};f^{m})/\mathcal{T}^{m}-\log Z^{m}, \tag{12}\]
where \(x^{(m)}\) is the \(m\)-th input modality and \(f^{m}\) is the unimodal classification model. \(\text{Energy}(\cdot)\) is the energy function and \(Z^{m}\) is an intractable normalizing constant that is the same for all \(x^{(m)}\). The above equation suggests that \(-\text{Energy}(x^{(m)};f^{m})\) is linearly aligned with the log-density \(\log p(x^{(m)})\). The energy score for the \(m\)-th modality of input \(x\) can be calculated as
\[\text{Energy}(x^{(m)})=-\mathcal{T}^{m}\cdot log\sum_{k}^{K}e^{f^{m}_{k}(x^{(m )})/\mathcal{T}^{m}}, \tag{13}\]
where \(f^{m}_{k}(x^{(m)})\) is the output logits of classifier \(f^{m}\) corresponding to the \(k\)-th class label and \(\mathcal{T}^{m}\) is a temperature parameter. Intuitively, more uniformly distributed prediction leads to higher estimated uncertainty.
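A direct implementation of Eq. 13 is a one-liner in PyTorch; the sketch below assumes the logits of one unimodal classifier and a scalar temperature (in the text the temperature is a per-modality hyper-parameter).

```python
import torch

def energy_score(logits, temperature=1.0):
    """Energy score of Eq. (13): Energy(x) = -T * log sum_k exp(f_k(x)/T).

    logits: (batch, K) unimodal classifier outputs. Lower energy corresponds
    to higher estimated density, i.e. lower uncertainty.
    """
    return -temperature * torch.logsumexp(logits / temperature, dim=1)
```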
However, it has been shown experimentally that the uncertainty estimated in this way, without additional regularization, is not reliable enough to satisfy our Assumption 1. To address this, we propose a sampling-based regularization technique to enhance the original method in terms of correlation. The simplest and most straightforward way to improve the correlation between the estimated uncertainty and the respective loss is to leverage the sample-wise loss during the training stage as supervision information. However, due to the over-parameterization of deep neural networks, the losses eventually reduce to zero during training. Inspired by recent works in Bayesian learning (Maddox et al., 2019) and uncertainty estimation (Moon et al., 2020; Han et al., 2022), we propose to leverage the information from the historical training trajectory to regularize the fusion weights. Specifically, given the \(m\)-th modality of a sample \((x_{i},y_{i})\), the average training loss for \(x_{i}^{(m)}\) is calculated as:
\[\kappa_{i}^{m}=\frac{1}{\mathrm{T}}\sum_{t=\mathrm{T_{s}}}^{\mathrm{T_{s}+T}}\ell(y_{i},f_{\theta_{t}}^{m}(x_{i}^{(m)})), \tag{14}\]
where \(f_{\theta_{t}}^{m}\) is the unimodal classifier with parameters \(\theta_{t}\) at training epoch \(t\). After training for \(\mathrm{T_{s}}-1\) epochs, we sample \(\mathrm{T}\) times and calculate the average training loss.
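One simple way to maintain the per-sample averages \(\kappa_{i}^{m}\) of Eq. 14 during training is sketched below; the class name, the assumption that each sample is visited exactly once per epoch, and the epoch-window handling are illustrative choices rather than the authors' implementation.

```python
import torch

class LossHistory:
    """Tracks the per-sample average training loss kappa_i^m of Eq. (14)
    for one modality, accumulated over T epochs starting at epoch T_s."""

    def __init__(self, num_samples, t_start, t_len):
        self.sum = torch.zeros(num_samples)
        self.t_start, self.t_len = t_start, t_len

    def update(self, epoch, sample_idx, per_sample_loss):
        # assumes each sample index appears at most once per epoch
        if self.t_start <= epoch < self.t_start + self.t_len:
            self.sum[sample_idx] += per_sample_loss.detach().cpu()

    def kappa(self):
        return self.sum / self.t_len   # average over the T sampled epochs
```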
Empirically, recent works (Geifman et al., 2019) have shown that easy-to-classify samples are learned earlier during training than hard-to-classify samples (e.g., noisy samples (Arazo et al., 2019)). It is desirable to regularize a dynamic fusion model by learning the following relationship during training
\[\kappa_{i}^{m}\geq\kappa_{j}^{m}\iff w_{i}^{m}\leq w_{j}^{m}. \tag{15}\]
We now present the full definition of our regularization term as follows
\[\mathcal{L}_{\text{reg}}=max(0,g(w_{i}^{m},w_{j}^{m})(\kappa_{i}^{m}-\kappa_{ j}^{m})+|w_{i}^{m}-w_{j}^{m}|), \tag{16}\]
where
\[g(w_{i}^{m},w_{j}^{m})=\begin{cases}1\text{ if }w_{i}^{m}>w_{j}^{m},\\ 0\text{ if }w_{i}^{m}=w_{j}^{m},\\ -1\text{ otherwise}.\end{cases} \tag{17}\]
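The regularizer of Eqs. 16 and 17 can be written compactly as a pairwise hinge; the following sketch averages it over sampled pairs within one modality, which is an assumption since the text does not specify the pair-sampling scheme.

```python
import torch

def qmf_reg_loss(w_i, w_j, kappa_i, kappa_j):
    """Pairwise ranking regularizer of Eqs. (16)-(17): samples with a larger
    average training loss kappa should receive a smaller fusion weight w.

    All inputs are 1-D tensors over sampled pairs within one modality."""
    g = torch.sign(w_i - w_j)                                   # Eq. (17)
    per_pair = torch.clamp(g * (kappa_i - kappa_j) + (w_i - w_j).abs(), min=0)
    return per_pair.mean()
```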
Inspired by multi-task learning, we define the total loss function as the summation of the standard cross-entropy losses of the fused prediction and of each unimodal prediction, plus the regularization term
\[\mathcal{L}_{\text{overall}}=\mathcal{L}_{\text{CE}}(y,f(x))+\sum_{m=1}^{M} \mathcal{L}_{\text{CE}}(y,f^{m}(x^{m}))+\lambda\mathcal{L}_{\text{reg}}, \tag{18}\]
where \(\lambda\) is a hyperparameter which controls the strength of the regularization, and \(\mathcal{L}_{\text{CE}}\) and \(\mathcal{L}_{\text{reg}}\) are the cross-entropy loss and the regularization term, respectively. The whole training process is shown in Algorithm 1.
```
Input : Multimodal training dataset \(D_{\text{train}}\), the number of sampling epochs \(\mathrm{T}\), hyperparameter \(\lambda\), temperature parameters \(\{\mathcal{T}^{m}\}_{m=1}^{M}\), unimodal predictors \(\{f^{m}(\cdot)\}_{m=1}^{M}\); Output : The multimodal classifier \(f\);
1 for each iteration do
2  Obtain training sample \((x_{i},y_{i})\) from dataset \(D_{\text{train}}\) and the decisions \(f^{m}(x_{i}^{(m)})\) on each modality;
3  Calculate the uncertainty-aware fusion weights \([w_{i}^{1},\cdots,w_{i}^{M}]\) defined in Eq. 9;
4  Update the average training loss \(\kappa_{i}^{m}\) of each modality;
5  Obtain the multimodal decision by weighting the unimodal predictions dynamically according to Eq. 11;
6  Update the model parameters of each unimodal predictor by minimizing \(\mathcal{L}_{\text{overall}}\) in Eq. 18.
7 end for
```
**Algorithm 1** Training Pseudo Code of Quality-aware Multimodal Fusion (QMF)
**Intuitive explanation of the effectiveness of QMF.** Without loss of generality, we assume modality \(x^{A}\) is clean and modality \(x^{B}\) is noisy due to unknown environmental factors or sensor failure. In this case, \(x^{A}\) is in the distribution of clean training data but \(x^{B}\) deviates significantly from it. Accordingly, we have \(u(x^{A})\leq u(x^{B})\) and thus \(w^{A}\geq w^{B}\). Therefore, for our QMF, the multimodal decision will tend to rely more on the high-quality modality \(x^{A}\) than on the noisy modality \(x^{B}\). By dynamically determining the fusion weights of each modality, the influence of the unreliable modalities can be alleviated.
## 5 Experiment
In this section, we conduct experiments on multiple datasets of diverse applications 2. The main questions to be verified are highlighted here:
Footnote 2: Code is available at [https://github.com/QingyangZhang/QMF](https://github.com/QingyangZhang/QMF).
* Q1 Effectiveness I. Does the proposed method have better generalization ability than its counterparts? (Support Theorem 1)
* Q2 Effectiveness II. Under what conditions does uncertainty-aware dynamic multimodal fusion work? (Support Theorem 2)
* Q3 Reliability. Does the proposed method have an effective perception for the uncertainty of modality? (Support Assumption 1)
* Q4 Ablation study. What is the key factor of performance improvement in our method?
### Experimental Setup
We briefly present the experimental setup here, including the experimental datasets and comparison methods. Please
refer to Appendix B for more detailed setup.
**Tasks and datasets.** We evaluate our method on two multimodal classification tasks. \(\circ\) Scene recognition: NYU Depth V2 (Silberman et al., 2012) and SUN RGB-D (Song et al., 2015) are two public indoor scene recognition datasets, which are associated with two modalities, i.e., RGB and depth images. \(\circ\) Image-text classification: The UPMC FOOD101 dataset (Wang et al., 2015) contains (possibly noisy) images obtained by Google Image Search and corresponding textual descriptions. The MVSA sentiment analysis dataset (Niu et al., 2016) includes a set of image-text pairs with manual annotations collected from social media. Although the datasets above all involve only \(M=2\) modalities, it is straightforward to generalize to \(M\geq 3\).
**Evaluation metrics.** Due to the randomness involved, we report the mean accuracy, standard deviation and worst-case accuracy on NYU Depth V2 and SUN RGB-D over 10 different seeds. To be consistent with existing works (Han et al., 2022; Kiela et al., 2019; Yadav and Vishwakarma, 2023), we repeat experiments 3 times on UPMC FOOD101 and 5 times on MVSA.
**Compared methods.** For the scene recognition task, we compare the proposed method with three static fusion methods, i.e., late fusion, concatenation-based fusion and alignment-based fusion (Wang et al., 2016), and two representative dynamic fusion methods, i.e., MMTM (Joze et al., 2020) and TMC3 (Han et al., 2021). For image-text classification, we compare against strong unimodal baselines (i.e., Bow, Bert and ResNet-152) as well as sophisticated multimodal fusion methods, including late fusion, ConcatBow, ConcatBERT and the recent state-of-the-art MMBT (Kiela et al., 2019).
Footnote 3: There are two variants in (Han et al., 2021): TMC and ETMC (with an additional concatenation-based multimodal fusion strategy). TMC has comparable performance and provides a fairer comparison.
### Experimental Results
**Classification robustness (Q1).** To validate the robustness of the uncertainty-aware weighting fusion, we evaluate QMF and the compared methods in terms of average and worst-case accuracy under Gaussian noise (for the image modality) and blank noise (for the text modality), following previous works (Han et al., 2021; Ma et al., 2021; Verma et al., 2021; Hu et al., 2019; Xie et al., 2017). More results under different types of noise (e.g., salt-and-pepper noise) can be found in Appendix C.2. The experimental results are presented in Table 1. It is observed that QMF usually performs in the top three in terms of both average and worst-case accuracy. This observation indicates that QMF has better generalization ability than its counterparts experimentally. It is also noteworthy that QMF outperforms the prior **state-of-the-art** methods (i.e., MMBT and TMC) on the large-scale benchmark UPMC FOOD101, which illustrates the superiority of the proposed method.
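For reference, a minimal sketch of the Gaussian corruption used in this evaluation is given below. Corrupting each continuous modality independently with probability 0.5 is one reading of the "50% of the modalities" protocol and should be treated as an assumption, as is the exclusion of the discrete text modality (handled by blank noise in the text).

```python
import torch

def corrupt_modalities(batch, eps, p=0.5):
    """Adds zero-mean Gaussian noise with variance eps to each continuous
    modality with probability p, so on average half of the modalities are
    corrupted; text-modality blank noise is not reproduced here."""
    return [x + eps ** 0.5 * torch.randn_like(x) if torch.rand(()) < p else x
            for x in batch]
```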
**Connection to uncertainty estimation (Q2).** We further conduct comparisons with QMF realized by various uncertainty estimation algorithms, i.e., prediction confidence (Hendrycks and Gimpel, 2017) and Dempster-Shafer evidence theory (DST) (Han et al., 2021). According to the comparison results shown in Figure 3, it is clear that (i) the generalization ability (i.e., average and worst-case accuracy) of dynamic fusion methods coincides with their uncertainty estimation ability and (ii) our QMF achieves the best performance in terms of both classification accuracy and uncertainty estimation. This comparison reveals the underlying reason why QMF outperforms other fusion methods and supports Theorem 2. We show the results on NYU Depth V2 and SUN RGB-D under Gaussian noise with zero mean and variance of 10.
Figure 3: Test accuracy and Pearson correlation coefficient achieved by different fusion methods over 10 random runs. The average and worst-case accuracy are highly consistent with the uncertainty estimation ability.
\begin{table}
\begin{tabular}{c|c c c c c c c c} \hline \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Dynamic} & \multirow{2}{*}{Method} & \multicolumn{2}{c}{\(\epsilon=\mathbf{0.0}\)} & \multicolumn{2}{c}{\(\epsilon=\mathbf{5.0}\)} & \multicolumn{2}{c}{\(\epsilon=\mathbf{10.0}\)} \\ & & & Avg. & Worst. & Avg. & Worst. & Avg. & Worst. \\ \hline & � & RGB & \(63.30\) & \(62.54\) & \(53.12\) & \(50.31\) & \(45.46\) & \(42.20\) \\ & � & Depth & \(62.65\) & \(61.01\) & \(50.95\) & \(42.81\) & \(44.13\) & \(35.93\) \\ & � & Late fusion & \(69.14\) & \(68.35\) & \(59.63\) & \(53.98\) & \(51.99\) & \(44.95\) \\ NYU & � & Concat & \(70.30\) & \(69.42\) & \(59.97\) & \(55.89\) & \(53.20\) & \(47.71\) \\ Depth V2 & � & Align & \(70.31\) & \(68.50\) & \(59.47\) & \(56.27\) & \(51.74\) & \(44.19\) \\ & ✓ & MMTM & \(71.04\) & \(\mathbf{70.18}\) & \(60.37\) & \(56.73\) & \(52.28\) & \(46.18\) \\ & ✓ & TMC & \(\mathbf{71.06}\) & \(69.57\) & \(61.04\) & \(58.72\) & \(53.36\) & \(49.23\) \\ \cline{2-8} & ✓ & Ours & \(70.09\) & \(68.81\) & \(\mathbf{61.62}\) & \(\mathbf{58.87}\) & \(\mathbf{55.60}\) & \(\mathbf{51.07}\) \\ \hline & ✗ & RGB & \(56.78\) & \(56.51\) & \(48.40\) & \(47.16\) & \(42.94\) & \(41.02\) \\ & ✗ & Depth & \(52.99\) & \(51.32\) & \(37.81\) & \(35.63\) & \(33.07\) & \(30.41\) \\ & ✗ & Late fusion & \(62.09\) & \(60.55\) & \(52.44\) & \(50.83\) & \(47.33\) & \(44.60\) \\ SUN & ✗ & Concat & \(61.90\) & \(61.19\) & \(52.69\) & \(50.61\) & \(45.64\) & \(42.95\) \\ RGB-D & ✗ & Align & \(61.12\) & \(60.12\) & \(50.05\) & \(47.63\) & \(44.19\) & \(38.12\) \\ & ✓ & MMTM & \(61.72\) & \(60.94\) & \(51.86\) & \(50.80\) & \(46.03\) & \(44.28\) \\ & ✓ & TMC & \(60.68\) & \(60.31\) & \(51.24\) & \(49.45\) & \(45.66\) & \(41.60\) \\ \cline{2-8} & ✓ & Ours & \(\mathbf{62.09}\) & \(\mathbf{61.30}\) & \(\mathbf{53.40}\) & \(\mathbf{52.07}\) & \(\mathbf{48.58}\) & \(\mathbf{47.50}\) \\ \hline & ✗ & Bow & \(82.50\) & \(82.32\) & \(61.68\) & \(60.98\) & \(41.95\) & \(41.41\) \\ & ✗ & Img & \(64.62\) & \(64.22\) & \(34.72\) & \(34.19\) & \(33.03\) & \(32.67\) \\ & ✗ & Bert & \(86.46\) & \(86.42\) & \(67.38\) & \(67.19\) & \(43.88\) & \(43.56\) \\ FOOD & ✗ & Late fusion & \(90.69\) & \(90.58\) & \(68.49\) & \(65.05\) & \(58.00\) & \(55.77\) \\
101 & ✗ & ConcatBow & \(70.77\) & \(70.68\) & \(38.28\) & \(37.95\) & \(35.68\) & \(34.92\) \\ & ✗ & ConcatBert & \(88.20\) & \(87.81\) & \(61.10\) & \(59.25\) & \(49.86\) & \(47.79\) \\ & ✓ & MMBT & \(91.52\) & \(91.38\) & \(72.32\) & \(71.78\) & \(56.75\) & \(56.21\) \\ & ✓ & TMC & \(89.86\) & \(89.80\) & \(73.93\) & \(73.64\) & \(61.37\) & \(61.10\) \\ \cline{2-8} & ✓ & Ours & \(\mathbf{92.92}\) & \(\mathbf{92.72}\) & \(\mathbf{76.03}\) & \(\mathbf{74.68}\) & \(\mathbf{62.21}\) & \(\mathbf{61.76}\) \\ \hline & ✗ & Bow & \(48.79\) & \(35.45\) & \(42.20\) & \(32.56\) & \(41.57\) & \(32.18\) \\ & ✗ & Img & \(64.12\) & \(62.04\) & \(49.36\) & \(45.67\) & \(45.00\) & \(39.31\) \\ & ✗ & Bert & \(75.61\) & \(74.76\) & \(69.50\) & \(65.70\) & \(47.41\) & \(45.86\) \\ & ✗ & Late fusion & \(76.88\) & \(74.76\) & \(63.46\) & \(58.57\) & \(55.16\) & \(47.78\) \\ MVSA & ✗ & ConcatBow & \(64.09\) & \(62.04\) & \(49.95\) & \(45.28\) & \(45.40\) & \(40.95\) \\ & ✗ & ConcatBert & \(65.59\) & \(64.74\) & \(50.70\) & \(44.70\) & \(46.12\) & \(41.81\) \\ & ✓ & MMBT & \(\mathbf{78.50}\) & \(\mathbf{78.04}\) & \(71.99\) & \(69.94\) & \(55.35\) & \(52.22\) \\ & ✓ & TMC & \(74.88\) & \(71.10\) & \(66.72\) & \(60.12\) & \(60.36\) & \(53.37\) \\ \cline{2-8} & ✓ & Ours & \(78.07\) & \(76.30\) & \(\mathbf{73.85}\) & \(\mathbf{71.10}\) & \(\mathbf{61.28}\) & \(\mathbf{57.61}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Classification comparison when 50% of the modalities are corrupted with Gaussian noise i.e., zero mean with variance of \(\epsilon\). The best three results are in bold brown and the best results are highlighted in bold blue. Full results with standard deviation are in Appendix.
**Reliability of QMF (Q3).** We calculate the fusion weights defined in Eq. 9 for the different modalities on UPMC FOOD101 and report in Table 3 their correlation with the losses. It is observed that the fusion weights of QMF have the most effective perception of modality quality compared with other uncertainty estimation methods (in terms of correlation). This observation justifies our expectation for the uncertainty-aware weights in Eq. 9.
**Ablation study (Q4).** We compare different combinations of components (i.e., uncertainty-aware weighting and the regularization term \(\mathcal{L}_{\text{reg}}\)). Here we also employ Gaussian noise on NYU Depth V2 in Table 2, and more results can be found in Appendix C.1. It is easy to conclude that 1) adding \(\mathcal{L}_{\text{reg}}\) is beneficial to obtaining more reasonable fusion weights; 2) the best performance could be expected with the full QMF. Please refer to Table 4 in Appendix C.1 for full results with standard deviation.
In summary, the empirical results support our theoretical findings. These results identify the causes and conditions of the performance gains of dynamic multimodal fusion methods. The proposed method helps to improve robustness on multiple datasets.
## 6 Limitations
Even though the proposed method achieves superior performance, there are still some potential limitations. Firstly, the fusion weights of QMF are based on uncertainty estimation, which can be a challenging task in the real world. For example, in our experiments, we can only achieve a mild Pearson's \(r\) on the NYU Depth V2 and SUN RGB-D datasets. Therefore, it is important and valuable to explore novel uncertainty estimation methods in future work. Secondly, though we characterize the generalization error bound of the proposed method, our theoretical justifications are based on Assumption 1. However, previous work (Fang et al., 2022) reveals that OOD detection is not learnable under some scenarios. Thus it is still a challenging open problem to further characterize the generalization ability of dynamic multimodal fusion.
## 7 Conclusions and Future works
Introducing dynamics in multimodal fusion has yielded remarkable empirical results in various applications, including image classification, object detection and semantic segmentation. Many state-of-the-art multimodal models introduce dynamic fusion strategies, but the inductive bias provided by this technique is not well understood. In this paper, we provide rigorous analysis towards understanding when and how dynamic multimodal fusion methods are more robust on multimodal data in the wild. These findings demonstrate the connection between uncertainty learning and robust multimodal fusion, which further implies a principle for designing novel dynamic multimodal fusion methods. Finally, we perform extensive experiments on multiple benchmarks to support our findings. In this work, an energy-based weighting strategy is devised, and other uncertainty estimation approaches could be explored. Another interesting direction is analyzing dynamic fusion under a more general setting.
## Acknowledgments
This work is partially supported by the National Natural Science Foundation of China (Grant No. 61976151) and A*STAR Central Research Fund. We gratefully acknowledge the support of MindSpore and CAAI. The authors would like to thank Zhipeng Liang (Hong Kong University of Science and Technology) for checking on math details and Zongbo Han, Huan Ma (Tianjin University) for their comments on writing. The authors also appreciate the suggestions from ICML anonymous peer reviewers.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{UAW} & \multirow{2}{*}{\(\mathcal{L}_{\text{reg}}\)} & \multicolumn{2}{c}{\(\epsilon=\mathbf{0.0}\)} & \multicolumn{2}{c}{\(\epsilon=\mathbf{5.0}\)} & \multicolumn{2}{c}{\(\epsilon=\mathbf{10.0}\)} & \multicolumn{2}{c}{\(\epsilon=\mathbf{20.0}\)} \\ & & Avg. & Worst. & Avg. & Worst. & Avg. & Worst. & Avg. & Worst. \\ \hline ✗ & ✗ & \(69.14\) & \(68.35\) & \(59.62\) & \(53.98\) & \(51.94\) & \(44.95\) & \(43.76\) & \(36.85\) \\ ✗ & ✓ & \(69.68\) & \(67.74\) & \(61.35\) & \(58.26\) & \(55.44\) & \(\mathbf{51.53}\) & \(47.32\) & \(42.97\) \\ ✓ & ✗ & \(70.06\) & \(\mathbf{69.11}\) & \(61.59\) & \(57.49\) & \(55.14\) & \(50.15\) & \(47.46\) & \(42.05\) \\ \hline ✓ & ✓ & \(\mathbf{70.09}\) & \(68.81\) & \(\mathbf{61.62}\) & \(\mathbf{58.87}\) & \(\mathbf{55.81}\) & \(51.07\) & \(\mathbf{48.26}\) & \(\mathbf{43.73}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Ablation study on NYU Depth V2. Full results with standard deviation are in Appendix C.1.
\begin{table}
\begin{tabular}{c c c c} \hline \hline & \(\epsilon=\mathbf{0.0}\) & \(\epsilon=\mathbf{5.0}\) & \(\epsilon=\mathbf{10.0}\) \\ \hline MSP & \(0.391\) & \(0.433\) & \(0.486\) \\ Energy score & \(0.272\) & \(0.429\) & \(0.510\) \\ Entropy & \(0.397\) & \(0.420\) & \(0.452\) \\ Evidence & \(0.157\) & \(0.136\) & \(0.265\) \\ \hline Ours & \(\mathbf{0.498}\) & \(\mathbf{0.652}\) & \(\mathbf{0.735}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Pearson correlation coefficient \(r\) between losses and fusion weights of test samples (a higher \(|r|\) indicates a better uncertainty estimation). |
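As a concrete illustration of how the correlation reported in Table 3 can be measured, the following is a minimal sketch (not the authors' implementation) that computes Pearson's \(r\) between per-sample losses and fusion weights; the function name, array names, and toy data are placeholders.

```python
import numpy as np
from scipy.stats import pearsonr

def weight_loss_correlation(losses, weights):
    """Pearson r between per-sample losses and the corresponding fusion weights.
    A larger |r| indicates that the fusion weights track sample quality more closely."""
    r, p_value = pearsonr(np.asarray(losses), np.asarray(weights))
    return r, p_value

# toy usage: random stand-ins for per-sample losses and (anti-correlated) weights
rng = np.random.default_rng(0)
losses = rng.random(1000)
weights = 1.0 - losses + 0.3 * rng.standard_normal(1000)
print(weight_loss_correlation(losses, weights))
```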
2301.06291 | Optimal Network Robustness Against Attacks in Varying Degree Distributions | Masaki Chujyo, Yukio Hayashi, Takehisa Hasegawa | 2023-01-16T07:41:15Z | http://arxiv.org/abs/2301.06291v1

# Optimal Network Robustness Against Attacks in Varying Degree Distributions
###### Abstract
Across varying degree distributions, we investigate the optimally robust networks against targeted attacks on nodes with higher degrees. Considering that a network tends to be more robust when the variance of its degree distribution is smaller, we clarify the optimal robustness at random regular graphs through comprehensive discrete and random perturbations of them. By comparing robustness measurements on them, we find that random regular graphs have the optimal robustness against attacks among varying degree distributions.
## 1 Introduction
At the beginning of this century, it was found that many real-world networks commonly have a scale-free structure in which the degree distribution follows a power law [1]. Unfortunately, such networks are extremely vulnerable to targeted removals (attacks) of nodes with higher degrees [2]. Yet our modern society is supported by scale-free networks, such as the Internet, communication networks, traffic systems, power grids, social networks, supply chain networks, protein-protein interaction networks, and metabolic networks. Therefore, overcoming the vulnerability of real-world networks has been an important issue.
We focus on varying the degree distribution to investigate the robustness of connectivity. As is well known, scale-free networks with power-law degree distributions are more vulnerable to attacks than Erdos-Renyi random graphs with exponential degree distributions, while scale-free networks are more tolerant to random node removals (failures) than Erdos-Renyi random graphs [2]. Recently, numerical simulations over a range of power-law, exponential, and narrower degree distributions have shown that random networks with smaller variances of degree distributions are more robust [3]. Such continuously varying degree distributions are generated by the growing network model [4, 5, 6] and the inverse preferential model [7]. In addition, to discuss the pure effect of degree distributions on the robustness, the networks are randomized through configuration models [8, 9]. In a special class of networks with multimodal distributions including power-law ones, bimodal networks with two kinds of degrees are the most robust in the sense of maximizing the sum of the two critical thresholds for whole fragmentation by failures and by attacks [10]. In other words, the robustness against both failures and attacks increases as the variance of the degree distribution decreases. For specific cases, comparing the robustness of regular graphs, Erdos-Renyi random graphs [11], Watts-Strogatz models [12], and Barabasi-Albert models [1], it has been numerically shown that regular graphs are the most robust [13]. Moreover, the degree distribution becomes narrower when the robustness index is maximized by random rewiring [13]. Although the maximization changes the degree distribution to a trimodal or tetramodal distribution with three or four degrees, this is not enough to conclude that regular graphs with a single degree are the most robust.
On the other hand, it has been found that an onion-like network with positive degree-degree correlations [14] is the most robust for a fixed degree distribution [15, 10]. Rewiring methods that increase the degree-degree correlations have been proposed for improving the robustness [16, 17]. In addition, an incrementally growing method for constructing onion-like networks has been proposed based on enhancing interwoven long loops [18, 19]. The robust networks generated by this growing method have exponential degree distributions [18]. This result also suggests that more homogeneous degree distributions are crucial for increasing the robustness.
Besides the degree-degree correlations, loops in networks have been attracting attention as a way to increase the robustness [20, 19, 21]. The relation between loops and robustness is supported by the asymptotic equivalence of network decycling and network dismantling [20]. Network decycling, or the feedback vertex set (FVS), is a minimum set of nodes whose removal leaves the network without loops, while network dismantling is a minimum set of nodes whose removal reduces the network to small connected components. Intuitively, networks without loops are easily fragmented by any node removals. Regarding the importance of loops, it has been numerically shown that incrementally grown onion-like networks with a larger FVS are more robust against attacks [19]. Furthermore, loop-enhancing rewiring methods that increase the size of the FVS have been proposed [21]. When loop-enhancing rewiring is applied to a network to increase its robustness, the variance of the degree distribution becomes smaller. This also indicates that decreasing the variance of the degree distribution is strongly related to increasing the robustness.
These previous studies suggest that a network tends to be more robust against attacks when the variance of its degree distribution is smaller. Thus, a random regular graph, with the minimum (zero) variance of the degree distribution, is predicted to have the optimal robustness. In this paper, we clarify the optimal robustness in varying degree distributions by comparing the robustness of random regular graphs with that of networks obtained from them by comprehensive discrete or random perturbations.
## 2 Surrounding of random regular graphs
In a regular graph, all nodes have the same constant degree, so the variance of the degree distribution is zero. We compare the robustness of random regular graphs with that of perturbed networks around them. As the surroundings, we consider two types of networks obtained by discrete and random perturbations. In Sec. 2.1, for discrete perturbations, we introduce bimodal networks with two types of degrees, whose modality is the smallest next to that of the regular graph with only one degree. In Sec. 2.2, for random perturbations, which include several modalities of degrees, we introduce networks modified by adding and removing links of the regular graphs uniformly at random.
### Discrete perturbations
As discrete perturbations of a random regular graph, we introduce bimodal networks with two degrees \(d_{1}<d_{2}\) under a fixed average degree \(d\). For the bimodal networks, there are several combinations of degrees \(d_{1}\) and \(d_{2}\) with \(\Delta d=d_{2}-d_{1}\geq 2\).
For given degrees \(d_{1}\) and \(d_{2}\) for a bimodal network with \(N\) nodes and the average degree \(d\), the number of nodes \(N_{1}\) and \(N_{2}\) corresponding to degrees \(d_{1}\) and \(d_{2}\) are derived as follows. From the total number of nodes \(N\) and links \(M\),
\[N=N_{1}+N_{2}, \tag{1}\]
\[M=\frac{d\times N}{2}=\frac{d_{1}\times N_{1}}{2}+\frac{d_{2}\times N_{2}}{2}, \tag{2}\]
we obtain
\[N_{1}=\frac{d_{2}-d}{\Delta d}N, \tag{3}\]
\[N_{2}=\frac{d-d_{1}}{\Delta d}N. \tag{4}\]
Since \(N_{1}\) and \(N_{2}\) must be positive integers, \(N\) needs to be divisible by \(\Delta d\). As shown in Table 1, the combinations are \((d_{1},d_{2})=(d-1,d+\Delta d-1),(d-2,d+\Delta d-2),...,(d-\Delta d+1,d+1)\). Except for a star structure with \(d_{1}=1\) and \(d_{2}=N-1\) which is obviously vulnerable to attacks, Table 1 shows all possible combinations in the ranges of \(2\leq d_{1}\leq d-1\) and \(d+1\leq d_{2}\leq N-2\) for constant \(N\) and \(d\). Note that these combinations of degrees are comprehensive around a regular graph. For constructing the bimodal networks, we use a configuration model according to the degree distribution:
\[P(d_{1})=N_{1}/N, \tag{5}\]
\[P(d_{2})=N_{2}/N. \tag{6}\]
After randomizing them through the configuration model [8, 9], we can discuss the pure effect of degree distributions on the robustness.
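As an illustration of this construction, the following is a minimal sketch (using `networkx`, which is an assumption rather than the code used in this paper) that builds a bimodal network from Eqs. (3)-(6) via the configuration model; collapsing multi-links and self-loops slightly perturbs the exact degree sequence.

```python
import networkx as nx

def bimodal_configuration_graph(N, d, d1, d2, seed=0):
    """Random bimodal graph with degrees d1 < d < d2 and average degree d.

    N1 and N2 follow Eqs. (3)-(4); N must be divisible by (d2 - d1) so that
    both are integers and the total degree d1*N1 + d2*N2 is even.
    """
    dd = d2 - d1
    N1 = (d2 - d) * N // dd
    N2 = (d - d1) * N // dd
    degree_sequence = [d1] * N1 + [d2] * N2
    G = nx.configuration_model(degree_sequence, seed=seed)
    G = nx.Graph(G)                                # collapse multi-links
    G.remove_edges_from(nx.selfloop_edges(G))      # drop self-loops
    return G

# e.g. the (d1, d2) = (5, 7) row of Table 1 with N = 1260 and d = 6
G = bimodal_configuration_graph(1260, 6, 5, 7)
print(G.number_of_nodes(), 2 * G.number_of_edges() / G.number_of_nodes())
```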
We can easily calculate the variance of degree distribution of a bimodal network. From Eqs. (3) and (4), the variance is derived as follows:
\[\sigma^{2} = \langle k^{2}\rangle-\langle k\rangle^{2} \tag{7}\] \[= \frac{1}{N}(d_{1}^{2}\times N_{1}+d_{2}^{2}\times N_{2})-d^{2}\] \[= \frac{1}{N}\left(d_{1}^{2}\frac{d_{2}-d}{\Delta d}N+d_{2}^{2} \frac{d-d_{1}}{\Delta d}N\right)-d^{2}\] \[= \frac{d_{1}^{2}(d_{2}-d)+d_{2}^{2}(d-d_{1})}{d_{2}-d_{1}}-d^{2}\] \[= (d_{2}-d)(d-d_{1}).\]
When either \(d_{1}\) or \(d_{2}\) is close to \(d\), the variance \(\sigma^{2}\) is small. In particular, the minimum variance is \(\sigma^{2}=1\) at \(d_{1}=d-1\) and \(d_{2}=d+1\). As shown in Fig. 1, the variance \(\sigma^{2}\) is linear in \(\Delta d\) for a constant \(d_{1}\). By substituting \(\Delta d=d_{2}-d_{1}\) into Eq. (7), we obtain
\[\sigma^{2} = (d_{2}-d)(d-d_{1}) \tag{8}\] \[= (d_{1}+\Delta d-d)(d-d_{1})\] \[= (d-d_{1})\Delta d-(d-d_{1})^{2}.\]
The increase of the variance \(\sigma^{2}\) is more pronounced for smaller \(d_{1}\), e.g. the green and red lines in Fig. 1. Note that \(\Delta d\) must be a divisor of \(N\) from Eqs. (3) and (4).
### Random perturbations
As random perturbations of regular graphs, we introduce networks modified by removing and adding links of random regular graphs. Here, \(0\leq p\leq 1\) is the ratio of removed links to the existing links. After the removal, to keep the average degree \(d\) fixed, the same number \(Mp\) of links are added between randomly chosen nodes, prohibiting multi-links and self-loops. At \(p=0\), all links are unchanged, while at \(p=1\), all links are rewired as in Erdos-Renyi random graphs [11]. For \(0<p<1\), we can derive the degree distribution as follows.
\[P(k)=\sum_{k_{1}+k_{2}=k}\binom{d}{k_{1}}(1-p)^{k_{1}}p^{d-k_{1}}\frac{\lambda^{k_{2}}}{k_{2}!}e^{-\lambda}, \tag{9}\]
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \(\Delta d\) & \(d_{1}\) & \(d_{2}\) & \(\sigma^{2}\) & \(N_{1}/N\) \\ \hline
2 & 5 & 7 & 1 & 1/2 \\ \hline
3 & 5 & 8 & 2 & 2/3 \\ & 4 & 7 & 2 & 1/3 \\ \hline
4 & 5 & 9 & 3 & 3/4 \\ & 4 & 8 & 4 & 2/4 \\ & 3 & 7 & 3 & 1/4 \\ \hline
5 & 5 & 10 & 4 & 4/5 \\ & 4 & 9 & 6 & 3/5 \\ & 3 & 8 & 6 & 2/5 \\ & 2 & 7 & 4 & 1/5 \\ \hline \multicolumn{4}{c}{\(\vdots\)} \\ \hline \(\Delta d\) & \(d-1=5\) & \(d+\Delta d-1\) & \(\Delta d-1\) & \((\Delta d-1)/\Delta d\) \\ & \(d-2=4\) & \(d+\Delta d-2\) & \(2(\Delta d-2)\) & \((\Delta d-2)/\Delta d\) \\ & \(d-3=3\) & \(d+\Delta d-3\) & \(3(\Delta d-3)\) & \((\Delta d-3)/\Delta d\) \\ & \(d-4=2\) & \(d+\Delta d-4\) & \(4(\Delta d-4)\) & \((\Delta d-4)/\Delta d\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Two degrees \(d_{1}\) and \(d_{2}\), the variance \(\sigma^{2}\) of degrees, and the fraction \(N_{1}/N\) of nodes with degree \(d_{1}\) in bimodal networks with the average degree \(d=6\).
where \(k_{1}\) and \(k_{2}\) are the numbers of unremoved and added links of a node, respectively, and \(\lambda=2dp\). Fig. 2 shows the degree distributions averaged over 100 realizations for each \(p\). As \(p\) increases, the degree distributions widen from the delta function \(P(k)=\delta_{k,d}\) of the regular graph to a Poisson distribution. Even for \(p=0.005\) and \(0.01\), the degree distribution already consists of five kinds of degrees, a modality much higher than the trimodal or tetramodal distributions in [13].
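The random-perturbation procedure of this subsection can be sketched as follows (a minimal illustration with `networkx`, not the code used in this paper); links are removed and added uniformly at random while prohibiting multi-links and self-loops.

```python
import random
import networkx as nx

def randomly_perturbed_regular_graph(N, d, p, seed=0):
    """Remove a fraction p of the links of a random d-regular graph and add the
    same number of links between uniformly chosen node pairs (Sec. 2.2)."""
    rng = random.Random(seed)
    G = nx.random_regular_graph(d, N, seed=seed)
    M = G.number_of_edges()
    n_rewire = int(round(p * M))
    removed = rng.sample(list(G.edges()), n_rewire)
    G.remove_edges_from(removed)
    while G.number_of_edges() < M:           # add back the same number of links
        u, v = rng.sample(range(N), 2)       # two distinct nodes -> no self-loop
        if not G.has_edge(u, v):             # no multi-link
            G.add_edge(u, v)
    return G

G = randomly_perturbed_regular_graph(10000, 6, 0.1)
degrees = [deg for _, deg in G.degree()]
print(min(degrees), max(degrees), sum(degrees) / len(degrees))  # mean stays at d = 6
```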
Figure 1: Variances \(\sigma^{2}\) of degree distributions versus \(\Delta d\) in bimodal networks for \(N=1260\) and \(d=6\). Green, red, violet, and brown lines with circle, square, inverted triangle, and diamond points show the results for \(d_{1}=2\), \(3\), \(4\), and \(5\), respectively. Each line is straight, since \(\sigma^{2}\) is linear in \(\Delta d\) for a constant \(d_{1}\) by Eq. (8).
Figure 2: Degree distributions of random perturbations to random regular graphs with \(N=10000\) and \(d=6\). The degree distributions change from a delta function of regular graphs (\(p=0\)) to Poisson distributions (\(p=1\)).
## 3 Robustness analysis
In this section, we consider two measurements, the robustness index and the percolation threshold, for comparing the robustness against attacks of regular graphs and perturbed ones. The robustness index [15] is defined as
\[R_{\mathrm{TA}}\stackrel{{\mathrm{def}}}{{=}}\frac{1}{N}\sum_{q=1}^ {N}S(q), \tag{10}\]
where \(q\) is the number of nodes removed by attacks, and \(S(q)\) is the fraction of nodes in the largest connected component. As attacks, nodes are removed one by one in decreasing order of degree, starting from the node with the highest degree. Note that the range of \(R_{\mathrm{TA}}\) is \([1/N,0.5]\).
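A direct simulation of Eq. (10) can be sketched as follows (a minimal illustration with `networkx`; whether degrees are recomputed after each removal is an assumption here, and the Newman-Ziff algorithm used in Sec. 4 is far more efficient).

```python
import networkx as nx

def robustness_index_targeted(G):
    """Robustness index R_TA of Eq. (10): nodes are removed one by one in
    decreasing order of degree (recomputed after each removal, an assumption),
    and the relative size of the largest connected component is averaged."""
    H = G.copy()
    N = H.number_of_nodes()
    total = 0.0
    for _ in range(N):
        target = max(H.degree(), key=lambda kv: kv[1])[0]  # current highest-degree node
        H.remove_node(target)
        if H.number_of_nodes() > 0:
            total += len(max(nx.connected_components(H), key=len)) / N
    return total / N

G = nx.random_regular_graph(4, 500, seed=1)
print(robustness_index_targeted(G))  # R_TA lies in [1/N, 0.5]
```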
The percolation threshold \(f_{c}\) is the fraction of remaining nodes (occupied nodes in percolation theory) at which the whole network becomes fragmented. Below \(f_{c}\), there is no largest connected component of extensive size in the thermodynamic limit (\(N\rightarrow\infty\)). As \(f_{c}\) decreases, the network becomes more robust, because more nodes need to be removed for fragmentation. We describe the method to estimate the percolation threshold \(f_{c}\) against attacks by using generating functions for arbitrary degree distributions [22, 23]. Remember that \(P(k)\) is the degree distribution. Here, we define the excess degree distribution \(Q(k)=(k+1)P(k+1)/\langle k\rangle\), which is the probability of reaching a node with excess degree \(k\) by following a randomly selected link. We consider attacks that remove a fraction \(1-f\) of nodes in decreasing order of degree. Then, \(k_{\mathrm{cut}}\) denotes the highest degree among the remaining nodes, and \(\Delta f\) denotes the fraction of removed nodes with degree \(k_{\mathrm{cut}}\). The probability that a randomly selected node is not removed is
\[f=\sum_{k=k_{\mathrm{min}}}^{k_{\mathrm{cut}}}P(k)-\Delta fP(k_{\mathrm{cut}}). \tag{11}\]
The probability that a node reached by following a randomly selected link is not removed is
\[\hat{f}=\sum_{k=k_{\mathrm{min}}}^{k_{\mathrm{cut}}}Q(k-1)-\Delta fQ(k_{ \mathrm{cut}}-1). \tag{12}\]
Let \(\hat{P}(k)\) be the degree distribution of the network remaining after the attacks, and \(\hat{Q}(k)\) be the excess degree distribution. By following [24, 23], we can derive
\[\hat{P}(k)=\frac{1}{f}\left[\sum_{k^{\prime}=k_{\mathrm{min}}}^{k_{\mathrm{ cut}}}P(k^{\prime})\binom{k^{\prime}}{k}\hat{f}^{k}(1-\hat{f})^{k^{\prime}-k}- \Delta fP(k_{\mathrm{cut}})\binom{k_{\mathrm{cut}}}{k}\hat{f}^{k}(1-\hat{f})^ {k_{\mathrm{cut}}-k}\right], \tag{13}\]
\[\hat{Q}(k)=\frac{1}{f}\left[\sum_{k^{\prime}=k_{\mathrm{min}}}^{k_{\mathrm{ cut}}}Q(k^{\prime})\binom{k^{\prime}}{k}\hat{f}^{k}(1-\hat{f})^{k^{\prime}-k}- \Delta fQ(k_{\mathrm{cut}})\binom{k_{\mathrm{cut}}-1}{k}\hat{f}^{k}(1-\hat{f} )^{k_{\mathrm{cut}}-1-k}\right]. \tag{14}\]
We consider the generating functions \(F_{0}(x)\) and \(F_{1}(x)\) for the distributions \(\hat{P}(k)\) and \(\hat{Q}(k)\) after the attacks,
\[F_{0}(x)=\sum_{k}\hat{P}(k)x^{k}=\frac{1}{f}\sum_{k^{\prime}=k_{ \mathrm{min}}}^{k_{\mathrm{cut}}}P(k^{\prime})(\hat{f}x+1-\hat{f})^{k^{\prime} }-\frac{\Delta f}{f}P(k_{\mathrm{cut}})(\hat{f}x+1-\hat{f})^{k_{\mathrm{cut}}}, \tag{15}\] \[F_{1}(x)=\sum_{k}\hat{Q}(k)x^{k}=\frac{1}{f}\sum_{k^{\prime}=k_{ \mathrm{min}}-1}^{k_{\mathrm{cut}}-1}Q(k^{\prime})(\hat{f}x+1-\hat{f})^{k^{ \prime}}-\frac{\Delta f}{\hat{f}}Q(k_{\mathrm{cut}}-1)(\hat{f}x+1-\hat{f})^{k _{\mathrm{cut}}-1}. \tag{16}\]
The fraction of nodes belonging to the largest connected component after the attacks is
\[s=f(1-F_{0}(u)), \tag{17}\]
where \(u\) is the solution of the self-consistent equation \(u=F_{1}(u)\), which we obtain numerically as a fixed point. Here, \(s\) is equivalent to \(S(q)\) for \(q=(1-f)N\) in Eq. (10). Since the existence of a solution \(u<1\) is the condition for the appearance of the largest connected component (\(s>0\)), the percolation threshold \(f_{c}\) can be obtained from the following condition,
\[F_{1}^{\prime}(1) = \sum_{k^{\prime}=k_{\mathrm{min}}-1}^{k_{\mathrm{cut}}-1}k^{\prime }Q(k^{\prime}) \tag{18}\] \[= \sum_{k=k_{\mathrm{min}}}^{k_{\mathrm{cut}}}\frac{k(k-1)P(k)}{ \langle k\rangle}-\Delta f\frac{k_{\mathrm{cut}}(k_{\mathrm{cut}}-1)P(k_{ \mathrm{cut}})}{\langle k\rangle}=1.\]
We numerically obtain \(f_{c}\) from Eq. (11) with the values of \(k_{\rm cut}\) and \(\Delta f\) that satisfy the condition of Eq. (18). Note that this estimate of \(f_{c}\) applies only to randomized networks such as those generated by the configuration model. Here, we apply Eqs. (11)-(18) to the degree distributions of Eqs. (5), (6), and (9) for the bimodal networks and the randomly perturbed networks obtained from random regular graphs.
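The following is a minimal numerical transcription of Eqs. (11) and (18) as written, returning \(f_{c}\) for a given degree distribution; the Poisson distribution used in the example is only illustrative.

```python
import numpy as np
from math import exp, factorial

def percolation_threshold_attack(P):
    """Critical fraction f_c of remaining nodes under a highest-degree attack,
    obtained by solving Eqs. (11) and (18) for (k_cut, Delta_f).
    P: dict {degree: probability} of the (randomized) degree distribution."""
    ks = np.array(sorted(P))
    pk = np.array([P[k] for k in ks], dtype=float)
    mean_k = float(np.sum(ks * pk))
    terms = ks * (ks - 1) * pk / mean_k        # summands of Eq. (18)
    cumsum = np.cumsum(terms)                  # criterion with Delta_f = 0, as k_cut grows
    for i in range(len(ks)):
        if cumsum[i] >= 1.0:                   # the criterion crosses 1 at this k_cut
            delta_f = (cumsum[i] - 1.0) / terms[i] if terms[i] > 0 else 0.0
            return float(np.sum(pk[: i + 1]) - delta_f * pk[i])   # Eq. (11)
    return 1.0                                 # the intact network is already subcritical

# illustrative Poisson degree distribution (Erdos-Renyi-like) with mean degree 6
lam = 6.0
P = {k: exp(-lam) * lam**k / factorial(k) for k in range(61)}
print(percolation_threshold_attack(P))
```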
## 4 Results
We compare these measurements \(R_{\rm TA}\) and \(1-f_{c}\) for the robustness against attacks in random regular graphs and perturbed ones with \(N=6300\) nodes and the average degree \(\langle k\rangle=4\) or 6. Note that for \(N=6300\), it is possible to generate regular graphs for \(d=4\) and \(6\). We use bimodal networks of all possible combinations of positive integers \(d_{1}\) and \(d_{2}\) with \(2\leq\Delta d\leq 10\), as shown in Table 1. In addition, we use modified networks by random perturbations for the ratios of \(p=0.005,0.01,0.1,0.25,0.5,0.75\), and \(1\).
Figs. 3(A) and 3(B) show \(R_{\rm TA}\) versus the variance \(\sigma^{2}\) of the degree distributions for random regular graphs and perturbed ones with \(N=6300\) nodes and the average degree \(\langle k\rangle=4\) or 6. \(R_{\rm TA}\) is numerically calculated by the Newman-Ziff algorithm [25]. In both panels, \(R_{\rm TA}\) increases as the variance \(\sigma^{2}\) decreases for both discrete and random perturbations (blue triangles and green hexagons). These results indicate that networks with a smaller variance of the degree distribution tend to be more robust against attacks. Furthermore, the random regular graphs (red circles) have higher robustness than the bimodal and modified networks. Remember that the combinations of degrees for the bimodal networks are comprehensive. Therefore, it is strongly suggested that random regular graphs have the optimal robustness against attacks.
For the theoretically estimated percolation threshold \(1-f_{c}\), results similar to those for the robustness index \(R_{\rm TA}\) are obtained. Figs. 4(A) and 4(B) show \(1-f_{c}\) versus the variance of the degree distributions for random regular graphs and perturbed ones with \(N=6300\) nodes and the average degree \(\langle k\rangle=4\) or 6. In both panels, \(1-f_{c}\) tends to increase as the variance \(\sigma^{2}\) decreases for both discrete and random perturbations (blue triangles and green hexagons). Furthermore, the random regular graphs (red circles) have the highest robustness. Thus, random regular graphs have the optimal robustness against attacks.
In the bimodal networks (blue triangles in Figs. 3 and 4), similar values of \(R_{\rm TA}\) or \(1-f_{c}\) appear for different variances \(\sigma^{2}\); e.g., \(R_{\rm TA}\) takes about 0.25 for both \(\sigma^{2}=2\) and 4 in Fig. 3(A). To investigate the relation between the robustness and the variance in bimodal networks, we show the ratio \(S(q)\) of the largest connected component under attacks. Fig. 5 shows the ratio \(S(q)\) in bimodal networks with \(N=6300\) nodes and the average degree \(\langle k\rangle=4\). The inverted triangles show the analytical results from Eq. (17), while the solid lines show the results of numerical simulations using the Newman-Ziff algorithm [25]. In Fig. 5, \(R_{\rm TA}\) is the area under the curve with respect to the horizontal axis. Fig. 5(A) for \(d_{1}=2\) shows that \(R_{\rm TA}\) decreases as \(d_{2}\) increases. Fig. 5(B) for \(d_{1}=3\) also shows that \(R_{\rm TA}\) decreases as \(d_{2}\) increases, although the decrease of \(R_{\rm TA}\) becomes smaller for \(d_{2}>6\). For example, there is only a small difference between the orange (\(d_{2}=8\)) and blue (\(d_{2}=9\)) lines. In particular, in Fig. 5(B), the lines for \(d_{2}>6\) are not smooth but rather multi-step. Such a phenomenon is observed for \(d_{1}=3\), which is close to the average degree \(\langle k\rangle=4\). However, the reason for this is not well understood. Similar results are obtained for \(\langle k\rangle=6\).
Figure 3: The robustness index \(R_{\rm TA}\) versus the variances \(\sigma^{2}\) of degree distributions. The results are for the average degree **(A)**\(\langle k\rangle=4\), and **(B)**\(\langle k\rangle=6\). In both figures, \(R_{\rm TA}\) becomes higher as the variance decreases. In particular, the random regular graphs (red points) are the most robust.
## 5 Conclusion
In this study, we have found the optimally robust networks against targeted attacks in varying degree distributions. Considering that a network tends to be more robust when the variance of the degree distribution is smaller, random regular graphs with the minimum variance are predicted to be optimally robust. Remember, however, that previous studies were insufficient to determine whether random regular graphs are optimally robust. We clarify the optimal robustness in varying degree distributions by comparing the robustness of random regular graphs with that of their comprehensive discrete or random perturbations, which include several modalities of degrees. By comparing the robustness index and the percolation threshold on them, we find that random regular graphs are the most robust among all the bimodal and modified networks. Our results show that random regular graphs have the optimal robustness against attacks in varying degree distributions.
## Acknowledgements
This research is supported in part by JSPS KAKENHI Grant Number JP21H03425.
Figure 4: The percolation thresholds \(1-f_{c}\) versus the variances \(\sigma^{2}\) of degree distributions. The results for the average degree **(A)**\(\langle k\rangle=4\), and **(B)**\(\langle k\rangle=6\). In both figures, \(1-f_{c}\) becomes higher as the variance decreases. Random regular graphs (red points) are the most robust. Note that a higher \(1-f_{c}\) means more robust.
Figure 5: The ratio of the largest connected components versus the fraction of remaining nodes on bimodal networks with \(N=6300\) and \(\langle k\rangle=4\). The results are shown for **(A)**\(d_{1}=2\) and **(B)**\(d_{1}=3\). |
2307.02379 | Machine learning at the mesoscale: a computation-dissipation bottleneck | Alessandro Ingrosso, Emanuele Panizon | 2023-07-05T15:46:07Z | http://arxiv.org/abs/2307.02379v1

# Machine learning at the mesoscale: a computation-dissipation bottleneck
###### Abstract
The cost of information processing in physical systems calls for a trade-off between performance and energetic expenditure. Here we formulate and study a computation-dissipation bottleneck in mesoscopic systems used as input-output devices. Using both real datasets and synthetic tasks, we show how non-equilibrium leads to enhanced performance. Our framework sheds light on a crucial compromise between information compression, input-output computation and dynamic irreversibility induced by non-reciprocal interactions.
What does a theory of computation at the mesoscopic scale look like? To begin to answer this question, we need to bridge the formalism of computation with a physical theory of systems whose energy scales are close to thermal fluctuations. Stochastic Thermodynamics (ST), by associating single stochastic trajectories with meaningful thermodynamic quantities [1; 2; 3; 4], exposes the deep relation between information and dissipation. One of the fundamental results of ST is that information and time irreversibility, as measured by the rate of Entropy Production (EP) [5; 6], are inherently related [7; 8; 9; 10; 11]. Thermodynamic Uncertainty Relations [12; 13; 14; 15] have been derived that describe fundamental precision-dissipation trade-offs, leading to a framework successfully applied to a variety of bio-chemical processes, such as chemo-sensing [16; 17; 18], copying [19], reaction networks [20; 21; 22], cascade models of synapses [22], among others.
To set the following discussion, we will refer to computation at the mesoscopic scale as the ability of a system to react to the environment - via physical interactions between its parts and external heat baths - in such a way that the modification of its state depends on some function of the environmental conditions. This transformation possibly leads the system far from equilibrium.
Encoding external signals in their entirety is one such computation. Borrowing terminology from Machine Learning (ML), a mesoscopic system can be considered as an "autoencoder", thus focusing on its ability to sense, compress information and perform error correction [23; 24].
Full encoding, however, may be energetically wasteful when a computation regards a limited aspect of the environment: discarding non-relevant information allows to strike a balance between performance and energy expenditure, in a manner crucially dependent on the task at hand. We recognize this task-dependence of the performance/cost trade-offs as the critical ingredient of any physical theory of computation.
On one side of such a trade-off lies dissipation, the study of which is starting to be addressed in many-body systems [25; 26; 27; 14]. Irreversibility of macroscopic neural dynamics is also attracting attention [28; 29; 30; 31].
The system's computational performance, on the opposite side of the trade-off, can be formulated both in information-theoretic terms and through the more practical lens of standard error metrics employed in ML. One recently emerging approach attempts to define a framework for irreversibility in formal models of computational systems [32; 33; 34], in a way that is agnostic to physical implementations.
Here, we consider generic parametrizations of mesoscopic systems whose stochastic transitions are induced by an environment, possibly out-of-equilibrium, so that resulting interactions may show non-reciprocity [35]. In particular, we focus on asymmetric spin models, which have been subject of intense study in the field of disordered systems [36; 37; 38; 39] and provide a bridge to classical models of neural computation [40; 41; 42; 43; 44].
In line with conventional formalism of neural networks, we consider the dynamics of these systems as producing internal representations of their inputs, the geometry and intrinsic dimensionality of which impact the ability to learn input-output relations. We show how entropy-producing non-reciprocal interactions [45; 46] are crucial to generate effective representations, in such a way that a fundamental trade-off emerges between expressivity and performance.
## I A computation-dissipation bottleneck
The stochastic dynamics of mesoscopic systems, usually described using continuous-time Markov processes, results from their interactions with thermal baths and external driving mechanisms. Let us consider a system \(\mathcal{S}\) with discrete states \(s\), driven by a time homogeneous input protocol \(x\). The evolution of the probability of state \(p\left(s,t\right)\) is given by a master equation with jump rates \(k_{s^{\prime}s}\) from state \(s\) to states \(s^{\prime}\). To facilitate the connection to ML, we take a set of parameters \(\theta\) that determine - rather abstractly - the jump rates.
We assume computation is performed on a timescale much longer than any initial transient. For each independent input \(x\), the system reaches a steady-state (SS) probability \(p(s|x)\), serving as internal representation of \(x\). At the (possibly non-equilibrium) SS, each input \(x\) is associated to an average EP rate, \(\Sigma\left(x\right)\), a measure of irreversibility at
the steady state and corresponding to the housekeeping heat. In Markovian systems with discrete states, the EP rate can be computed via the Schnakenberg formula [47; 48]:
\[\sigma=\frac{1}{2}\sum_{s,s^{\prime}}J_{ss^{\prime}}\log\frac{k_{ss^{\prime}}p \left(s^{\prime}\right)}{k_{s^{\prime}s}p\left(s\right)} \tag{1}\]
where \(J_{ss^{\prime}}=\left[k_{ss^{\prime}}p\left(s^{\prime}\right)-k_{s^{\prime}s} p\left(s\right)\right]\) are the steady state fluxes and we work in units where the Boltzmann constant \(\kappa_{B}=1\). Note that in our case \(\sigma=\sigma(x,\theta)\) through \(k_{ss^{\prime}},J_{ss^{\prime}}\) and \(p(s)\).
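As a minimal numerical sketch of Eq. (1), the following computes the steady state of a small rate matrix and its Schnakenberg entropy production rate; the three-state cycle used as an example is an illustrative assumption, not one of the systems studied here.

```python
import numpy as np
from scipy.linalg import null_space

def steady_state(K):
    """Steady state of a continuous-time Markov chain with rate matrix K,
    where K[s_new, s_old] is the jump rate and each column sums to zero."""
    p = null_space(K)[:, 0]
    return p / p.sum()

def schnakenberg_ep_rate(K, p):
    """Entropy production rate of Eq. (1), in units where k_B = 1.
    Pairs with a unidirectional rate are skipped (they would diverge)."""
    sigma, n = 0.0, len(p)
    for s in range(n):
        for sp in range(n):
            if s != sp and K[s, sp] > 0 and K[sp, s] > 0:
                J = K[s, sp] * p[sp] - K[sp, s] * p[s]          # steady-state flux
                sigma += 0.5 * J * np.log((K[s, sp] * p[sp]) / (K[sp, s] * p[s]))
    return sigma

# toy three-state cycle driven out of equilibrium by asymmetric rates
K = np.array([[0.0, 2.0, 0.5],
              [0.5, 0.0, 2.0],
              [2.0, 0.5, 0.0]])
K -= np.diag(K.sum(axis=0))        # set diagonal so that columns sum to zero
p = steady_state(K)
print(p, schnakenberg_ep_rate(K, p))
```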
A supervised learning task is specified by a finite set \(\mathcal{D}=(x,y)\) of input-output pairs, so the EP rate averaged over the whole dataset is simply \(\Sigma(\theta)=\frac{1}{\left|\mathcal{D}\right|}\sum_{x}\sigma\left(x,\theta\right)\). Alternatively, the learning task could be defined by a distribution over the input space \(p\left(x\right)\) and a conditional output distribution \(p\left(y|x\right)\). The (average) EP rate is similarly \(\Sigma(\theta)=\sum_{x}p\left(x\right)\sigma\left(x,\theta\right)\).
The EP rate is a function of the dynamic process alone. How the resulting \(p(s|x)\) is able to disentangle and predict the output is a separate, task-specific factor.
There exist a number of ways to define a good measure of computational performance. One possible choice is the mutual information between the internal representations \(s\) and the output \(y\), i.e. \(I(s,y)\): such choice makes no assumption on the additional computational burden needed to extract the information about \(y\), possibly encapsulated in arbitrarily complex high-order statistics of the steady state distribution. A different path is to use a small subset of moments of \(p(s|x)\) as internal representations of the inputs, to be then fed to a simpler linear readout. This approach, closer to standard ML practice, allows us to use the Mean Square Error (MSE) or Cross Entropy (CE) loss functions. Both approaches and their limitations will be explored in the following.
Given a performance measure \(\mathcal{G}(\theta)\), the trade-off can be encapsulated in a quantity:
\[\mathcal{L}\left(\theta\right)=\mathcal{G}(\theta)-\alpha\,\Sigma\left(\theta \right), \tag{2}\]
where \(\alpha\) is a positive parameter that has units of time. We study the trade-off by optimizing \(\mathcal{L}\) over the interaction parameters \(\theta\) for different values of \(\alpha\): increasing \(\alpha\), the cost of dissipation with respect to performance is enhanced, with the \(\alpha\rightarrow\infty\) limit effectively constraining the system to be at equilibrium.
In this letter, we first use numerical methods to build a multi-spin system that performs two different classification tasks. We then employ an analytically solvable 2-spin model to investigate the enhanced expressivity of non-equilibrium systems with respect to equilibrium ones, and relate it to the structure of their computational tasks.
Figure 1: Schematic of a multi-spin system processing its inputs in a classification task. **A**: The system \(\mathcal{S}\) evolves in time in the presence of constant inputs (external fields) \(x_{i}\) and interactions couplings \(W\), until a non-equilibrium steady state (NESS) \(p\left(s|x\right)\) is reached. Time evolution is associated with an entropy production rate \(\Sigma\). Information about the output label is then extracted from \(p\left(s|x\right)\). In the simplest case, a linear readout \(W_{out}\) can be used on the averages \(m_{x}\). **B**: A subset of an input-output dataset \(\mathcal{D}\).
Multi-spin systems as stochastic recurrent networks
To exemplify the computation-dissipation trade-off, we use a spin-based model to perform an input-output computation. Specifically, we consider a classification task in which inputs \(x\) - schematically represented by the tape in Fig. 1 - must be correctly associated with given output labels \(y\).
The system at hand is composed of two chains of size \(N\) with possibly asymmetric couplings. Spins of the two chains are driven by the same inputs \(x_{i}\), serving as constant external fields. Each spin \(s_{i}\) is subject to random flips with rates \(k_{s}^{(i)}\propto e^{-\beta s_{i}(Ws+x)_{i}}\). Interactions, encoded in the matrix \(W\), connect spins both along the same chain and across the two lines of spins, similarly to an implicit, stochastic version of a convolutional layer (see Appendix). When \(W\) is symmetric, the system relaxes to the equilibrium of a Hamiltonian \(\mathcal{H}=-\frac{1}{2}s^{T}Ws-x^{T}s\) at inverse temperature \(\beta\). Non-reciprocal interactions (\(W\neq W^{T}\)) lead to non-equilibrium and a non-zero EP rate. After a transient, the system reaches a steady state \(p\left(s|x\right)\), with an average magnetization \(m_{x}=\langle s|x\rangle\) and an entropy production rate \(\sigma\left(x,\theta=W\right)\). For any input-output dataset, each \(W\) will thus be associated with both a different task performance and an average EP rate \(\Sigma\).
In close analogy with standard ML methods, we implement a final linear readout of the average magnetization \(W_{out}m_{x}\), with a learnable matrix \(W_{out}\). This allows us to separately consider the system's computation as a two-step process: (i) a highly non-linear deformation of the input space \(x\) into \(m_{x}\) induced by the dynamics of the process, akin to what occurs in a hidden layer of an artificial neural network; (ii) a separation in the \(m_{x}\) space to produce the output \(y\). Note that our formalism is a stochastic, mesoscopic generalization of the recently introduced implicit layers, which serve as building blocks of deep equilibrium models [49; 50].
To optimize \(\mathcal{L}\), we coupled a standard Gillespie algorithm [51], used to simulate the system's evolution for each input field \(x\), to a gradient-based optimization method. Due to the stochastic nature of the Gillespie trajectory and the high dimension of the \(W\) parameter space, we adopted a finite-difference method called Simultaneous Perturbation Stochastic Approximation (SPSA) [52] to compute an estimate of the gradient (see Appendix for details). The solutions at each value of \(\alpha\) allow us to construct an optimal front between \(G^{*}(\Sigma)\) and \(\Sigma^{*}(\alpha)\), where \({}^{*}\) denotes the optimal values of Eq. 2, as shown in Fig. 2A,C.
We showcase our approach with two different tasks. The first is MNIST-1D [53], a one-dimensional version of the classic digit-classification dataset MNIST. Each element has an input dimension \(N=40\) and belongs to one of 10 different classes, i.e. the digits. See an example of the input configurations in Fig. 2B. To enable multi-label classification, we apply a normalized exponential function \(SM\) (softmax) to the output to get a 10-dimensional probability vector \(\hat{y}=SM(W_{out}m_{x})\), and use the negative cross-entropy between actual labels and \(\hat{y}\), \(\mathcal{G}=-CE\left(\hat{y},y\right)\), as a measure of task performance.
Our results show an inverse relationship between task performance and entropy production at steady state, Fig. 2A. Enforcing the system to be at equilibrium (\(\alpha\rightarrow\infty\)) reduces performance by \(\approx 5\%\) and accuracy - defined as the percentage of output labels identified as most probable - by \(7\%\). This highlights how non-reciprocal interactions enhance the complexity of internal representations needed for learning, at the cost of higher dissipation.
The second task is a classic random input-output association [54; 55; 56], where input components \(x_{i}^{\mu}\) of each pattern \(\mu=1,...,M\) are drawn i.i.d. from a normal distribution, and labels are random \(y\in\{-1,+1\}\) with probability \(1/2\) (Fig. 2D). We measure the performance in this task by the mean squared error (MSE): \(\mathcal{G}=-MSE\left(\hat{y},y\right)\), where \(\hat{y}=W_{out}m_{x}\). For all random instances of this second task, we reproduce the front between entropy production and performance, Fig. 2C. While quantitative details differ slightly for different instances, the performance consistently increases with the amount of non-reciprocity in the optimal coupling matrix \(W\) and therefore with dissipation in the system.
## III A tractable 2-spin system
To exemplify a general formulation of the computation-EP bottleneck, let us study a specific case of the system we introduced in the previous section, which can be solved analytically. We consider a 2-spin system with asymmetric couplings \(\theta=(J_{s}+J_{a},J_{s}-J_{a})\), driven by constant two-dimensional inputs \(x\) that act as external fields, Fig. 3A. When \(J_{a}=0\), the system respects detailed balance and reaches an equilibrium state. Non-reciprocity in the coupling between the spins leads to non-negative \(\Sigma\).
The information-coding capabilities at steady state of such a system have been recently analyzed [18]. In turn, we treat such a mesoscopic network as an input-output device. In full generality, we prescribe a stochastic rule by a known conditional distribution \(p\left(y|x\right)\), with \(y\in\{0,1\}\) a binary output variable. This formulation encompasses classic Teacher-Student setup [57; 58; 59; 60] and mixture models [61; 62] employed in the theoretical study of feed-forward neural networks. At variance with the previous examples, we relax the assumption of a linear readout and ask how much
information \(I\left(s,y\right)\) about the output \(y\) is contained in the steady state probabilities \(p\left(s|x\right)\).
Let us consider a stochastic and continuous generalization of a parity gate, where the output is prescribed by \(p\left(y=1|x\right)=\text{sigmoid}\left(\eta x_{1}^{\phi}x_{2}^{\phi}\right)\), with \(x^{\phi}=R^{\phi}x\), \(R^{\phi}\) a rotation operator of angle \(\phi\). This defines a family of tasks with a controllable degree of asymmetry in input space. Examples of such tasks are shown in Fig. 3C. The additional parameter \(\eta\) affects the sharpness in the change of the output probability as a function of \(x\).
The mutual information \(I\left(s,y\right)=H\left(s\right)-H\left(s|y\right)\) can be computed easily using the conditional independence of \(y\) and \(s\) given \(x\). For \(\phi=0\), the optimal structure is an equilibrium system (\(J_{a}^{\ast}=0\)). As \(\phi\) increases, the optimal 2-spin network has asymmetric weights (\(J_{a}^{\ast}>0\)), implying a non-zero entropy production at steady state, Fig. 3B. Limiting the system to be at equilibrium thus results in performance degradation, down to a minimum of zero information when the rotation reaches \(\phi=\pi/4\).
For a given value of \(\phi\) and the free parameter \(\alpha\), one can define the computation-dissipation trade-off in the form of maximizing Eq. 2, now with \(\mathcal{G}=I\left(s,y\right)\). Note the analogy with the formulation of task-relevance of internal representations provided by the classic Information Bottleneck [63, 64, 65]. Here, instead of a compromise between input compression and retention of output information, we trade off the latter with dissipation.
We can compare the performance of an auto-encoding system, whose couplings \(\theta^{sx}=\{J_{s}^{sx},J_{a}^{sx}\}\) are chosen using \(\mathcal{G}=I(s,x\,|\,\theta)\), with that of a system with parameters \(\theta^{sy}\) optimizing \(\mathcal{G}=I(s,y\,|\,\theta)\), the information about \(y\). The optima corresponding to \(\alpha=0\) have finite non-reciprocal terms \(J_{a}\) - see Fig 4A,B - and therefore positive, but finite, EPs. For all values of \(\phi\) there exists a maximum dissipation rate above which performance degrades [66].
Fig 4C shows the computation-dissipation front for a task with \(\phi=0.5\), each point representing a different optimal compromise between input-output performance, measured by the mutual information \(I\left(s,y\right)\), and rate of entropy production at steady state. We chose a parameter regime where a non-equilibrium solution is optimal also for \(I(s,x)\)[18]. Crucially, a system that maximizes the information on the entire input \(I\left(s,x\right)\) performs worse than a system tailored to maximize the output information. This is a hallmark of optimization of task-relevant information.
This simple system allows us to explore the relation between the non-equilibrium steady state probability \(p(s|x)\) and the task. Fixing \(J_{s}=0\), the effect of increasing \(J_{a}\) resembles a rotation by an angle of \(\pi/4\) of \(p(s|x)\) in the region where \(|x|<J_{a}\), see Fig. 3D. An increasing amount of non-reciprocity in the system will thus align the steady-state probabilities \(p(s|x)\) with the rotation induced on the conditional output \(p\left(y|x\right)\) by the angle parameter \(\phi\).
Figure 2: Computation-dissipation bottleneck for a network solving a classification task at steady state. **A**: Cross Entropy Error (CE) vs normalized Entropy Production rate \(\Sigma/N\) on the MNIST-1D dataset (\(M=4000\) datapoints, \(10\) labels, \(N=40\)) of a system composed of two spin-chains with interactions up to the 2nd nearest neighbor. **B**: Schematic of the MNIST-1D task. **C**: Mean Square Error (MSE) vs Entropy Production Rate \(\Sigma/N\) normalized by the number of spins on the Random input-output task with \(M=100\) input patterns and \(2\) labels. Each curve is the minimum over \(10\) independent initialization seeds in a Gillespie-based optimization algorithm. \(10\) different realizations of the task are shown. **D**: Example realizations of the Random dataset.
Figure 4: **A**: Color plot of the mutual information \(I\left(s,x\right)\) in the \(\left\{J_{s},J_{a}\right\}\)-plane. The optimal parameter set \(\theta^{sx}\) is shown for different values of \(\alpha\) (white: \(\alpha=0\), black: \(\alpha>0\)). **B**: Same as **A** for \(I\left(s,y\right)\) and the optimal parameter set \(\theta^{sy}\). **C**: Mutual information \(I\left(s,y\right)\) for \(\phi=0.5\) as a function of the entropy production rate at steady state \(\Sigma\) for both \(\theta^{sy}\) (black) and \(\theta^{sx}\) (grey). Inputs \(\left\{x_{1},x_{2}\right\}\) are Gaussian with correlation \(\rho=0.95\). Additional parameters: \(\beta=3\), \(\eta=3\).
Figure 3: **A**: Schematic of a 2-spin system driven by input external fields \(x_{1}\) and \(x_{2}\). **B**: Conditional probability \(p\left(y=1|x\right)\) for the family of input-output tasks described in the main text, for different values of the rotation parameter \(\phi\) and \(\eta=2\). Inputs \(\left\{x_{1},x_{2}\right\}\) are extracted from two independent Normal distributions. **C**: Mutual information \(I\left(s,y\right)\) as a function of the rotation angle \(\phi\). The optimal solutions for equilibrium (non-equilibrium) systems are shown with a dashed grey (continuous black) line. Parameters: \(\beta=3\), \(\eta=3\). **D**: Steady-state probability of the state \(s=\left(+1,+1\right)\) for \(J_{s}=0\) and increasing values of the non-reciprocity strength \(J_{a}\). The range is \(\left[-2.5,2.5\right]\) for both inputs \(x_{1}\) and \(x_{2}\).
Discussion
We introduced a framework to characterize a fundamental trade-off between computational capabilities and energetic expenditure in mesoscopic systems. We showcase how such systems can be used in supervised learning tasks and how limiting entropy production can degrade their performance, as measured either using standard loss functions in ML or with information theoretical methods.
Our results point to the general necessity to gauge encoding and task-relevance while considering energetic trade-offs. In a simple 2-spin system, we show how non-reciprocal interactions affect the capability of the system to solve different tasks optimally, independently from the encoding of the input signals: a simple modulation of the input-output task switches the optimal system configuration from an equilibrium to a highly non-equilibrium one.
Linear stochastic systems (Ornstein-Uhlenbeck processes) are another case for which one can derive an analytical expression for the computation-dissipation trade-off (see Appendix). The emerging trade-off between entropy production and output information is again controlled by the degree of asymmetry of the task in input space.
In this study, we concentrated on one-time statistics of the steady state distribution, leaving aside interesting properties of time-correlations. The study of both transient behavior and non-stationary protocols - where more general tasks can be formulated for instance by prescribing time-dependent average responses \(y\left(t\right)\) to multi-dimensional time-dependent signals \(x\left(t\right)\) - opens an interesting avenue to investigate general speed-dissipation-computation trade-offs within this framework. Special care must be used in such cases to distinguish between housekeeping and excess entropy production [67].
Studying the impact of hidden units is an important avenue for future work. In generative models, marginalization over hidden states is the crucial ingredient to induce higher-order interactions. This forms the basis for the attention mechanism in transformers [68] - arguably the most powerful ML models to date [69; 70] - as the recent works on modern Hopfield networks [71; 72; 73; 74; 75] have shown.
Drawing a bridge between ML, theoretical neuroscience and ST can prove fruitful in systematically studying how internal representations depend on the cost. Rate-distortion approaches have been used to study the impact of information compression on classification accuracy and maximal attainable rewards [76; 77; 78; 79; 80; 81], but a general theory is currently lacking. Our perspective is complementary: energetic costs are expected to have a strong impact on the complexity of internal representations, leading to different mechanisms for information processing.
## Acknowledgements
We wish to thank Antonio Celani, Roman Belousov and Edgar Roldan for fruitful discussions and for reading a preliminary version of this manuscript.
## Appendix A Steady State and Mutual Information in a Continuous Time Markov Chain
The evolution of the probability \(p\left(s,t\right)\) of state \(s\) is described by a master equation:
\[\frac{d}{dt}p\left(s,t\right)=\sum_{s^{\prime}}\left[k_{ss^{\prime}}\left(t \right)p\left(s^{\prime},t\right)-k_{s^{\prime}s}\left(t\right)p\left(s,t \right)\right], \tag{10}\]
with \(k_{ss^{\prime}}\left(t\right)\) the jump rate from state \(s^{\prime}\) to state \(s\), whose time dependence is due to a generic external protocol \(x\left(t\right)\). In our case with a constant-in-time protocol, the steady state \(p\left(s|x\right)\) can be obtained by extracting the kernel of the matrix \(R_{ss^{\prime}}=k_{ss^{\prime}}-\delta_{s,s^{\prime}}\sum_{s^{\prime\prime}}k_{s^{\prime\prime}s}\). For systems of small size, this is viable numerically using Singular Value Decomposition (SVD).
The mutual information between the input \(x\) and the system state \(s\) at steady state can be easily computed using \(I\left(s,x\right)=H\left(x\right)-H\left(s|x\right)\), with \(H\) the Shannon entropy. As for \(I\left(s,y\right)=H\left(s\right)-H\left(s|y\right)\), the entropy term \(H\left(s|y\right)\) can be easily obtained by exploiting the conditional independence between \(y\) and \(s\), which implies that the joint distribution \(p\left(s,y\right)\) can be written as:
\[p\left(s,y\right)=\sum_{x}p\left(s,y|x\right)p\left(x\right)=\sum_{x}p\left(s|x\right)p\left(y|x\right)p\left(x\right). \tag{11}\]
Using Eq. 11, the posterior distribution \(p\left(s|y\right)\) is directly calculated using the Bayes theorem.
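A minimal discrete sketch of this calculation is given below: it builds \(p(s)\), \(p(s,y)\) and the conditional entropies from \(p(s|x)\), \(p(y|x)\) and \(p(x)\), following Eq. (11); the toy distributions at the bottom are placeholders, not the 2-spin system itself.

```python
import numpy as np

def entropy(p, axis=None):
    """Shannon entropy in nats, ignoring zero entries."""
    p = np.asarray(p)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, p * np.log(p), 0.0)
    return -terms.sum(axis=axis)

def mutual_informations(p_s_given_x, p_y_given_x, p_x):
    """I(s,x) and I(s,y) for discrete x, s, y, using the conditional
    independence of y and s given x (Eq. (11)).
    Shapes: p_s_given_x (n_x, n_s), p_y_given_x (n_x, n_y), p_x (n_x,)."""
    p_s = p_x @ p_s_given_x                                    # marginal over states
    I_sx = entropy(p_s) - np.sum(p_x * entropy(p_s_given_x, axis=1))
    p_sy = np.einsum("xs,xy,x->sy", p_s_given_x, p_y_given_x, p_x)   # joint p(s, y)
    p_y = p_sy.sum(axis=0)
    p_s_given_y = p_sy / p_y                                   # columns give p(s|y)
    I_sy = entropy(p_s) - np.sum(p_y * entropy(p_s_given_y, axis=0))
    return I_sx, I_sy

# toy example: 3 equiprobable inputs, 4 system states, binary output
p_x = np.ones(3) / 3
p_s_given_x = np.array([[0.7, 0.1, 0.1, 0.1],
                        [0.1, 0.7, 0.1, 0.1],
                        [0.1, 0.1, 0.4, 0.4]])
p_y_given_x = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]])
print(mutual_informations(p_s_given_x, p_y_given_x, p_x))
```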
## Appendix B Training of a multi-spin systems
### Details on the system
We consider a system composed of two chains of size \(N\). Interactions connect spins up to the \(k^{th}\) neighbours, where we use \(k=2\). If we identify a spin by \((m,n)\) where \(1\leq m\leq N\) is the position in the chain and \(n=1,2\) the chain index, two spins \((m_{i},n_{i})\) and \((m_{j},n_{j})\) are connected if \(|m_{i}-m_{j}|\leq k\). The interaction parameter \(W_{ij}\) depends only on \(m_{i}-m_{j}\) and \(n_{i}-n_{j}\), so that the number of non-zero, fully independent parameters of \(W\) is \(8k-2\). The external input \(x\) is repeated such that it is the same for both chains. Such spin system at steady state implements a stochastic version of an implicit convolutional layer with two channels [49; 50].
### Datasets
MNIST-1D is a 1-dimensional version of size \(N=40\) of the classic MNIST handwritten digits dataset [53]. We used 4000 training samples, organized in 10 different classes, each containing roughly 400 samples. Data is available at [https://github.com/greydanus/mnist1d](https://github.com/greydanus/mnist1d), where a description of its generation from the original MNIST dataset is given.
We generated instances of the Random Task by drawing \(M=100\) patterns \(x^{\mu}\) in dimension \(N=10\), with components \(x_{i}^{\mu}\) independently from a Normal distribution. The corresponding labels \(y^{\mu}\), drawn from \(\{-1,+1\}\) with probability \(1/2\), were randomly associated with each pattern.
### Details on Gillespie simulations
The Gillespie algorithm [82; 51] offers a remarkably simple method to generate stochastic trajectories by randomly selecting sequences of jumps between states. Let us consider a system with a discrete number of states \(s\) and transition rates \(k_{ss^{\prime}}\), which are constant in time. Given a current state \(s_{\text{start}}\), the Gillespie algorithm works by identifying both the time \(\tau\) and the final state \(s_{\text{end}}\) of the following jump.
As a first step, the total rate \(k_{\text{out}}=\sum_{s}k_{ss_{\text{start}}}\) of leaving state \(s_{\text{start}}\) is computed. The time \(\tau\) until the following jump is then drawn from an exponential distribution with mean \(1/k_{\text{out}}\). The landing state is selected with probability \(p(s)=k_{ss_{\text{start}}}/k_{\text{out}}\). The trajectory is thus constructed concatenating jumps.
First, the initial state \(s_{0}\) is chosen (in our case, at random) at time \(t=0\). A first jump \((\tau_{1},s_{1})\) is selected starting from \(s_{0}\), and then a second \((\tau_{2},s_{2})\) starting from \(s_{1}\). The process is repeated until one of two criteria is met, either a total time or a maximum number of steps. Average occupations can be computed considering that the system occupies state \(s_{i}\) exactly for a time \(\tau_{i}\) between jumps \(i\) and \(i+1\).
In our system, \(s\) is a vector of \(2N\) individual spins \(s_{i}\) taking values in \(\{-1,+1\}\). We will restrict the jumps to single spin flips. Given a state \(s\), an input \(x\) (external field) and a interaction matrix \(W\), the transition where the \(i\)th spin flips has a rate \(k_{s}^{(i)}\propto e^{-\beta s_{i}h_{i}}\), with \(h_{i}=\left(Ws+x\right)_{i}\). The actual proportionality term (identical for all spins), which determines the time scale of the jumps, is not relevant since we are only interested in steady state properties and average occupancy.
To measure the average magnetization \(m_{x}\) for each input \(x\), we first select a random state \(s_{0}\) and proceed to construct a trajectory up to a final time \(T_{max}=5000\) or, alternatively, a maximum number of jumps \(N_{max}=10000\). The average magnetization of individual spins \(m_{x}\) for that input is calculated after an initial transient time of \(T_{transient}=200\) is removed.
Since we only consider the steady state, we can evaluate the entropy production rate by summing the quantity \(\Delta\sigma_{n}\equiv\log\frac{k_{s_{n}}^{(i)}}{k_{s_{n+1}}^{(i)}}=-2\beta s_{n,i}h_{n,i}\), where \(i\) is the flipped spin, for each jump \(s_{n}\to s_{n+1}\) consisting of a single spin flip, and dividing by the total time [83].
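A minimal sketch of this procedure (not the exact implementation used here) is given below; the overall rate prefactor is set to one, and the ring of non-reciprocal couplings in the example is an illustrative assumption.

```python
import numpy as np

def gillespie_spin_chain(W, x, beta=1.0, t_max=5000.0, t_transient=200.0, seed=0):
    """Single-spin-flip Gillespie dynamics with flip rates proportional to
    exp(-beta * s_i * h_i), h = W s + x. Returns the time-averaged magnetization
    after the transient and the entropy production rate estimated from
    Delta_sigma = -2 beta s_i h_i per flip (minimal sketch)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    s = rng.choice([-1.0, 1.0], size=n)
    t, m_acc, sigma_acc = 0.0, np.zeros(n), 0.0
    while t < t_max:
        h = W @ s + x
        rates = np.exp(-beta * s * h)              # flip rate of each spin
        k_out = rates.sum()
        tau = rng.exponential(1.0 / k_out)         # waiting time until the next flip
        if t > t_transient:                        # occupation-weighted magnetization
            m_acc += s * min(tau, t_max - t)
        i = rng.choice(n, p=rates / k_out)         # which spin flips
        sigma_acc += -2.0 * beta * s[i] * h[i]     # log of forward/backward rate ratio
        s[i] *= -1.0
        t += tau
    return m_acc / (t_max - t_transient), sigma_acc / t_max

# toy example: 6 spins on a ring with purely non-reciprocal couplings and a random field
rng = np.random.default_rng(1)
n = 6
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = 1.0
x = 0.5 * rng.standard_normal(n)
print(gillespie_spin_chain(W, x, beta=1.0, t_max=2000.0))
```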
### Task performance and parameter optimization.
Given an input-output pair \((x^{\mu},y^{\mu})\) from the set \(\mathcal{D}=(x,y)\), we measure task performance by first computing \(m_{x^{\mu}}\) and then the error between the prediction \(\hat{y}^{\mu}\) of the final readout and the target \(y^{\mu}\). We use two different readouts, with their respective loss functions:
* _Cross Entropy loss_: the "logit" vector \(h^{\mu}=W_{out}m_{x^{\mu}}\) is passed through a Softmax function, thus getting the normalized estimated output probabilities \(p_{k}^{\mu}=\frac{e^{h_{k}^{\mu}}}{\sum_{k^{\prime}=1}^{K}e^{h_{k^{\prime}}^{\mu}}}\), with \(K\) the number of output labels. The loss function then amounts to computing the cross-entropy with the targets \(y^{\mu}\): \(L=-\frac{1}{M}\sum_{\mu=1}^{M}\log p_{y^{\mu}}^{\mu}\);
* _MSE loss_: we compute the loss as \(L=\frac{1}{2M}\sum_{\mu=1}^{M}\left(y^{\mu}-W_{out}m_{x^{\mu}}\right)^{2}\).
The minimization of a loss \(L\) with respect to \(W_{out}\) was performed either via a linear solver (for MSE) or a multinomial classifier solver (for CE), using standard libraries in julia, which retrieve the optimal \(W_{out}^{*}\) at fixed \(W\) for the full input set. We used the MSE loss for the binary classification in the Random Task, whereas we employed the CE loss for multi-label classification in the MNIST-1D task.
#### Optimization of \(W\): SPSA
Due to the stochastic nature of the dynamics, the optimization of the interaction parameters \(W\) cannot be performed with standard gradient-based methods. Additionally, typical gradient evaluation through finite difference quickly becomes prohibitive as the number of independent parameters in \(W\) grows. To overcome this issue, we employ Simultaneous Perturbation Stochastic Approximation (SPSA) [84; 85], where the gradient is approximated via a single finite difference in a random direction of the parameter space.
To evaluate the gradient \(\nabla\mathcal{L}_{|W}\), a random vector \(\delta W\) is constructed at every update step. Two symmetrical parameters configurations are constructed: \(W^{\pm}=W\pm\delta W\). Independent dynamics are simulated to produce the average spin magnetizations \(m^{\pm}\) and measure entropy production rates \(\Sigma^{\pm}\). The average magnetizations \(m^{\pm}\) are thus used to compute the performance losses \(\mathcal{G}^{\pm}\). Finally the gradient approximation reads \(\nabla\mathcal{L}_{|W}\approx\left[\Sigma^{+}-\Sigma^{-}+\alpha(\mathcal{G}^{ -}-\mathcal{G}^{+})\right]\frac{\delta W}{2|\delta W|}\). To avoid being trapped into local minima, we performed several initializations for each value of \(\alpha\).
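For reference, a textbook-style SPSA step can be sketched as follows; the normalization of the perturbation differs slightly from the formula above, and the noisy quadratic objective only stands in for the actual combination of loss and entropy production.

```python
import numpy as np

def spsa_step(theta, objective, lr=0.05, delta=0.1, rng=None):
    """One SPSA update for a noisy scalar objective to be minimized: the gradient
    is estimated from a single symmetric finite difference along a random +/-1
    direction, so only two objective evaluations are needed per step."""
    if rng is None:
        rng = np.random.default_rng()
    direction = rng.choice([-1.0, 1.0], size=theta.shape)
    f_plus = objective(theta + delta * direction)
    f_minus = objective(theta - delta * direction)
    grad_est = (f_plus - f_minus) / (2.0 * delta) * direction
    return theta - lr * grad_est

# toy usage: a noisy quadratic stands in for the stochastic trade-off objective
rng = np.random.default_rng(0)
target = np.array([1.0, -2.0, 0.5])
noisy_objective = lambda th: float(np.sum((th - target) ** 2) + 0.1 * rng.standard_normal())
theta = np.zeros(3)
for _ in range(500):
    theta = spsa_step(theta, noisy_objective, rng=rng)
print(theta)  # drifts toward `target` despite the noisy evaluations
```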
## Appendix C Details on 2-spin system
### Steady state
The stationary state can be computed by imposing the stationary condition in Eq. 10 and the normalization of \(p\), thus getting [18]:
\[p\left(s|x\right)=e^{-\beta\left(F+\delta F\right)}/Z, \tag{12}\]
where
\[F=x_{1}s_{1}+x_{2}s_{2}+J_{s}s_{1}s_{2} \tag{13}\]
and
\[\delta F=-\beta^{-1}\log\left[e^{\beta J_{a}s_{1}s_{2}}\frac{\cosh\left( \beta\left(x_{1}-2J_{a}s_{2}\right)\right)}{\cosh\beta x_{1}+\cosh\beta x_{2} }+e^{-\beta J_{a}s_{1}s_{2}}\frac{\cosh\left(\beta\left(x_{2}-2J_{a}s_{1} \right)\right)}{\cosh\beta x_{1}+\cosh\beta x_{2}}\right]. \tag{14}\]
## Appendix D Computation-dissipation bottleneck in linear systems
Let us consider a system whose dynamics, in the presence of a constant input \(x\), is described by a multi-dimensional Ornstein-Uhlenbeck process:
\[\dot{s}=Ws+x+\sigma_{s}\xi \tag{15}\]
with \(\left\langle\xi\xi^{T}\right\rangle=\delta\left(t-t^{\prime}\right)\mathcal{I}\), where \(\mathcal{I}\) is the identity matrix. The (generally non-equilibrium) steady state distribution \(p\left(s|x\right)\) is a Gaussian with mean \(m_{x}=-W^{-1}x\) and whose covariance \(C\) solves the Lyapunov equation:
\[WC+CW^{T}+\sigma_{s}^{2}\mathcal{I}=0. \tag{16}\]
Let us consider a noisy linear function \(y=w_{0}^{T}x+\xi_{y}\), with \(\left\langle\xi_{y}\right\rangle=0\) and \(\sigma_{y}^{2}=\left\langle\xi_{y}^{2}\right\rangle\). Assuming \(x\) is a Gaussian with mean zero and covariance \(C_{x}\), one has \(\left\langle y^{2}\right\rangle=w_{0}^{T}C_{x}w_{0}+\sigma_{y}^{2}\) and \(C_{sy}=\left\langle sy\right\rangle=-W^{-1}\left\langle xy\right\rangle\).
To compute the mutual information, we use
\[I\left(s,y\right)=H\left(y\right)-H\left(y|s\right) \tag{10}\]
and the relation for the entropy of a zero-mean, \(d\) dimensional Gaussian variable \(z\) with covariance \(C_{z}\), \(H\left(z\right)=\frac{1}{2}\log\left(\left(2\pi e\right)^{d}\det C_{z}\right)\), to get:
\[I\left(s,y\right)=\frac{1}{2}\log\det\left(W^{-1}C_{x}W^{-T}+C\right)-\frac{1} {2}\log\det\left(W^{-1}C_{x|y}W^{-T}+C\right) \tag{11}\]
where we used that the covariance matrix \(C_{s}=\left\langle ss^{T}\right\rangle\), averaged over the entire input distribution, equals \(C_{s}=W^{-1}C_{x}W^{-T}+C\) and that \(C_{s|y}=C_{s}-C_{sy}C_{y}^{-1}C_{ys}\), with \(C_{s|y}\) the conditional covariance matrix of \(s\) given \(y\).
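The expressions above translate directly into a numerical routine; the sketch below (placeholder parameters, using SciPy's continuous Lyapunov solver) computes the stationary covariance and the mutual information \(I(s,y)\) from the log-determinant formula just derived.

```
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def gaussian_mutual_information(W, Cx, w0, sigma_s=0.1, sigma_y=0.1):
    """I(s, y) for the linear network, using the log-determinant formula above."""
    d = W.shape[0]
    Winv = np.linalg.inv(W)
    # stationary covariance: W C + C W^T + sigma_s^2 I = 0
    C = solve_continuous_lyapunov(W, -sigma_s**2 * np.eye(d))
    Cy = float(w0 @ Cx @ w0) + sigma_y**2          # <y^2>
    Cxy = Cx @ w0                                  # <x y>
    Cx_given_y = Cx - np.outer(Cxy, Cxy) / Cy      # conditional input covariance
    Cs = Winv @ Cx @ Winv.T + C
    Cs_given_y = Winv @ Cx_given_y @ Winv.T + C
    return 0.5 * (np.linalg.slogdet(Cs)[1] - np.linalg.slogdet(Cs_given_y)[1])

# toy usage with the 2-spin coupling matrix of the next subsection (Js, Ja chosen arbitrarily)
Js, Ja = 0.3, 0.5
W = np.array([[-1.0, Js + Ja], [Js - Ja, -1.0]])
w0 = np.array([np.cos(np.pi / 6), np.sin(np.pi / 6)])
print(gaussian_mutual_information(W, np.eye(2), w0))
```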
As shown in [86], the entropy production in the presence of a given input \(x\) can be computed in terms of an integral
\[\sigma=\int_{-\infty}^{+\infty}\frac{d\omega}{2\pi}\mathcal{E}\left(\omega\right) \tag{12}\]
where the density \(\mathcal{E}\left(\omega\right)\) is given by:
\[\mathcal{E}\left(\omega\right)=\frac{1}{2}\operatorname{Tr}\left[C\left( \omega\right)\left(C^{-1}\left(-\omega\right)-C^{-1}\left(\omega\right) \right)\right], \tag{13}\]
with \(C\left(\omega\right)\) the Fourier Transform of the steady state auto-correlation \(C\left(t-t^{\prime}\right)=\left\langle s\left(t\right)s^{T}\left(t^{\prime} \right)\right\rangle\).
The expressions derived thus far can be used to obtain the computation-dissipation bottleneck over any parametrization of the coupling matrix \(W\) by numerical optimization, for different values of the tradeoff parameter \(\alpha\). To exemplify the approach, the next section treats a 2-dimensional case where simple analytical expressions can be derived and a full enumeration of the parameter space is viable.
### An example of a computation-dissipation bottleneck in a 2-dimensional case
Let us then consider the case of a 2-particle system with an interaction matrix of the form:
\[W=\left(\begin{array}{cc}-1&J_{s}+J_{a}\\ J_{s}-J_{a}&-1\end{array}\right). \tag{14}\]
Stability is guaranteed for \(\Delta=1+J_{a}^{2}-J_{s}^{2}>0\). The solution of the Lyapunov equation (16) for an input noise with covariance \(\sigma_{s}^{2}\mathcal{I}\) is:
\[C=\frac{\sigma_{s}^{2}}{2\Delta}\left(\begin{array}{cc}1+J_{s}J_{a}+J_{a}^{ 2}&J_{s}\\ J_{s}&1-J_{s}J_{a}+J_{a}^{2}\end{array}\right). \tag{15}\]
The entropy production can be evaluated using Eq. 12 and the Fourier transform of the system's Green function:
\[G\left(\omega\right)=\left(i\omega-W\right)^{-1}=\frac{1}{\Delta-\omega^{2}+2 i\omega}\left(\begin{array}{cc}1+i\omega&J_{s}+J_{a}\\ J_{s}-J_{a}&1+i\omega\end{array}\right). \tag{16}\]
From the Fourier Transform of the steady-state auto-correlation \(C\left(\omega\right)=G\left(\omega\right)G^{\dagger}\left(\omega\right)\) we get for the entropy production density:
\[\mathcal{E}\left(\omega\right)=\frac{8\omega^{2}J_{a}^{2}}{\left|\left(1+i \omega\right)^{2}+J_{a}^{2}-J_{s}^{2}\right|^{2}}. \tag{17}\]
After integration in Eq. 13, and noting that \(C\) doesn't depend on \(x\), we get for a stable system:
\[\Sigma=2J_{a}^{2}. \tag{18}\]
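As a quick sanity check on this closed form, the frequency integral can be evaluated numerically; the snippet below uses arbitrary couplings satisfying the stability condition.

```
import numpy as np
from scipy.integrate import quad

def entropy_production_numeric(Js, Ja):
    """Integrate the spectral density of Eq. (17) and compare with the closed form 2*Ja^2."""
    Delta = 1.0 + Ja**2 - Js**2
    assert Delta > 0.0, "the couplings must satisfy the stability condition"
    density = lambda w: 8.0 * w**2 * Ja**2 / ((Delta - w**2)**2 + 4.0 * w**2)
    value, _ = quad(density, -np.inf, np.inf)
    return value / (2.0 * np.pi)

print(entropy_production_numeric(0.3, 0.5), 2 * 0.5**2)   # both approximately 0.5
```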
We show in Fig. 1 the results for a system with \(\sigma_{s}=0.1\) tasked to compute a linear function \(y=w_{0}^{T}x+\xi_{y}\) with \(w_{0}=\left[\cos\left(\frac{\pi}{6}\right),\sin\left(\frac{\pi}{6}\right)\right]\), and \(\xi_{y}\) a zero-mean Gaussian variable with standard deviation \(\sigma_{y}=0.1\).
The trade-off between entropy production and output information is controlled by the degree of asymmetry in the entries of the vector \(w_{0}\). In a similar vein, each particle \(s_{i}\) can be used as a direct readout for the output \(y\). In such a case, the average squared deviation \(MSE_{i}=\left\langle\left(y-s_{i}\right)^{2}\right\rangle\) at steady state again shows a characteristic front with respect to entropy production.
2304.04258 | A Note on "Efficient Task-Specific Data Valuation for Nearest Neighbor Algorithms" | Jiachen T. Wang, Ruoxi Jia | 2023-04-09T15:31:53Z | http://arxiv.org/abs/2304.04258v2

# A Note on "Efficient Task-Specific Data Valuation for Nearest Neighbor Algorithms"
###### Abstract
Data valuation is a growing research field that studies the influence of individual data points for machine learning (ML) models. Data Shapley, inspired by cooperative game theory and economics, is an effective method for data valuation. However, it is well-known that the Shapley value (SV) can be computationally expensive. Fortunately, Jia et al. (2019) showed that for K-Nearest Neighbors (KNN) models, the computation of Data Shapley is surprisingly simple and efficient.
In this note, we revisit the work of Jia et al. (2019) and propose a more natural and interpretable utility function that better reflects the performance of KNN models. We derive the corresponding calculation procedure for the Data Shapley of KNN classifiers/regressors with the new utility functions. Our new approach, dubbed soft-label KNN-SV, achieves the same time complexity as the original method. We further provide an efficient approximation algorithm for soft-label KNN-SV based on locality sensitive hashing (LSH). Our experimental results demonstrate that Soft-label KNN-SV outperforms the original method on most datasets in the task of mislabeled data detection, making it a better baseline for future work on data valuation.
## 1 Introduction
Data valuation is an emerging research area that aims at measuring how much a given data source contributes to the process of training machine learning (ML) models. In the study of data marketplaces, data valuation is used for ensuring equitable compensation for each data owner. In the study of explainable ML, data valuation serves to identify the training examples that significantly impact certain model behaviors. Inspired by cooperative game theory and economic principles, Ghorbani and Zou (2019) and Jia et al. (2019) initiated the study of using the Shapley value (SV) as a principled approach for data valuation, which is dubbed "Data Shapley". Since then, many different variants of Data Shapley have been proposed. However, the Shapley value is well known to be computationally expensive: its exact evaluation requires assessing the marginal contribution of each player to exponentially many coalitions of the other players,
i.e., data points in the ML context. While many Monte Carlo-based Shapley value estimation algorithms have been proposed (Lundberg and Lee, 2017; Jia et al., 2019; Illes and Kerenyi, 2019; Okhrati and Lipani, 2021; Burgess and Chapman, 2021; Mitchell et al., 2022; Lin et al., 2022; Wang and Jia, 2023; Kolpaczki et al., 2023), these approaches still require at least thousands of utility function evaluations. For ML tasks, evaluating the utility function itself (e.g., computing the validation accuracy of the ML model trained on a given dataset) is already computationally expensive, as it requires training a model from scratch. Fortunately, Jia et al. (2019) have observed that computing the Data Shapley for K-Nearest Neighbors (KNN), one of the classic yet still popular ML algorithms, is surprisingly easy and efficient. Specifically, Jia et al. (2019) show that for unweighted KNN classifiers and regressors, the Shapley value of all \(N\) data points can be computed _exactly_ in an efficient way, without the complex calculation of a weighted average of all the data points' marginal contributions, as suggested by the original formula introduced in Shapley (1953). To the best of our knowledge, KNN is the only commonly-used ML model where the exact Data Shapley can be efficiently computed (dubbed as 'KNN-SV').
In this note, we revisit the work by Jia et al. (2019) and present a refined analysis for the Data Shapley of KNN classifiers and regressors, which we refer to as _soft-label KNN-SV_. Specifically, we propose a more natural and interpretable utility function that better reflects the performance of a soft-label KNN model, compared to the one considered in Jia et al. (2019). We then derive the corresponding calculation procedure for the Data Shapley when the refined utility function is used. The computation of Soft-label KNN-SV achieves the same time complexity as the one in Jia et al. (2019). Furthermore, we present an approximation algorithm for Soft-label KNN-SV based on Locality-Sensitive Hashing (LSH), similar to the one in Jia et al. (2019), which can further improve the computational efficiency.
Finally, we compare the performance of the newly proposed Soft-label KNN-SV with the original KNN-SV on the task of mislabeled data detection. We demonstrate that Soft-label KNN-SV outperforms the original method on most datasets, indicating that it serves as a better baseline for future work on data valuation. This also highlights the importance of choosing an appropriate utility function in Data Shapley methods.
## 2 Background of Data Shapley
In this section, we formalize the data valuation problem for ML and review the concept of the Shapley value.
Data Valuation for Machine Learning. We denote a dataset \(D:=\{z_{i}\}_{i=1}^{N}\) containing \(N\) data points. The objective of data valuation is to assign a score to each training data point in a way that reflects its contribution to model training. Denote \(\mathcal{I}:=\{1,\ldots,N\}\) as the index set. To analyze a point's "contribution", we define a _utility function_ \(v:2^{\mathcal{I}}\rightarrow\mathbb{R}\), which maps any subset of the training set (identified with a subset of indices) to a score indicating the usefulness of the subset. Here \(2^{\mathcal{I}}\) denotes the power set of \(\mathcal{I}\), i.e., the collection of all subsets of \(\mathcal{I}\), including the empty set and \(\mathcal{I}\) itself. For classification tasks, a common choice for \(v\) is the validation accuracy of a model trained on the input subset. Formally, for any subset \(S\subseteq\mathcal{I}\), we have \(v(S):=\texttt{acc}(\mathcal{A}(\{z_{i}\}_{i\in S}))\), where \(\mathcal{A}\) is a learning algorithm that takes a dataset \(\{z_{i}\}_{i\in S}\) as input and returns a model, and \(\texttt{acc}\) is a metric function that evaluates the performance of a given model, e.g., the accuracy of a model on a hold-out test set. We consider utility functions with a bounded range, which aligns with commonly used metric functions such as test classification accuracy. Without loss of generality, we assume throughout this note that
\(v(S)\in[0,1]\). The goal of data valuation is to partition \(U_{\text{tot}}:=v(\mathcal{I})\), the utility of the entire dataset, to the individual data point \(i\in\mathcal{I}\). That is, we want to find a score vector \((\phi_{i}(v))_{i\in\mathcal{I}}\) where each \(\phi_{i}(v)\) represents the payoff for the owner of the data point \(i\).
The Shapley Value.The SV (Shapley, 1953) is a classic concept in cooperative game theory to attribute the total gains generated by the coalition of all players. At a high level, it appraises each point based on the (weighted) average utility change caused by adding the point into different subsets. Given a utility function \(v(\cdot)\), the Shapley value of a data point \(i\) is defined as
\[\phi_{i}\left(v\right):=\frac{1}{N}\sum_{k=1}^{N}\binom{N-1}{k-1}^{-1}\sum_{S \subseteq\mathcal{I}\setminus\{i\},|S|=k-1}\left[v(S\cup\{i\})-v(S)\right] \tag{1}\]
When the context is clear, we omit the argument and simply write \(\phi_{i}\). The popularity of the Shapley value is attributable to the fact that it is the _unique_ data value notion satisfying the following four axioms (Shapley, 1953):
* Dummy player: if \(v\left(S\cup i\right)=v(S)+c\) for all \(S\subseteq\mathcal{I}\setminus i\) and some \(c\in\mathbb{R}\), then \(\phi_{i}\left(v\right)=c\).
* Symmetry: if \(v(S\cup i)=v(S\cup j)\) for all \(S\subseteq\mathcal{I}\setminus\{i,j\}\), then \(\phi_{i}(v)=\phi_{j}(v)\).
* Linearity: For any of two utility functions \(v_{1},v_{2}\) and any \(\alpha_{1},\alpha_{2}\in\mathbb{R}\), \(\phi_{i}\left(\alpha_{1}v_{1}+\alpha_{2}v_{2}\right)=\alpha_{1}\phi_{i}\left( v_{1}\right)+\alpha_{2}\phi_{i}\left(v_{2}\right)\).
* Efficiency: for every \(v\), \(\sum_{i\in\mathcal{I}}\phi_{i}(v)=v(\mathcal{I})\).
The difference \(v(S\cup i)-v(S)\) is often termed the _marginal contribution_ of data point \(i\) to subset \(S\subseteq\mathcal{I}\setminus i\). We refer the readers to (Ghorbani and Zou, 2019; Jia et al., 2019) and the references therein for a detailed discussion about the interpretation and necessity of the four axioms in the ML context.
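To make the definition concrete, the weighted average in (1) can be computed by brute force when \(N\) is tiny; the toy utility below is purely illustrative and is only meant to show the exponential enumeration that the KNN-specific results of the next sections avoid.

```
from itertools import combinations
from math import comb

def exact_shapley(N, utility):
    """Brute-force Shapley values phi_1, ..., phi_N for a black-box utility on index subsets."""
    phi = [0.0] * N
    for i in range(N):
        others = [j for j in range(N) if j != i]
        for size in range(N):                         # |S| = size
            for S in combinations(others, size):
                weight = 1.0 / (N * comb(N - 1, size))
                phi[i] += weight * (utility(set(S) | {i}) - utility(set(S)))
    return phi

# toy utility: fraction of the "informative" points that are present in the subset
informative = {0, 2}
print(exact_shapley(4, lambda S: len(S & informative) / len(informative)))
# -> points 0 and 2 each get 0.5, the others 0, and the values sum to v({0,1,2,3}) = 1
```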
## 3 Valuing Data for Soft-label KNN Classification
We consider the setting of supervised learning. Suppose we are given a set of training data points \(D=\{(x_{i},y_{i})\}_{i=1}^{N}\) and a validation set \(D_{\text{test}}=\{(x_{\text{test},i},y_{\text{test},i})\}_{i=1}^{N_{\text{test}}}\). We also denote by \(\mathcal{I}=\{1,\ldots,N\}\) the index set for the training set \(D\). Let \(S\subseteq\mathcal{I}\) be a subset of data indices. Denote by \(\pi^{(S)}(i;x_{\text{test}})\) the index (in the full training set \(D\)) of the \(i\)-th closest data point in \(S\) to \(x_{\text{test}}\). The distance is measured through some appropriate metric, for which we use the \(\ell_{2}\) distance throughout this note. Thus, \((x_{\pi^{(S)}(1;x_{\text{test}})},x_{\pi^{(S)}(2;x_{\text{test}})},\ldots,x_{\pi^{(S)}(|S|;x_{\text{test}})})\) is a reordering of the training instances in \(S\) with respect to their distance from \(x_{\text{test}}\). When the context is clear, we omit the argument \(x_{\text{test}}\) and simply write \(\pi^{(S)}(i)\).
As we mentioned earlier, the utility function \(v(\cdot)\) is usually defined as the validation accuracy of a model trained on the input subset. In Jia et al. (2019), for a given validation set \(D_{\text{test}}\), the utility function of a \(K\)NN classifier trained on \(S\) is defined as
\[v(S;D_{\text{test}}):=\sum_{(x_{\text{test}},y_{\text{test}})\in D_{\text{test }}}v(S;(x_{\text{test}},y_{\text{test}})) \tag{2}\]
where we overload the notation slightly and write
\[v(S;(x_{\text{test}},y_{\text{test}})):=\frac{1}{K}\sum_{j=1}^{\min(K,|S|)} \mathbbm{1}[y_{\pi^{(S)}(j)}=y_{\text{test}}] \tag{3}\]
The main result in Jia et al. (2019) shows the following:
**Theorem 1** (Jia et al. (2019)1).: _Consider the utility function in (3). Given the test data point \((x_{\text{test}},y_{\text{test}})\), assume that the input dataset \(D=\{(x_{i},y_{i})\}_{i=1}^{N}\) is sorted according to \(\|x_{i}-x_{\text{test}}\|\) in ascending order. Then, the Shapley value of each training point \(\phi_{i}\) can be calculated recursively as follows:_
Footnote 1: We state a more generalized version which does not require \(N\geq K\).
\[\phi_{N} =\frac{\mathbbm{1}[y_{N}=y_{\text{test}}]}{\max(K,N)}\] \[\phi_{i} =\phi_{i+1}+\frac{\mathbbm{1}[y_{i}=y_{\text{test}}]-\mathbbm{1} [y_{i+1}=y_{\text{test}}]}{K}\frac{\min(K,i)}{i}\]
We can see that the time complexity for computing all of \((\phi_{1},\dots,\phi_{N})\) is \(O(N\log N)\) (for sorting the training set). After computing the Data Shapley score \(\phi_{i}\left(v(\cdot;(x_{\text{test}},y_{\text{test}}))\right)\) for each \((x_{\text{test}},y_{\text{test}})\in D_{\text{test}}\), one can compute the Data Shapley corresponding to the utility function on the overall validation set in (2) through the linearity axiom of the Shapley value, i.e.,
\[\phi_{i}\left(v(\cdot;D_{\text{test}})\right)=\sum_{(x_{\text{test}},y_{\text {test}})\in D_{\text{test}}}\phi_{i}\left(v(\cdot;(x_{\text{test}},y_{\text{ test}}))\right) \tag{4}\]
In Jia et al. (2019), the utility function in (3) is justified as the likelihood of a soft-label KNN-classifier in predicting the correct label \(y_{\text{test}}\) for \(x_{\text{test}}\). However, this justification is no longer true when \(|S|<K\). The actual likelihood in this case is \(\frac{1}{|S|}\sum_{j=1}^{|S|}\mathbbm{1}[y_{\pi^{(S)}(j)}=y_{\text{test}}]\) instead of \(\frac{1}{K}\sum_{j=1}^{|S|}\mathbbm{1}[y_{\pi^{(S)}(j)}=y_{\text{test}}]\). Therefore, in this note, we re-define the utility function here as
\[v(S;(x_{\text{test}},y_{\text{test}})):=\begin{cases}\frac{1}{C}&|S|=0\\ \frac{1}{\min(K,|S|)}\sum_{j=1}^{\min(K,|S|)}\mathbbm{1}[y_{\pi^{(S)}(j)}=y_{ \text{test}}]&|S|>0\end{cases} \tag{5}\]
where \(C\) is the number of classes for the corresponding classification task. This utility function corresponds to the prediction accuracy of an unweighted, soft-label KNN classifier. When \(|S|=0\), we set the utility as the accuracy of a random guess. Using this utility function, we derive a similar procedure for calculating the SV of the KNN classifier, where the runtime stays the same at \(O(N\log N)\). We refer to the Data Shapley when using this utility function as soft-label KNN-SV.
**Theorem 2**.: _Consider the utility function in (5). Given the test data point \((x_{\text{test}},y_{\text{test}})\), assume that the input dataset \(D=\{(x_{i},y_{i})\}_{i=1}^{N}\) is sorted according to \(\|x_{i}-x_{\text{test}}\|\) in ascending order. Then, the Shapley value of each training point \(\phi_{i}\) can be calculated recursively as follows:_
\[\phi_{N} =\frac{1[N\geq 2]}{N}\left(\mathbbm{1}\left[y_{N}=y_{\text{test}} \right]-\frac{\sum_{i=1}^{N-1}\mathbbm{1}\left[y_{i}=y_{\text{test}}\right]}{N- 1}\right)\left(\sum_{j=1}^{\min(K,N)-1}\frac{1}{j+1}\right)+\frac{1}{N}\left( \mathbbm{1}\left[y_{N}=y_{\text{test}}\right]-\frac{1}{C}\right)\] \[\phi_{i} =\phi_{i+1}\] \[\quad+\frac{\mathbbm{1}[y_{i}=y_{\text{test}}]-\mathbbm{1}[y_{i+ 1}=y_{\text{test}}]}{N-1}\left[\sum_{j=1}^{\min(K,N)}\frac{1}{j}+\frac{ \mathbbm{1}[N\geq K]}{K}\left(\frac{\min(i,K)\cdot(N-1)}{i}-K\right)\right]\]
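A direct transcription of this recursion might look as follows; the function assumes a single test point whose training labels have already been sorted by distance (as in the theorem statement), and it is only a sketch rather than a tested reference implementation.

```
import numpy as np

def soft_label_knn_shapley(y_sorted, y_test, K, C):
    """Shapley values of Theorem 2 for one test point; y_sorted[i] is the label of the
    (i+1)-th nearest training point to x_test and C is the number of classes."""
    N = len(y_sorted)
    match = (np.asarray(y_sorted) == y_test).astype(float)
    phi = np.zeros(N)
    # base case phi_N
    tail = sum(1.0 / (j + 1) for j in range(1, min(K, N)))     # sum_{j=1}^{min(K,N)-1} 1/(j+1)
    phi[N - 1] = (match[N - 1] - 1.0 / C) / N
    if N >= 2:
        phi[N - 1] += (match[N - 1] - match[:N - 1].mean()) * tail / N
    # recursion phi_i = phi_{i+1} + ...
    H = sum(1.0 / j for j in range(1, min(K, N) + 1))          # sum_{j=1}^{min(K,N)} 1/j
    for i in range(N - 1, 0, -1):                              # i is the 1-indexed rank
        bracket = H + (1.0 if N >= K else 0.0) / K * (min(i, K) * (N - 1) / i - K)
        phi[i - 1] = phi[i] + (match[i - 1] - match[i]) / (N - 1) * bracket
    return phi
```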
## 4 LSH-based Approximation for Soft-label KNN-SV
As we can see from Theorem 1 and 2, for every test data point, the exact computation of the soft-label KNN-SV requires sorting all of the training data points according to their distances to _every_ validation data point. Hence, similar to the original KNN-SV proposed in Jia et al. (2019), it takes \(O(N_{\text{test}}N\log(N))\) many operations to compute the soft-label KNN-SV for all training data points with respect to the full validation set. If both the training set size \(N\) and the test set size \(N_{\text{test}}\) are large, the \(NN_{\text{test}}\) factor in the computational complexity can be prohibitively large.
To accelerate the computation, Jia et al. (2019) propose an efficient approximation algorithm for the original KNN-SV based on the locality-sensitive hashing (LSH) technique. In this section, we extend the approximation algorithm to soft-label KNN-SV.
### Approximating Soft-label KNN-SV with \(K^{*}\) nearest neighbors only
Similar to Jia et al. (2019), we identify the following approximation for soft-label KNN-SV when one can only identify \(K^{*}\) nearest neighbors. Hence, instead of sorting the entire training set, one can instead aim for a slightly easier task of identifying the \(K^{*}\) nearest neighbors of the validation data point.
**Theorem 3**.: _Consider the utility function defined in (5). Suppose one can find the \(K^{*}\) nearest neighbors of \(x_{\text{test}}\) where \(K^{*}<N\). When \(N\geq\max(2,K)\), the approximation \(\widehat{\phi}\) defined as_
\[\widehat{\phi}_{i} =\frac{1}{N}\left(\frac{1}{2}-\frac{1}{C}\right)\quad\text{ for any }i\geq K^{*}\] \[\widehat{\phi}_{i} =\widehat{\phi}_{i+1}+\frac{\mathbbm{1}[y_{i}=y_{\text{test}}]- \mathbbm{1}\left[y_{i+1}=y_{\text{test}}\right]}{N-1}\left[\sum_{j=1}^{\min(K,N)}\frac{1}{j}+\frac{\mathbbm{1}[N\geq K]}{K}\left(\frac{\min(i,K)\cdot(N-1)} {i}-K\right)\right]\quad\text{ for }i<K^{*}\] \[\text{ satisfies }\left\|\widehat{\phi}-\phi\right\|_{\infty}\leq\frac{1}{N} \left(\sum_{j=2}^{K-1}\frac{1}{j+1}\right)+\frac{1}{\max(K^{*},K)}=O\left( \frac{\log K}{N}+\frac{1}{\max(K^{*},K)}\right).\]
Theorem 3 indicates that we only need to find \(K^{*}\) many nearest neighbors to obtain an approximation of soft-label KNN-SV with \(\ell_{\infty}\) error \(O\left(\frac{\log K}{N}+\frac{1}{\max(K^{*},K)}\right)\). When \(K^{*}\) is of the order of \(\Theta(K)\), the approximation error will be dominated by \(\frac{1}{\max(K^{*},K)}\).
### Efficiently Finding \(K^{*}\) Nearest Neighbors with LSH
Efficiently retrieving the nearest neighbors of a query in large-scale databases has been a well-studied problem. Various approximation approaches have been proposed to improve the efficiency
of the nearest neighbor search. Among these, Locality Sensitive Hashing (LSH) has been experimentally shown to provide significant speedup for the computation of the original KNN-SV method as reported in Jia et al. (2019).
The LSH algorithm has two hyperparameters: the number of hash tables \(L\) and the number of hash bits \(M\). The algorithm first creates \(L\) hash tables. Within each hash table, the algorithm converts each data point \(x\) into a set of hash codes using an \(M\)-bit hash function \(h(x)=(h_{1}(x),\ldots,h_{M}(x))\). The hash function must satisfy a locality condition, which means that any pair of data points that are close to each other have the same hashed value with high probability, while any pair of data points that are far away from each other have the same hashed values with low probability. A commonly used hash function for LSH is \(h(x)=\lfloor\frac{w^{T}x+b}{r}\rfloor\), where \(w\) is a vector with entries sampled from \(\mathcal{N}(0,1)\) and \(b\sim\text{Unif}([0,r])\)(Datar et al., 2004). It has been shown that
\[\Pr[h(x)=h(x_{\text{test}})]=f_{h}(\|x-x_{\text{test}}\|)\]
where \(f_{h}(y)=\int_{0}^{r}\frac{1}{y}f_{2}(\frac{z}{y})\left(1-\frac{z}{r}\right)dz\), and \(f_{2}\) is the probability density function of the absolute value of the standard Gaussian distribution \(\mathcal{N}(0,1)\).
Algorithm 1 outlines the LSH-based approximation algorithm for Soft-label KNN-SV. At a high level, the algorithm preprocesses the dataset by mapping each data point to its corresponding hash values and storing them in hash tables. Then, for every validation data point \((x_{\text{test}},y_{\text{test}})\), the algorithm computes its hash value and gathers all training data points that collide with it in any of the \(L\) hash tables. By an appropriate choice of \(L\) and \(M\), we can ensure that with high probability, all of the \(K^{*}\) nearest neighbors are among the collided data points. Finally, the algorithm computes
an approximation of Soft-label KNN-SV based on Theorem 3.
```
input: \(L\) - number of hash tables, \(M\) - number of hash bits per table entry.
// Preprocessing
Sample a collection of hash functions \(\{h_{\ell,m}\}_{\ell=1,\dots,L,m=1,\dots,M}\) where each hash function is independently sampled and is of the form \(h(x)=\lfloor\frac{w^{T}x+b}{r}\rfloor,w\sim\mathcal{N}(0,I_{d}),b\sim\text{Unif}([0,r])\).
Initialize \(L\) hash tables \(\{H_{\ell}\}_{\ell=1,\dots,L}\).
for \(\ell=1,\dots,L\) do
    for \(i\in\mathcal{I}\) do
        Compute \((h_{\ell,1}(x_{i}),\dots,h_{\ell,M}(x_{i}))\) and store in hash table \(H_{\ell}\).
// Find Nearest Neighbors
for \((x_{\text{test}},y_{\text{test}})\in D_{\text{test}}\) do
    neighbor \(\leftarrow\{\}\).
    for \(\ell=1,\dots,L\) do
        Compute \((h_{\ell,1}(x_{\text{test}}),\dots,h_{\ell,M}(x_{\text{test}}))\).
        Add all elements in \(H_{\ell}\) that collide with \(x_{\text{test}}\) to neighbor.
    Remove all repeated elements in neighbor.
    // Compute Approximate soft-label KNN-SV
    if \(|\texttt{neighbor}|\geq K^{*}\) then
        Sort the elements \(x\in\texttt{neighbor}\) by \(\|x-x_{\text{test}}\|\).
        Compute the approximated soft-label KNN-SV according to Theorem 3 with the \(K^{*}\) nearest neighbors found in neighbor.
    else
        Print "Fail".
```
**Algorithm 1:** LSH-based approximation for Soft-label KNN-SV
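In code, the preprocessing and lookup stages of Algorithm 1 could be sketched as follows, with random-projection hashes of the form \(h(x)=\lfloor(w^{T}x+b)/r\rfloor\); the hyperparameters \(L\), \(M\) and \(r\) are placeholders to be chosen, e.g., along the lines of Theorem 4 below.

```
import numpy as np
from collections import defaultdict

class LSHIndex:
    """Minimal sketch of the hash-table construction and candidate lookup in Algorithm 1."""
    def __init__(self, X, L=10, M=8, r=1.0, seed=0):
        rng = np.random.default_rng(seed)
        d = X.shape[1]
        self.w = rng.normal(size=(L, M, d))           # Gaussian projection vectors
        self.b = rng.uniform(0.0, r, size=(L, M))
        self.r = r
        self.tables = [defaultdict(list) for _ in range(L)]
        for ell in range(L):
            codes = np.floor((X @ self.w[ell].T + self.b[ell]) / r).astype(int)
            for i, code in enumerate(codes):
                self.tables[ell][tuple(code)].append(i)

    def candidates(self, x_test):
        """Indices of training points colliding with x_test in at least one of the L tables."""
        hits = set()
        for ell, table in enumerate(self.tables):
            code = tuple(np.floor((self.w[ell] @ x_test + self.b[ell]) / self.r).astype(int))
            hits.update(table.get(code, []))
        return hits
```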
We now present a theorem that characterizes the success rate of finding the \(K^{*}\) nearest neighbors based on the relationship between the training set and validation data points.2
Footnote 2: He et al. [2012] introduced a metric known as “relative contrast” and linked it to the runtime of LSH in finding the 1-nearest neighbor, which was further extended to the \(K\)-nearest neighbor by Jia et al. [2019a]. However, the original analysis in He et al. [2012]’s Theorem 3.1 is vacuous and only applies to imaginary datasets where all training data points have the same distance to every validation point. In this note, we present Theorem 4, which provides a refined analysis that corrects the mathematical issues found in He et al. [2012].
**Theorem 4**.: _For training set \(D\), denote \(\pi(j;x_{\text{test}}):=\pi^{(D)}(j;x_{\text{test}})\). With probability at least \(1-\delta\) over the choices of hash functions \(\{h_{\ell,m}\}_{\ell=1,\dots,L,m=1,\dots,M}\) where each hash function is independently sampled and is of the form \(h(x)=\lfloor\frac{w^{T}x+b}{r}\rfloor,w\sim\mathcal{N}(0,I),b\sim\text{Unif}([ 0,r])\), Algorithm 1 can find all of \(x_{\text{test}}\)'s \(K^{*}\) nearest neighbors with \(M=O\left(\frac{\log N}{\log(1/p_{\max})}\right)\) and \(L=O\left(N^{c}\log(N_{\text{test}}K^{*}/\delta)\right)\), where_
\[p_{1}(x_{\text{test}}) :=f_{h}\left(\big{\|}x_{\pi(K^{*};x_{\text{test}})}-x_{\text{test }}\big{\|}\right)\] \[p_{2}(x_{\text{test}}) :=f_{h}\left(\big{\|}x_{\pi(K^{*}+1;x_{\text{test}})}-x_{\text{ test}}\big{\|}\right)\] \[p_{\max} :=\max_{x_{\text{test}}}p_{2}(x_{\text{test}})\] \[c :=\max_{x_{\text{test}}}\frac{\log p_{1}(x_{\text{test}})}{\log p _{2}(x_{\text{test}})}\]
_In this setting, there are_
\[O(MLN)=O\left(N^{1+c}\log N\frac{\log(N_{\text{test}}K^{*}/\delta)}{ \log(1/p_{\text{max}})}\right)\]
_hash bits to store, and the expected number of collided data points to check and sort is_
\[O\left(N_{\text{test}}N^{c}K^{*}\log(N_{\text{test}}K^{*}/\delta)\right)\]
The ratio \(c\) in the above theorem determines the space and time complexity of the LSH algorithm. As \(f_{h}\) is monotonically decreasing, we know that \(c\leq 1\). Intuitively, \(c\) represents the difficulty of finding all \(K^{*}\) nearest neighbors for all data points in a validation set. A larger \(c\) implies that the \((K^{*}+1)\)th nearest neighbor is likely to collide with the validation data point, making it more challenging to differentiate between data points that are within the \(K^{*}\) nearest neighbors and those that are farther.
It is worth noting that while Algorithm 1 reduces the runtime for finding nearest neighbors to each \((x_{\text{test}},y_{\text{test}})\) to sublinear in \(N\), the data preprocessing step increases to \(O(MLN)=\widetilde{O}(N^{1+c})\). Therefore, the total runtime becomes \(\widetilde{O}(N^{1+c}+N_{\text{test}}N^{c})\). Algorithm 1 provides speedups when \(N_{\text{test}}\gg N\).
## 5 Extension: Closed-form SV for Soft-label KNN Regression
We now extend Theorem 2 to unweighted soft-label KNN regression. In Jia et al. (2019), the utility function for KNN-regression is
\[v(S;(x_{\text{test}},y_{\text{test}}))=-\left(\frac{1}{K}\sum_{ j=1}^{\min(K,|S|)}y_{\pi^{(S)}(j)}-y_{\text{test}}\right)^{2} \tag{6}\]
Similar to Section 3, we also consider a more accurate and interpretable utility function for KNN Regression task in the following, and we derive a simple iterative procedure to compute the exact SV for it.
\[v(S;(x_{\text{test}},y_{\text{test}})):=\begin{cases}-y_{\text{ test}}^{2}&|S|=0\\ -\left(\frac{1}{\min(K,|S|)}\sum_{j=1}^{\min(K,|S|)}y_{\pi^{(S)}(j)}-y_{\text{ test}}\right)^{2}&|S|>0\end{cases} \tag{7}\]
**Theorem 5**.: _Consider the utility function in (7). Given the test data point \((x_{\text{test}},y_{\text{test}})\), assume that the input dataset \(D=\{(x_{i},y_{i})\}_{i=1}^{N}\) is sorted according to \(\|x_{i}-x_{\text{test}}\|\) in ascending order. Then, the Shapley value of each training point \(\phi_{i}\) can be calculated recursively as follows:_
\[\phi_{i}-\phi_{i+1}=\frac{1}{N-1}\left[(y_{i}-y_{i+1})^{2}A_{1}+ 2(y_{i}-y_{i+1})A_{2}-2y_{\text{test}}(y_{i}-y_{i+1})A_{3}\right] \tag{8}\]
_and_
\[\phi_{N}=\frac{1}{N}(*)+\frac{1}{N}\left[y_{\text{test}}^{2}-(y_{ N}-y_{\text{test}})^{2}\right] \tag{9}\]
_where_
\[A_{1} =\sum_{j=1}^{K}\frac{1}{j^{2}}+\frac{1}{K^{2}}\left(\frac{(N-1)\min(K,i)}{i}-K\right)\] \[A_{2} =\frac{1}{N-2}\left(\sum_{\ell\in\mathcal{I}\backslash\{i,i+1\}}^ {N}y_{\ell}\right)\left(\sum_{j=1}^{K-1}\frac{j}{(j+1)^{2}}\right)\] \[\quad+\frac{1}{K^{2}}\left(\left(\sum_{\ell=1}^{i-1}y_{\ell} \right)\left(\frac{(N-1)\min(K,i)\min(K-1,i-1)}{2(i-1)i}-\frac{(K-1)K}{2(N-2)}\right)\right.\] \[\qquad\qquad\left.+\sum_{\ell=i+2}^{N}y_{\ell}\left(\frac{(N-1) \min(K,\ell-1)\min(K-1,\ell-2)}{2(\ell-1)(\ell-2)}-\frac{(K-1)K}{2(N-2)}\right)\right)\] \[A_{3} =\left(\sum_{j=1}^{K}\frac{1}{j}\right)+\min(K,i)\frac{N-1}{iK}-1\]
_and_
\[(*) =\sum_{j=1}^{K-1}\left[\frac{2j+1}{j^{2}(j+1)^{2}}\left(\frac{j(j -1)}{(N-1)(N-2)}\left(\sum_{i=1}^{N-1}y_{i}\right)^{2}+\frac{j(N-j-1)}{(N-1)(N -2)}\sum_{i=1}^{N-1}y_{i}^{2}\right)\right.\] \[\qquad\qquad+\left(-\frac{2y_{N}}{(j+1)^{2}}-\frac{2y_{\text{test }}}{j(j+1)}\right)\frac{j}{N-1}\sum_{i=1}^{N-1}y_{i}\] \[\qquad\qquad+\left.\left(\frac{y_{N}}{j+1}-2y_{\text{test}} \right)\left(-\frac{y_{N}}{j+1}\right)\right]\]
## 6 Numerical Evaluation
We compare the effectiveness of Soft-Label KNN-SV and the Original KNN-SV on the task of mislabeled data detection. We use 13 standard datasets that are previously used in the data valuation literature as the benchmark tasks. The description for dataset preprocessing is deferred to Appendix B. We generate noisy labeled samples by flipping labels for a randomly chosen 10% of training data points. We use F1-score as the performance metric for mislabeling detection.
We consider two different detection rules for mislabeled data: **(1) Ranking.** We mark a data point as mislabeled if its data value is less than the 10th percentile of all data value scores. **(2) Cluster.** We use a clustering-based procedure, as the number of mislabeled data points and the threshold for detecting noisy samples are usually unknown in practice. Specifically, we first divide all data values into two clusters using the K-Means clustering algorithm and then classify a data point as a noisy sample if its value is less than the minimum of the two cluster centers. This detection rule is adapted from Kwon and Zou (2022).
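Both rules are straightforward to implement; the sketch below assumes the per-point data values have been collected in a one-dimensional NumPy array.

```
import numpy as np
from sklearn.cluster import KMeans

def detect_by_ranking(values, quantile=10):
    """Flag points whose data value falls below the given percentile of all scores."""
    return values < np.percentile(values, quantile)

def detect_by_cluster(values, seed=0):
    """Split the values into two clusters and flag points below the smaller cluster center."""
    km = KMeans(n_clusters=2, n_init=10, random_state=seed).fit(values.reshape(-1, 1))
    return values < km.cluster_centers_.min()
```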
The results are shown in Table 1. We set \(K=5\) for all results (we found that the performance is relatively robust against the choice of \(K\)). As we can see, Soft-label KNN-SV slightly outperforms the original KNN-SV on most of the datasets. This finding indicates that Soft-label KNN-SV could be considered as a baseline approach for future research on data valuation.
## 7 Conclusion
In this technical note, we present an improved version of KNN-SV which considers a more natural and interpretable utility function for soft-label KNN. We also present a similar LSH-based approximation algorithm for the new KNN-SV, and we provide a refined analysis for the algorithm which eliminates the flawed assumptions made in He et al. (2012). Moreover, we empirically show that the newly proposed soft-label KNN-SV consistently outperforms the original one. This note advocates using soft-label KNN-SV as a better baseline approach when developing data valuation techniques.
## Acknowledgments
We thank Yuqing Zhu at UC Santa Barbara for the helpful discussion on the theoretical analysis for "relative contrast" in He et al. (2012).
| **Dataset** | **Original (R)** | **Soft-Label (R)** | **Original (C)** | **Soft-Label (C)** |
| --- | --- | --- | --- | --- |
| **MNIST** | 0.86 | 0.86 | 0.529 | 0.529 |
| **FMNIST** | 0.68 | 0.68 | 0.571 | 0.571 |
| **CIFAR10** | 0.18 | 0.18 | 0.043 | 0.043 |
| **Fraud** | 0.775 | 0.775 | 0.504 | 0.504 |
| **Creditcard** | 0.23 | 0.24 **(+0.1)** | 0.23 | 0.244 **(+0.014)** |
| **Vehicle** | 0.375 | 0.38 **(+0.05)** | 0.197 | 0.198 **(+0.001)** |
| **Apsfail** | 0.675 | 0.675 | 0.406 | 0.412 **(+0.06)** |
| **Click** | 0.18 | 0.19 **(+0.1)** | 0.2 | 0.206 **(+0.006)** |
| **Phoneme** | 0.535 | 0.545 **(+0.1)** | 0.509 | 0.516 **(+0.007)** |
| **Wind** | 0.475 | 0.48 **(+0.005)** | 0.416 | 0.425 **(+0.009)** |
| **Pol** | 0.685 | 0.7 **(+0.015)** | 0.438 | 0.438 |
| **CPU** | 0.66 | 0.665 **(+0.005)** | 0.57 | 0.604 **(+0.034)** |
| **2DPlanes** | 0.64 | 0.64 | 0.553 | 0.571 **(+0.018)** |
Table 1: Comparison of the mislabeled data detection ability of the two data valuation methods (original KNN-SV and soft-label KNN-SV) on the 13 classification datasets. **(R)** denotes the Ranking detection rule, and **(C)** denotes the Cluster detection rule.
2305.04171 | Regularity of the Siciak-Zaharjuta extremal function on compact Kähler manifolds | Ngoc Cuong Nguyen | 2023-05-07T03:03:15Z | http://arxiv.org/abs/2305.04171v3

# Regularity of the Siciak-Zaharjuta extremal function on compact Kahler manifolds
###### Abstract.
We prove that the regularity of the extremal function of a compact subset of a compact Kahler manifold is a local property, and that the continuity and Holder continuity are equivalent to classical notions of the local \(L\)-regularity and the locally Holder continuous property in pluripotential theory. As a consequence we give an effective characterization of the \((\mathscr{C}^{\alpha},\mathscr{C}^{\alpha^{\prime}})\)-regularity of compact sets, the notion introduced by Dinh, Ma and Nguyen. Using this criterion all compact fat subanalytic subsets in \(\mathbb{R}^{n}\) are shown to be regular in this sense.
## 1. Introduction
_Background._ Let \((X,\omega)\) be a compact Kahler manifold of dimension \(n\). The extremal function of a Borel subset \(E\) in \(X\) is defined by
\[V_{E}(z)=\sup\{v\in PSH(X,\omega):v\leq 0\quad\text{on }E\}.\]
This is an analogue of the classical extremal function in \(\mathbb{C}^{n}\) introduced by Siciak [10] using the polynomials and characterized by Zaharjuta [11] via plurisubharmonic functions. Namely, for a bounded subset \(E\) in \(\mathbb{C}^{n}\) it is defined by
\[L_{E}(z)=\sup\{v(z):v\in\mathcal{L},\,v\leq 0\text{ on }E\}, \tag{1.1}\]
where \(\mathcal{L}\) denotes the class of plurisubharmonic functions on \(\mathbb{C}^{n}\) with logarithmic growth at infinity.
Since its introduction the extremal function has found numerous applications in pluripotential theory, approximation theory in \(\mathbb{C}^{n}\) and other areas. We refer the reader to the textbook by Klimek [12, Chapter 5] for a detailed discussion and results up to 1990, and to the surveys of Levenberg [13] and Plesniak [14] for more recent results. It often turns out that the geometrical properties of the compact subset \(E\) can be read from the regularity of its extremal function \(L_{E}\). It is well-known that \(E\) is non-pluripolar if and only if its upper semicontinuous regularization \(L_{E}^{*}\) is bounded. Also, the Bernstein-Walsh inequality and Markov's inequality for \(E\) are consequences of the continuity and Holder continuity of \(L_{E}\), respectively (see op. cit. references).
In dimension \(n=1\), for a non-polar compact set \(E\subset\mathbb{C}\), the function \(L_{E}\) coincides with the Green function of the unbounded component of \(\mathbb{C}\setminus E\) with the pole at infinity. A classical result says that the sequence of the probability counting measures for Fekete points of \(E\), called _Fekete's measures_, converges weakly to the equilibrium measure \(\mu_{\text{eq}}\) of \(E\) as the number of points goes to infinity (see, e.g., [1]). The speed of convergence of the sequence can also be quantified for a compact domain whose boundary is smooth.
The analogous problems regarding the weak convergence and the speed of convergence of the Fekete measures of a compact subset in \(\mathbb{C}^{n}\), \(n\geq 1\), had been open for a long time. The weak convergence was obtained in a deep work of Boucksom, Berman and Witt-Nystrom [1]. The speed of convergence was proved later in an important work of Dinh, Ma and Nguyen [14] for a large class of compact sets. Surprisingly, proving these results required a deep understanding of the extremal function and its weighted version on the projective space, viewed as a compact Kahler manifold with the Fubini-Study metric.
For a real-valued continuous function \(\phi\) on \(X\), we define the weighted extremal function
\[V_{E,\phi}(z)=\sup\{v(z):v\in PSH(X,\omega),\;v_{|_{E}}\leq\phi\}. \tag{1.2}\]
First, it is proved in [1] that for many applications the continuity of \(V_{E,\phi}\) with a continuous weight \(\phi\) is crucial, e.g., the Bernstein-Markov measure with respect to a plurisubharmonic weight. Next, the following class of compact sets is introduced in [14].
**Definition 1.1**.: A non-pluripolar compact subset \(K\subset X\) is said to be \((\mathscr{C}^{\alpha},\mathscr{C}^{\alpha^{\prime}})\)_-regular_, where \(0<\alpha,\alpha^{\prime}\leq 1\), if for every \(\alpha\)-Holder continuous weight \(\phi\), its weighted extremal function \(V_{K,\phi}\) is \(\alpha^{\prime}\)-Holder continuous.
Obviously, if \(K\) is \((\mathscr{C}^{\alpha},\mathscr{C}^{\alpha^{\prime}})\)-regular, then \(V_{K}\) is necessary Holder continuous for the weight \(\phi\equiv 0\). As shown in [14] one can obtain the speed of convergence of Fekete's measures of such a compact subset. There are more applications of such regular sets recently found in [11].
_Results._ Despite the fact that the regularity (continuity/Holder continuity) of the extremal function in \(\mathbb{C}^{n}\) is a classical topic, its counterpart on general compact Kahler manifolds has not been studied systematically except when the set is the whole manifold. The latter case is the envelope, which is well understood thanks to the works [1], [10], [12], [13] and [14]. In this paper we study in great detail the regularity of the extremal function of a general non-pluripolar compact subset. Our first result shows that the regularity at a given point can be localized to a neighborhood of that point.
**Theorem 1.2**.: _Let \(K\subset X\) be a non-pluripolar compact subset and \(a\in K\). Let \(B(a,r)\subset X\) denote a closed coordinate ball centered at \(a\) with small radius \(r>0\)._
1. \(V_{K}\) _is continuous at_ \(a\) _if and only if_ \(V_{K\cap B(a,r)}\) _is continuous at_ \(a\)_._
2. \(V_{K}\) _is_ \(\mu\)_-Holder continuous at_ \(a\) _if and only if_ \(V_{K\cap B(a,r)}\) _is_ \(\mu\)_-Holder continuous at_ \(a\)_._
Note that the sufficient conditions in (a) and (b) are easy consequences of monotonicity. Surprisingly, the necessary conditions also hold on compact manifolds. By contrast, for the analogous statements in \(\mathbb{C}^{n}\) it was shown that the necessary condition in (a) does not hold in general (see [12] and [13]). The equivalence (a) was first obtained by Sadullaev in [12] for the special case \(X=\mathbb{P}^{n}\), where the proof relies on the fact that \(\mathbb{P}^{n}\) admits a large coordinate chart \(\mathbb{C}^{n}\) and on the relation between the extremal functions in \(\mathbb{C}^{n}\) and \(\mathbb{P}^{n}\). Our proof of the necessary condition in (a) is quite different and works for a general compact Kahler manifold. The item (b) seems not to have been known before, and it will be very useful.
We say that a compact subset \(K\) has _a uniform density in capacity_ if there exist constants \(q>0\) and \(\varkappa>0\) such that for every \(a\in K\),
\[\inf_{0<r<1}\frac{cap(K\cap B(a,r))}{r^{q}}\geq\varkappa, \tag{1.3}\]
where \(cap(\bullet)\) is the Bedford-Taylor capacity on \(X\). This is a local property and it holds for most of natural sets as we will see in Sections 5.2, 5.3, 5.4.
The characterization in Theorem 1.2 combined with Demailly's regularization theorem allows us to prove a general sufficient condition for the regularity of weighted extremal functions on compact Kahler manifolds.
**Corollary 1.3**.: _Let \(K\subset X\) be a compact non-pluripolar subset and \(\phi\in C^{0}(X,\mathbb{R})\)._
1. _If_ \(V_{K}\) _is continuous, then_ \(V_{K,\phi}\) _is continuous._
2. _Assume_ \(K\) _has a uniform density in capacity. If_ \(V_{K}\) _and_ \(\phi\) _are Holder continuous, then_ \(V_{K,\phi}\) _is Holder continuous._
In the statements like in the corollary it is often the case that the modulus of continuity or the Holder exponent of the weighted extremal function are weaker than the unweighted ones. Corollary 1.3-(b) can be considered as a characterization of the regularity in the sense of Definition 1.1. In other words, a compact subset \(K\) having a uniform density in capacity is \((\mathscr{C}^{\alpha},\mathscr{C}^{\alpha^{\prime}})\)-regular if and only if \(V_{K}\) is Holder continuous.
Observe that the regularity of the extremal function is a geometric property of the given compact set, i.e., it does not depend on the metric that we choose to define the function (Remark 2.5). By Theorem 1.2 the regularity at a given point of the compact subset can be verified by restricting to a holomorphic coordinated ball centered at the point (Remark 2.8). In this way we found that the regularity of the extremal function is equivalent to the classical notion in pluripotential theory.
**Theorem 1.4**.: _Let \(E\subset\mathbb{C}^{n}\subset\mathbb{P}^{n}\) a non-pluripolar compact subset. Then,_
1. \(V_{E}\) _is continuous if and only if_ \(E\) _is locally_ \(L\)_-regular._
2. \(V_{E}\) _is Holder continuous and_ \(E\) _has a uniform density in capacity if and only if_ \(E\) _has locally Holder continuous property of order_ \(q\)_._
The item (a) is essentially contained in [10]. However, our contribution is that from this one can draw the conclusion for sets on a general compact Kahler manifold. The sufficient condition in (a) has been used to show the continuity of the weighted extremal functions in [1, Proposition 1.5]. On the other hand, the item (b) is a new and effective criterion. This criterion can be applied to the previous works [10], [11] and [12], where the Holder continuity of the (weighted) extremal functions were proved directly. Note that the compact sets in \(\mathbb{C}^{n}\) having Holder continuous property (or HCP for short) have been widely studied after the beautiful work of Pawlucki and Plesniak [20]. The new input here is a precise estimate on the Holder norm/coefficient, which is the growth of sup-norm of \(V_{E\cap B(a,r)}\) like \(r^{-q}\) for \(q>0\) and \(r>0\) small in the item (b).
Thanks to Theorem 1.4-(b) and Corollary 1.3-(b) we are able to prove the \((\mathscr{C}^{\alpha},\mathscr{C}^{\alpha^{\prime}})\)-regularity for many new examples in Section 5. Among them is the following subclass, contained in the class of uniformly polynomially cuspidal sets. This was, in some form, conjectured by Zeriahi [10, page 562].
**Theorem 1.5**.: _A compact fat subanalytic subset in \(\mathbb{R}^{n}\) is \((\mathscr{C}^{\alpha},\mathscr{C}^{\alpha^{\prime}})\)-regular._
Here a compact subanalytic subset \(E\) of \(\mathbb{R}^{n}\) is fat if \(E=\overline{\operatorname{int}E}\), and in general, these sets admit a cusp singularity. These subsets are fundamental objects in real algebraic geometry [1]. Consequently, we obtain the speed of convergence of sequence of Fekete's measures of a compact fat subanalytic subset in \(\mathbb{R}^{n}\) (Theorem 6.2).
_Organization._ We recall basic properties of the extremal functions on a compact Kahler manifold in Section 2. Most of the results are well-known, except for Lemma 2.7, where we observe that for compact set contained in a holomorphic unit ball coordinate, the extremal function is well comparable with the relative extremal function in that coordinate chart. In Section 3 we study the Holder continuity of the extremal function and then we prove the characterization in Theorem 1.2. Next, we consider the weighted extremal functions in Section 4 and prove Corollary 1.3. Section 5 is devoted to study many examples of local \(L\)-regularity and local HCP compact sets in \(\mathbb{C}^{n}\). The equivalence in Theorem 1.4 is then proved. Theorem 1.5 is a consequence of Theorem 1.4-(b) and Corollary 5.13. In Section 6 we give some applications related to the speed of convergence for sequences of associated measures with Fekete points of fat analytic subset in \(\mathbb{R}^{n}\) and to the regularity of the extremal function in a big cohomology class.
_Acknowledgement._ I would like to thank W. Plesniak for providing me many copies of his papers including the one [20]. These have been very valuable resources for many questions investigated here. I would also like to thank S. Kolodziej for reading the manuscripts and giving many useful comments. The author is partially supported by the National Research Foundation of Korea (NRF) grant no. 2021R1F1A1048185.
_Notation._ Through out the note we denote
\[B(a,r)=\{z\in\mathbb{C}^{n}:|z-a|\leq r\} \tag{1.4}\]
the _closed balls_ with center at \(a\) and radius \(r>0\). Similarly, on a compact Kahler manifold \((X,\omega)\), we denote by \(B(a,r)\subset X\) the closed coordinate ball with center at \(a\) and of radius \(r\), i.e., it is biholomorphic to the closed ball \(B(0,r)\) in \(\mathbb{C}^{n}\). Moreover, for \(r>0\) small enough, we may take the closed coordinate ball as
\[B(a,r)=\{x\in X:\operatorname{dist}(x,a)\leq r\},\]
where \(\operatorname{dist}(\cdot,\cdot)\) is the distance function induced by the metric \(\omega\).
## 2. Preliminaries
In this section we recall basic results related to the global extremal function on a compact Kahler manifold. These results are the analogues of the classical results in [19, 21], [22] on the Siciak-Zaharjuta extremal function on \(\mathbb{C}^{n}\). The detailed proofs are contained in [1]. The first one is [1, Theorem 9.17].
**Proposition 2.1**.: _Let \(E\subset X\) be a Borel set. Then,_
* \(E\) _is pluripolar_ \(\Leftrightarrow\)__\(\sup_{X}V_{E}^{*}\equiv+\infty\)__\(\Leftrightarrow\)__\(V_{E}^{*}\equiv+\infty\)_._
* _If_ \(E\) _is not pluripolar, then_ \(V_{E}^{*}\in PSH(X,\omega)\)_. Moreover,_ \(V_{E}^{*}\equiv 0\) _in the interior of_ \(E\)_._
Next, we have [1, Proposition 9.19] for the basic properties.
**Proposition 2.2**.:
1. _If_ \(E\subset F\)_, then_ \(V_{E}\leq V_{F}\)_._
2. _If_ \(E\) _is an open subset, then_ \(V_{E}=V_{E}^{*}\)_._
3. _If_ \(P\subset X\) _is a pluripolar, then_ \(V_{E\cup P}^{*}=V_{E}^{*}\)_._
4. _Let_ \(\{E_{j}\}\) _be an increasing sequence of subsets in_ \(X\) _and_ \(E:=\cup E_{j}\)_, then_ \(\lim_{j\to\infty}V_{E_{j}}^{*}=V_{E}^{*}.\)
5. _Let_ \(\{K_{j}\}\) _be a decreasing sequence of compact sets in_ \(X\) _and_ \(K:=\cap K_{j}\)_, then,_ \(\lim_{j\to\infty}V_{K_{j}}=V_{K}\)_. Furthermore,_ \(\lim_{j\to\infty}V_{K_{j}}^{*}=V_{K}^{*}\) _a.e._
We will frequently need the following result for compact Kahler manifolds.
**Lemma 2.3**.: _If \(K\subset X\) is a compact subset, then \(V_{K}\) is lower semi-continuous._
Proof.: Let \(v\in PSH(X,\omega)\) and \(v\leq 0\) on \(K\). By Demailly's regularization theorem there exists a sequence of smooth functions \(v_{j}\in PSH(X,\omega)\cap C^{\infty}(X)\) decreasing to \(v\) (see also [1]). Fix \(\delta>0\). By Hartogs' lemma we have, for \(j\geq j_{0}\),
\[v_{j}-\delta\leq 0\quad\text{on }K.\]
Consequently, \(V_{K}\) is supremum of a family of continuous functions, and the result follows.
This gives a simple criterion to check the continuity of the extremal functions.
**Corollary 2.4**.: _If \(K\) is a compact subset and \(V_{K}^{*}\equiv 0\) on \(K\), then \(V_{K}\) is continuous on \(X\)._
Proof.: By Proposition 2.1-(a), \(K\) is not pluripolar and \(V_{K}^{*}\in PSH(X,\omega)\). It follows from the definition that \(V_{K}^{*}\leq V_{K}\). Therefore, \(V_{K}=V_{K}^{*}\) on \(X\). Combining with Lemma 2.3 we infer that \(V_{K}\) is continuous.
**Remark 2.5**.: The continuity of \(V_{K}\) is a property of the set itself, i.e., it is independent of reference Kahler metric \(\omega\). Indeed, assume \(\omega^{\prime}\) is another Kahler metric and \(K\) is regular with respect to \(\omega\). There exists \(A>0\) such that \(\omega^{\prime}\leq A\omega\). It follows that \(V_{\omega^{\prime};K}^{*}\leq AV_{\omega;K}^{*}.\) Similarly this also holds for the Holder continuity of \(V_{K}\) (see Lemma 3.1 below). For this reason if there is no confusion then we only write \(V_{K}\) for the extremal function with respect to a fixed Kahler metric.
There is a useful relation between the extremal function and the "zero-one" relative extremal function
\[h_{K}(z)=\sup\left\{v(z):v\in PSH(X,\omega),\;v\leq 1,\quad v_{|_{K}}\leq 0 \right\}.\]
Namely, for non-pluripolar compact set \(K\), we set \(M_{K}:=\sup_{X}V_{K}^{*}\), then
\[V_{K}^{*}\leq M_{K}h_{K}^{*}. \tag{2.1}\]
**Corollary 2.6**.: _For \(\varepsilon>0\) small denote_
\[K_{\varepsilon}=\{z\in X:\operatorname{dist}(z,K)\leq\varepsilon\}.\]
_Then, \(V_{K_{\varepsilon}}\) is continuous and \(\lim_{\varepsilon\to 0}V_{K_{\varepsilon}}=V_{K}\)._
Proof.: By Proposition 2.2-(d), it suffices to prove \(V_{K_{\varepsilon}}\) is continuous. Let \(x\in K_{\varepsilon}\) be fixed. There exists \(a\in K\) such that \(\operatorname{dist}(a,x)\leq\varepsilon\) and \(B(a,\varepsilon)\subset K_{\varepsilon}\). This implies
\[V_{K_{\varepsilon}}^{*}\leq V_{B(a,\varepsilon)}^{*}\leq M_{a}h_{B(a, \varepsilon)}^{*},\]
where \(M_{a}=\sup_{X}V_{B(a,\varepsilon)}\). Now we will show that \(h^{*}_{B(a,\varepsilon)}(x)=0.\) Let \(\Omega\subset X\) be a coordinate ball centered at \(a\) which contains \(B(a,\varepsilon)\). Without loss of generality (after shrinking \(\Omega\) and choosing \(\varepsilon\) small) we may assume that
\[\omega=dd^{c}\rho(z)\quad\text{ in }\Omega\]
for a smooth plurisubharmonic function \(\rho\). Then,
\[h^{*}_{B(a,\varepsilon)}(z)+\rho(z)\leq\widehat{h}^{*}_{B(a,\varepsilon), \rho}(z),\]
where the weighted relative extremal function \(\widehat{h}_{B(a,\varepsilon),\rho}\) is given by
\[\widehat{h}_{B(a,\varepsilon),\rho}(z)=\sup\left\{v(z)\in PSH(\Omega):v(z)\leq \rho(z)+1,v_{|_{B(a,\varepsilon)}}\leq\rho\right\}.\]
To complete the proof, we will show that \(\widehat{h}_{B(a,\varepsilon),\rho}(x)=\rho(x)\). For \(\varepsilon>0\) small, \(B(a,\varepsilon)\) is a smooth domain; hence it is locally \(L\)-regular at \(x\) (see Section 5.1). Therefore, \(\widehat{h}_{B(a,\varepsilon)\cap B(x,r)}(x)=0\) for every \(0<r<\varepsilon\). By the definitions of the weighted and unweighted relative extremal functions we have
\[\widehat{h}_{B(a,\varepsilon),\rho}\leq\widehat{h}_{B(a,\varepsilon)\cap B(x,r)}+\sup_{B(x,r)}\rho.\]
This implies \(\widehat{h}_{B(a,\varepsilon),\rho}(x)\leq\sup_{B(x,r)}\rho\). Let \(r\to 0\) we get the desired inequality and the proof is completed.
We consider now a special case in which the given compact set is contained in a nice holomorphic coordinate chart. Let \(K\subset X\) be a non-pluripolar compact subset and \(a\in K\). Assume that there is a holomorphic coordinate ball \((\Omega,f)\) in \(X\) centered at \(a\) such that \(K\subset\subset\Omega\), where \(f:\Omega\to B:=B(0,1)\subset\mathbb{C}^{n}\) is a biholomorphic map with \(f(a)=0\). Suppose that \(\omega=dd^{c}\rho\) for \(\rho\in PSH(\Omega)\cap C^{\infty}(\overline{\Omega})\) satisfying that \(\rho\) attains its minimum \(\inf_{\Omega}\rho=\rho(a)=0\). (In general we can do this by shrinking \(\Omega\) and modifying \(\rho\) by a pluriharmonic function.) Therefore without loss of generality, we may also assume
\[0\leq\rho\leq 1\quad\text{ on }\overline{\Omega}.\]
Recall also the "zero-one" relative extremal function defined for a bounded set \(E\) in \(B\subset\mathbb{C}^{n}\) by
\[\widehat{h}_{E}(z)=\sup\left\{v(z):v\in PSH(B),\,v\leq 1,v_{|_{E}}\leq 0 \right\}. \tag{2.2}\]
**Lemma 2.7**.: _Let \(a\in K\) and \((\Omega,f)\) be as above. There exist two positive constants \(m,M\) depending only on \(K,\Omega\) and \(\rho\) such that on the unit ball \(B\),_
\[m\,\widehat{h}_{f(K)}(z)\leq(V_{K}+\rho)\circ f^{-1}(z)\leq M\,\widehat{h}_{f( K)}(z).\]
_Furthermore, if \(\Omega_{c}=\{V_{K}^{*}<c\}\subset\subset\Omega\) for a positive constant \(c\), then_
\[(V_{K}^{*}+\rho)\circ f=c\,\widehat{h}_{f(K)}\quad\text{in }f(\Omega_{c}).\]
Proof.: We follow closely the argument in [11, Proposition 5.3.3]. Since \(K\) is non-pluripolar, we have \(M_{\Omega}=\sup_{\Omega}V_{K}^{*}<+\infty\). Hence, \(u:=(V_{K}+\rho)\circ f^{-1}\) is plurisubharmonic in \(B\) and \(0\leq u\leq M_{\Omega}+1=:M\). By definition,
\[u\leq M\widehat{h}_{f(K)}.\]
Therefore, the second inequality follows.
For the first inequality, notice that \(m:=\inf_{\partial\Omega}\rho>0\) by the assumption. Take \(0<\varepsilon<m\). It follows from the previous corollary that if \(\delta>0\) is small enough,
then \(V_{K_{\delta}}\) is a continuous \(\omega\)-psh function, and \(K_{\delta}=\{x\in X:\operatorname{dist}(x,K)\leq\delta\}\) is relatively compact in \(\Omega\). Let \(v\in PSH(B)\) be such that \(v\leq 0\) on \(f(K)\) and \(v\leq 1\) in \(B\). Define
\[\widetilde{v}=\begin{cases}\max\{(m-\varepsilon)v\circ f-\rho,V_{K_{\delta}} \}&\text{ in }\Omega,\\ V_{K_{\delta}}&\text{ in }X\setminus\Omega.\end{cases}\]
Since \(\limsup_{x\to\partial\Omega}\left[(m-\varepsilon)\,v\circ f(x)-\rho(x)\right]\leq(m-\varepsilon)-\inf_{\partial\Omega}\rho\leq-\varepsilon\), we have \((m-\varepsilon)\,v\circ f-\rho\leq V_{K_{\delta}}\) near \(\partial\Omega\). Hence, \(\widetilde{v}\in PSH(X,\omega)\) and \(\widetilde{v}\leq 0\) on \(K\). Then,
\[(m-\varepsilon)v\circ f-\rho\leq V_{K}\quad\text{ in }\Omega.\]
By letting \(\varepsilon\) go to \(0\), we get the first inequality.
To prove the last statement of the lemma we write \(D=f(\Omega_{c})\subset\subset B\). By the second inequality we have \((V_{K}^{*}+\rho)\circ f^{-1}\leq c\,\widehat{h}_{f(K)}\). To prove the opposite inequality, take \(v\in PSH(D)\) such that \(v\leq 0\) on \(f(K)\), and \(v\leq 1\). Define
\[\widetilde{v}=\begin{cases}\max\{c\,v\circ f-\rho,V_{K}^{*}\}&\text{ in } \Omega_{c},\\ V_{K}^{*}&\text{ in }X\setminus\Omega_{c}.\end{cases}\]
Clearly, \(\widetilde{v}\in PSH(X,\omega)\) and \(\widetilde{v}\leq V_{K}^{*}\) on \(K\). Since by [1] negligible sets are pluripolar, the set \(\{V_{K}^{*}>0\}\cap K\) is pluripolar. It follows from [1, Theorem 12.5] that we can find \(\psi\in PSH(X,\omega)\) such that \(\psi=-\infty\) on this set and \(\psi<0\) on \(X\). Then, \((1-\varepsilon)\widetilde{v}+\varepsilon\psi\leq V_{K}\) in \(X\) for every \(0<\varepsilon<1\). Thus, in \(\Omega_{c}\),
\[c\,v\circ f-\rho\leq\widetilde{v}=\left(\sup_{0<\varepsilon<1}\left[(1-\varepsilon)\widetilde{v}+\varepsilon\psi\right]\right)^{*}\leq V_{K}^{*}.\]
This gives the required inequality.
**Remark 2.8**.: The above lemma is an analogue of a classical result in pluripotential theory [1, Proposition 5.3.3]. Namely, for a compact subset \(E\) in a ball \(B\) the "zero-one" relative extremal function and the extremal function \(L_{E}\) satisfy
\[m\,\widehat{h}_{E}\leq L_{E}\leq M\widehat{h}_{E}\quad\text{in }B,\]
where \(m,M\) are two positive constants depending only on \(E,B\). Since \(\rho\) is a Lipschitz continuous function on \(\bar{\Omega}\) and \(\rho(a)=0\), the continuity (resp. Holder continuity) of \(V_{K}\) at \(a\) is equivalent to the classical notion of \(L\)-regularity (resp. the Holder continuity property) of the compact set \(f(K)\subset\mathbb{C}^{n}\) at \(f(a)=0\in f(K)\).
Next, we characterize the uniform density in capacity condition of a given compact set \(K\subset X\). First, we show that it is equivalent to the control of sup-norm of the extremal function on balls. We can normalize \(\omega\) so that
\[\int_{X}\omega^{n}=1.\]
Thus for every compact subset \(K\subset X\), \(cap(K)\leq 1.\) Recall that for a Borel set \(E\subset X\), the Bedford-Taylor capacity is defined by
\[cap(E)=\sup\left\{\int_{E}(\omega+dd^{c}v)^{n}:v\in PSH(X,\omega),\,-1\leq v \leq 0\right\}.\]
By [1, Lemmas 12.2, 12.3] we know that there exists a uniform constant \(A\) such that for every compact subset \(K\),
\[\frac{1}{[cap(K)]^{\frac{1}{n}}}\leq\sup_{X}V_{K}\leq\frac{A}{cap(K)} \tag{2.3}\]
Therefore, if \(K\) has a uniform density in capacity (1.3), then
\[\sup_{X}V_{K\cap B(a,r)}\leq\frac{A}{\varkappa r^{q}},\quad 0<r<1.\]
Conversely, if we have the control \(\sup_{X}V_{K\cap B(a,r)}\leq A/r^{q}\) for \(0<r<1\), where \(A,q>0\) are uniform constants, then \(K\) has uniform density in capacity, i.e.,
\[\frac{cap(K\cap B(a,r))}{r^{nq}}\geq\frac{1}{A^{n}},\quad 0<r<1.\]
Secondly, since the uniform density in capacity is a local property, it can be verified by using the Bedford-Taylor capacity in a local coordinate. Without loss of generality we may assume that \(K\) is contained in the coordinate unit ball \(\Omega\) as above. Then, by considering its image in that chart, we may assume that
\[K\subset B(0,1/2)\subset\Omega:=B(0,1)\subset\mathbb{C}^{n}.\]
By the equivalence between the global Bedford-Taylor capacity and the local one [11, Eq. (6.2)], \(K\) has a uniform density in capacity if and only if
\[\frac{cap^{\prime}(K\cap B(a,r),\Omega)}{r^{q}}\geq\varkappa,\quad 0<r<1, \tag{2.4}\]
where for a Borel subset \(E\subset\Omega\),
\[cap^{\prime}(E,\Omega)=\sup\left\{\int_{E}(dd^{c}u)^{n}:u\in PSH(\Omega),-1 \leq u\leq 0\right\}.\]
Then, by applying the comparison between the two capacities [1] (see also [11, Theorem 2.7]), which is the local version of (2.3), the condition (2.4) is equivalent to the existence of uniform constants \(A,q^{\prime}>0\) such that for every \(a\in K\) and \(0<r<1\),
\[\sup_{\Omega}L_{K\cap B(a,r)}\leq\frac{A}{r^{q^{\prime}}}.\]
More precisely, if \(\sup_{\Omega}L_{K\cap B(a,r)}\leq A/r^{q}\), then
\[\frac{cap^{\prime}(K\cap B(a,r))}{r^{nq}}\geq\frac{1}{A}. \tag{2.5}\]
**Remark 2.9**.: To verify the uniform density in capacity in (1.3) or (2.4) it is enough to have a uniform \(0<r_{0}\leq 1\) satisfying
\[\inf_{0<r<r_{0}}\frac{cap(K\cap B(a,r))}{r^{q}}\geq\varkappa.\]
Then, by monotonicity of the capacity we get the whole range \(0<r<1\).
## 3. Holder continuity of the extremal function
In this section we study the continuity and Holder continuity of the extremal function. First, we will show that the modulus of continuity of \(V_{K}\) on \(X\) is equivalent to the one on \(K\) only. This is a generalization of a well-known result of Blocki [14, Proposition 3.5]. For simplicity we only prove it in the Holder continuity case.
Let \(K\subset X\) be a compact subset and \(a\in K\). For \(0<\delta\leq 1\) the modulus of continuity of \(V_{K}\) at \(a\) is given by
\[\varpi_{K}(a,\delta):=\sup_{|z-a|\leq\delta}V_{K}(z). \tag{3.1}\]
Then the modulus of continuity of \(V_{K}\) on \(K\) is given by
\[\varpi_{K}(\delta)=\sup\{\varpi_{K}(a,\delta):a\in K\}. \tag{3.2}\]
**Lemma 3.1**.: _Let \(K\subset X\) be a non-pluripolar compact subset and \(0<\mu\leq 1\). Then, \(V_{K}\) is \(\mu\)-Holder continuous on \(X\) if and only if it is \(\mu\)-Holder continuous on \(K\), i.e., there exists \(0<\delta_{0}\leq 1\) such that for every \(0<\delta\leq\delta_{0}\),_
\[V_{K}(z)\leq C\delta^{\mu},\quad\mathrm{dist}(z,K)\leq\delta.\]
Proof.: The necessary condition is obvious. To prove the sufficient condition, we use the regularization theorem of Demailly. Notice the Holder continuity of \(V_{K}\) on \(K\) is equivalent to
\[\varpi_{K}(\delta)\leq C\delta^{\mu}\]
for every \(0<\delta\leq\delta_{0}\), where \(C\) is a uniform constant. In particular, \(V_{K}^{*}\equiv 0\) on \(K\). Therefore, \(V:=V_{K}\) is continuous. Consider the regularization \(\rho_{t}V\) of \(\omega\)-psh function \(V\) as in [10] and
\[V_{\delta,b}(z)=\inf_{[0,\delta]}\left(\rho_{t}V(z)+c_{1}t^{2}+c_{1}t-b\log \frac{t}{\delta}\right).\]
By the estimate in [10] and [1] (see also [11, Lemma 4.1]) we know that
\[\omega+dd^{c}V_{\delta,b}\geq-(c_{0}b+2c_{1}\delta)\omega,\]
and \(\rho_{t}V+c_{1}t^{2}\) is increasing in \(t\). Here \(c_{0},c_{1}\) are uniform constants depending only on \(X\) and \(\omega\).
Consider \(b=A\delta^{\mu}\) for \(A>0\) so that \(c_{0}b+2c_{1}\delta=\delta^{\mu}\). Then
\[V_{\delta}=\frac{V_{\delta,b}}{1+\delta^{\mu}}\in PSH(X,\omega). \tag{3.3}\]
Moreover, for \(a\in K\),
\[\begin{split}(1+\delta^{\mu})V_{\delta}(a)&\leq\rho_{\delta}V(a)+c_{1}\delta^{2}+c_{1}\delta\\ &\leq\sup_{|z-a|\leq\delta}V(z)+c_{1}\delta+c_{1}\delta^{2}\\ &\leq\varpi_{K}(\delta)+c_{1}\delta+c_{1}\delta^{2}\\ &\leq c_{2}\delta^{\mu},\end{split} \tag{3.4}\]
where in the last inequality we used the fact that \(V\) is \(\mu\)-Holder continuous on \(K\) and \(c_{2}\) is a uniform constant. Moreover, \(V\geq 0\) on \(X\), so we have \(V_{\delta}(z)\geq 0\). Therefore,
\[V_{\delta}(a)\leq c_{2}\delta^{\mu}\quad\text{for }a\in K.\]
By definition of \(V\) and (3.3),
\[V_{\delta}(z)\leq V(z)+c_{2}\delta^{\mu}\quad\text{for }z\in X. \tag{3.5}\]
At this point we can conclude the Holder continuity of \(V\) on \(X\) as in the argument in [10]. Since our setting is quite different, we give all the details for the reader's convenience.
Let us fix a point \(z\in X\); then the infimum in the definition of \(V_{\delta,b}(z)\) is attained at some \(t_{0}=t_{0}(z)\). By (3.3) and (3.5) we have
\[(1+\delta^{\mu})(\rho_{t_{0}}V+c_{1}t_{0}^{2}+c_{1}t_{0}-b\log\frac{t_{0}}{ \delta}-V)\leq c_{2}\delta^{\mu}.\]
Since \(\rho_{t}V+c_{1}t^{2}+c_{1}t-V\geq 0\), we have
\[b(1+\delta^{\mu})\log\frac{t_{0}}{\delta}\geq-c_{2}\delta^{\mu}.\]
Combining this with \(b=A\delta^{\mu}\), one gets that
\[t_{0}(z)\geq\delta\kappa\quad\text{ for }\kappa=\exp\left(-\frac{2Ac_{2}}{(1+ \delta_{0}^{\mu})}\right),\]
where \(\delta_{0}\) is already fixed at the beginning, and \(\kappa\) is a uniform constant. Since \(t\mapsto\rho_{t}V+c_{1}t^{2}\) is increasing and \(t_{0}:=t_{0}(z)\geq\delta\kappa\),
\[\begin{split}\rho_{\kappa\delta}V(z)+c_{1}(\delta\kappa)^{2}+c_{1}\delta\kappa-V(z)&\leq\rho_{t_{0}}V(z)+c_{1}t_{0}^{2}+c_{1}t_{0}-V(z)\\ &\leq V_{\delta,b}(z)-V(z)\\ &=\delta^{\mu}V_{\delta}(z)+(V_{\delta}(z)-V(z)),\end{split}\]
where the second inequality uses \(-b\log(t_{0}/\delta)\geq 0\) (recall \(t_{0}\leq\delta\)).
Combining this and (3.5) we get that
\[\rho_{\kappa\delta}V(z)-V(z)\leq C\delta^{\mu}.\]
The desired estimate follows by rescaling \(\delta:=\kappa\delta\) and increasing the uniform constant \(C\).
Let us prove the characterizations of the regularity in Theorem 1.2.
Proof of Theorem 1.2.: Since \(V_{E}^{*}\leq V_{F}^{*}\) for Borel sets \(E\subset F\), the sufficient conditions in (a) and (b) are obvious. In what follows we prove the necessary conditions.
(a) Assume \(V_{K}^{*}(a)=0\); we need to show that \(V_{K\cap B}^{*}(a)=0\), where \(B:=B(a,r)\) is the closed coordinate ball centered at \(a\) with radius \(r\). Indeed, let us consider the positive relative extremal function
\[h_{K\cap B}(z)=\sup\left\{v\in PSH(X,\omega):v_{|_{K\cap B}}\leq 0,v\leq 1 \right\}.\]
Then, \(h_{K\cap B}^{*}\in PSH(X,\omega)\) with \(0\leq h_{K\cap B}^{*}\leq 1\), and a Borel set \(E\) is pluripolar if and only if \(h_{E}^{*}\equiv 1\) (see [1, page 620]).
Let \(0\leq\chi\leq 1\) be a smooth function on \(X\) such that \(\chi(a)=0\) and \(\chi\equiv 1\) on \(X\setminus B(a,r)\). We have
\[\|\chi\|_{C^{1}}\leq c_{1}/r,\quad\|\chi\|_{C^{2}}\leq c_{2}/r^{2}, \tag{3.6}\]
where \(c_{1},c_{2}\) are uniform constants independent of \(a\) and \(r\). Hence, there exists \(0<\varepsilon\leq 1/2\), which is a small multiple of \(r^{2}\), such that \(-\varepsilon\chi\) belongs to \(PSH(X,\omega/2)\). Given \(u\in PSH(X,\omega)\) satisfying \(u\leq 1\) and \(u\leq 0\) on \(K\cap B\), we define
\[\varphi(z)=\varepsilon u(z)-\varepsilon\chi(z).\]
Then, \(\varphi(z)\leq 0\) on \(K\) and it is \(\omega\)-psh. It follows from definition of \(V_{K}\) that \(\varphi(z)\leq V_{K}(z).\) Taking supremum over all such \(u\) we get
\[\varepsilon h_{K\cap B}^{*}(z)-\varepsilon\chi(z)\leq V_{K}^{*}(z). \tag{3.7}\]
Since \(V_{K}^{*}(a)=0=\chi(a)\), we have \(h_{K\cap B}^{*}(a)=0\). In particular, \(K\cap B\) is non-pluripolar. Denote \(M:=\sup_{X}V_{K\cap B}^{*}\). It follows from (2.3) that
\[1\leq M\leq\frac{A}{cap(K\cap B)}<+\infty\]
for a uniform constant \(A\).
Let \(v\in PSH(X,\omega)\) with \(v\leq 0\) on \(K\cap B\). Then, \(v\leq M\) on \(X\). It follows that \(v/M\leq h^{*}_{K\cap B}\). Taking supremum over all such \(v\) we get \(V^{*}_{K\cap B}\leq Mh^{*}_{K\cap B}\) and the proof of (a) follows.
(b) Assume \(V_{K}(z)\leq C\delta^{\mu}\) for every \(z\in X\) such that \(\operatorname{dist}(z,a)\leq\delta\leq\delta_{0}\), where \(0<\delta_{0}\leq 1\) is fixed. This implies \(V^{*}_{K}(z)\leq C\delta^{\mu}\) for every \(z\) such that \(\operatorname{dist}(z,a)\leq\delta/2\). In particular, \(V_{K}\) is continuous. Since \(\chi(a)=0\), using (3.7), we have
\[\varepsilon h^{*}_{K\cap B}(z)\leq\varepsilon\chi(z)+C\delta^{\mu}\leq(c_{1} \varepsilon/r+C)\delta^{\mu},\]
where \(C,c_{1}\) do not depend on \(r\). Combining with (2.1) we get for \(\operatorname{dist}(z,a)\leq\delta/2\),
\[V^{*}_{K\cap B}(z)\leq M\left(\frac{c_{1}}{r}+\frac{C}{\varepsilon}\right) \delta^{\mu}. \tag{3.8}\]
The proof of (b) is completed.
The above proof indeed gives the following more precise estimate.

**Corollary 3.2**.: _Let \(K,a,B(a,r)\) be as in Theorem 1.2 and assume that \(V_{K}\) is continuous._
* _Assume_ \(V_{K}\) _is_ \(\mu\)_-Holder continuous at_ \(a\)_. Then, for_ \(0<\delta\leq\delta_{0}\)_,_ (3.9) \[V_{K\cap B(a,r)}(z)\leq\frac{A}{cap(K\cap B(a,r))}\frac{\delta^{\mu}}{r^{2}}, \quad\operatorname{dist}(z,a)\leq\delta,\] _where_ \(A\) _is a uniform constant that is independent of_ \(a\) _and_ \(r\)_._
* _Conversely, if there exist uniform constants_ \(0<\mu,r_{0},\delta_{0}\leq 1\)_, and_ \(A>0\) _such that (_3.9_) holds for every_ \(a\in K\)_,_ \(0<r\leq r_{0}\) _and_ \(\operatorname{dist}(z,a)\leq\delta\leq\delta_{0}\)_, then_ \(V_{K}\) _is_ \(\mu\)_-Holder continuous._
Proof.: The inequality in (a) is a direct consequence of (3.8). The statement in (b) follows from a covering argument as follows. Let \(z\in X\) be a point such that \(\operatorname{dist}(z,K)\leq\delta\). Let \(a\in K\) be such that \(\operatorname{dist}(z,K)=\operatorname{dist}(z,a)\). Given \(0<r\leq r_{0}\), we can cover \(K\) by finitely many \(B(a_{i},r/2)\), \(i\in I\). Then \(a\in B(a_{i},r/2)\) for some \(a_{i}\). Hence, \(B(a_{i},r/2)\subset B(a,r)\). Put \(c=\min_{i\in I}cap(K\cap B(a_{i},r/2))>0\). We have
\[cap(K\cap B(a,r))\geq cap(K\cap B(a_{i},r/2))\geq c.\]
It follows from monotonicity and the assumption that
\[V_{K}(z)\leq V_{K\cap B(a,r)}(z)\leq\frac{A}{c}\frac{\delta^{\mu}}{r^{2}}.\]
This completes the proof, since \(r\) is fixed.
**Remark 3.3**.: For the projective space \(\mathbb{P}^{n}\) equipped with the Fubini-Study metric, Lemma 3.1 answers a question of Sadullaev [10, Problem 2.13].
## 4. Regularity of weighted extremal functions
Let \(\phi\) be a real-valued continuous function on \(X\). We consider the weighted extremal function
\[V_{K,\phi}(z)=\sup\{v(z):v\in PSH(X,\omega),\,v_{|_{K}}\leq\phi\}.\]
When \(K=X\), this weighted extremal function (or envelope) is well studied. For example, it is proved by Tosatti [16] that if \(\phi\) is smooth then its envelope has the optimal \(C^{1,1}\)-regularity (see also Berman [1]). More generally, if \(\phi\) is \(C^{0,\alpha}(X)\) for \(0\leq\alpha\leq 1\), then the same regularity of the envelope is proved in [11]. However, the problem becomes very different for compact subsets, which are important for applications, and this is our main focus.
If \(E\subset F\) are compact non-pluripolar subsets and \(\phi\leq\psi\) are continuous functions, then we have the following monotonicity:
\[V_{F,\phi}\leq V_{E,\phi}\leq V_{E,\psi}. \tag{4.1}\]
Another useful property is the following one.
**Lemma 4.1**.: _Let \(K\subset X\) be a compact non-pluripolar subset and \(\phi\) a continuous function on \(X\)._
1. \(V_{K}^{*}+\inf_{K}\phi\leq V_{K,\phi}^{*}\leq V_{K}^{*}+\sup_{K}\phi.\)__
2. _Let_ \(\theta=\omega+dd^{c}\phi\) _and_ \(V_{\theta;K}=\sup\{v\in PSH(X,\theta):v_{|_{K}}\leq 0\}.\) _Then,_ \[V_{K,\phi}=V_{\theta;K}+\phi.\]
Proof.: It follows immediately from the definitions of \(V_{K}\) and \(V_{K,\phi}\).
Now we are ready to state the weighted version of Lemma 3.1.
**Proposition 4.2**.: _Let \(K\subset X\) be a compact non-pluripolar subset and \(\phi\in C^{0}(X,\mathbb{R})\)._
1. \(V_{K,\phi}\) _is continuous if and only if_ \(V_{K,\phi}^{*}\leq\phi\) _on_ \(K\)_._
2. _Assume_ \(\phi\) _is Holder continuous._ \(V_{K,\phi}\) _is Holder continuous on_ \(X\) _if and only if it is Holder continuous on_ \(K\)_, i.e., for every_ \(z\in X\) _with_ \(\operatorname{dist}(z,K)\leq\delta\leq\delta_{0}\)_,_ \[V_{K,\phi}(z)\leq\phi(z)+C\delta^{\mu},\] _where_ \(C\) _and_ \(0<\delta_{0}\leq 1\) _are uniform constants._
Proof.: Write \(V:=V_{K,\phi}\). Since \(K\) is compact, Demailly's regularization theorem implies also that \(V\) is lower semi-continuous. Therefore, (a) follows easily from the definition as \(V=V^{*}\).
Next, the proof of (b) is very similar to the one in Lemma 3.1; namely, we consider the regularization \(\rho_{t}V\) of the \(\omega\)-psh function \(V\) and keep the notations as in the proof of that lemma. The equation (3.4) becomes, for \(a\in K\),
\[(1+\delta^{\mu})V_{\delta}(a) \leq(\rho_{\delta}V(a)+c_{1}\delta^{2}+c_{1}\delta)\] \[\leq\sup_{|z-a|\leq\delta}V(z)+2c_{1}\delta\] \[\leq\phi(a)+C\delta^{\mu},\]
where in the last inequality we used the fact \(V(z)\leq\phi(z)+C\delta^{\mu}\) for \(\operatorname{dist}(z,K)\leq\delta\) and \(\phi\) is \(\mu\)-Holder continuous (we may decrease \(\mu>0\) if necessary). Therefore, \(V_{\delta}(a)\leq\phi(a)+(\|V_{\delta}\|_{L^{\infty}}+C)\delta^{\mu}\). Now by the definition of \(V\),
\[V_{\delta}(z)\leq V(z)+c_{2}\delta^{\mu},\]
where \(c_{2}\) depends additionally on the sup-norm of \(V\). This implies that \(V\) is Holder continuous on \(X\), arguing as in the proof of that lemma.
We are ready to prove the statements for weighted extremal functions.
Proof of Corollary 1.3.: (a) Assume \(V_{K}\) is continuous; we need to show that \(V=V_{K,\phi}\) is continuous. Thanks to Proposition 4.2-(a) we need to show that \(V^{*}\leq\phi\) on \(K\). In fact, let \(a\in K\) and \(\varepsilon>0\). We can choose \(r>0\) so small that \(\phi(x)\leq\phi(a)+\varepsilon\) in \(B(a,r)\). By the monotonicity (4.1), we have
\[V\leq V_{K\cap B(a,r),\phi(a)+\varepsilon}=\phi(a)+\varepsilon+V_{K\cap B(a,r) }\quad\text{on }X.\]
Since \(V_{K}\) is continuous, \(V_{K\cap B(a,r)}\) is continuous at \(a\) by Theorem 1.2-(a). Hence \(V^{*}(a)\leq\phi(a)+\varepsilon\). Letting \(\varepsilon\to 0\), we get \(V^{*}(a)\leq\phi(a)\). This proves (a).
(b) Now assume that \(V_{K}\) and \(\phi\) are Holder continuous. We wish to show that \(V=V_{K,\phi}\) is Holder continuous. Without loss of generality, we may assume that \(\phi\) is Holder continuous with the same exponent \(0<\mu\leq 1\). Otherwise, we just take the minimum of two exponents. From (a) we have \(V\leq\phi\) on \(K\) and it is continuous. To get the Holder continuity of \(V\), by Proposition 4.2-(b), it is sufficient to prove
\[V(z)\leq\phi(z)+C\delta^{\mu},\quad\operatorname{dist}(z,K)\leq\delta\leq \delta_{0},\]
for uniform constants \(C\) and \(0<\delta_{0}\leq 1\). Indeed, fix such a point \(w\) and let \(a\in K\) be such that \(0<\operatorname{dist}(w,a)=\operatorname{dist}(w,K)\leq\delta\). Let \(r>0\) be small and its value will be determined later. By Holder continuity, \(|\phi(z)-\phi(a)|\leq c_{3}r^{\mu}\) on \(B(a,r)\). Using this and the monotonicity (4.1), we obtain
\[\begin{split} V&\leq V_{K\cap B(a,r),\phi(a)+c_{3}r^ {\mu}}\\ &=\phi(a)+c_{3}r^{\mu}+V_{K\cap B(a,r)}\\ &\leq\phi(w)+c_{3}\delta^{\mu}+c_{3}r^{\mu}+V_{K\cap B(a,r)}. \end{split} \tag{4.2}\]
By Corollary 3.2-(a) and uniform density in capacity of \(K\) we derive
\[V_{K\cap B(a,r)}(w)\leq\frac{A}{cap(K\cap B(a,r))}\frac{\delta^{\mu}}{r^{2}} \leq\frac{A\,\delta^{\mu}}{\varkappa\,r^{q+2}} \tag{4.3}\]
for uniform constants \(A,\varkappa,q\) which are independent of \(r\) and \(a\). Now, we can choose \(r=\delta^{\frac{\mu}{\mu+2+q}}\) to conclude \(V(w)\leq\phi(w)+C\delta^{\mu^{\prime}}\), where \(\mu^{\prime}=\frac{\mu^{2}}{\mu+2+q}\). Hence, \(V\) is Holder continuous on \(X\).
**Remark 4.3**.: The above proof showed that if \(V_{K}\) is \(\mu\)-Holder continuous, then \(V_{K,\phi}\) is \(\mu^{\prime}\)-Holder continuous for \(\mu^{\prime}=\mu^{2}/(\mu+2+q)\).
**Remark 4.4**.: The inverse directions of Corollary 1.3 hold for weights which are quasi-plurisubharmonic functions. In particular, they are valid for \(C^{2}\)-smooth weights. Since the proof is very similar to that of the corollary, we only sketch it here. By Remark 2.8 we only need to show that \(V_{A\omega;K}\) is continuous (resp. Holder continuous) for \(A>0\) so large that \(\theta=A\omega+dd^{c}\phi\geq\omega\). Using the relation between the weighted and unweighted extremal functions in Lemma 4.1,
\[V_{K,\phi}=V_{\theta;K}+\phi,\]
it follows that the regularity of \(V_{K,\phi}\) is equivalent to the one of \(V_{\theta;K}\). Notice that \(\theta\) may not be smooth, we cannot simply find \(A^{\prime}\) so that \(\theta\leq A^{\prime}\omega\) and then conclude that \(V_{A^{\prime}\omega;K}\) is continuous (resp. Holder continuous). However, by the strict positivity of \(\theta\), the proof of the corollary shows that \(V_{\theta;K,-\phi}\) is continuous (resp. Holder continuous). Finally, using one more time the relation
\[V_{\theta;K,-\phi}=V_{A\omega;K}-\phi,\]
we get the continuity (resp. Holder continuity) of \(V_{K}\).
## 5. Regularity of compact sets
Thanks to the characterizations in Theorem 1.2, in order to study the regularity (continuity and Holder continuity) of the extremal functions we may assume that the compact set is contained in a holomorphic coordinate unit ball. By Remark 2.8, without loss of generality, we may restrict ourselves to compact subsets of \(\mathbb{C}^{n}\subset\mathbb{P}^{n}=:X\).
We first show that they are characterized by well-known notions in pluripotential theory. Afterwards we provide a large number of examples of compact sets possessing these properties.
Let us denote \(\rho=\frac{1}{2}\log(1+|z|^{2})\) which is the local potential of the Fubini-Study metric \(\omega\) on \(\mathbb{C}^{n}\) of the projective space \(\mathbb{P}^{n}\). The Lelong class in \(\mathbb{C}^{n}\) is given by
\[\mathcal{L}=\left\{f\in PSH(\mathbb{C}^{n}):f(z)-\rho(z)<c_{f}\right\}. \tag{5.1}\]
The Siciak-Zaharjuta extremal function associated to a non-pluripolar set \(E\) in \(\mathbb{C}^{n}\) is given by
\[L_{E}=\sup\left\{f\in\mathcal{L}(\mathbb{C}^{n}):f_{|_{E}}\leq 0\right\}\]
and its weighted version for a (real-valued) continuous function \(\phi\) on \(E\)
\[L_{E,\phi}=\sup\left\{f\in\mathcal{L}(\mathbb{C}^{n}):f_{|_{E}}\leq\phi\right\}. \tag{5.2}\]
An immediate property of this function is
\[L_{E}^{*}+\inf_{E}\phi\leq L_{E,\phi}^{*}\leq L_{E}^{*}+\sup_{E}\phi. \tag{5.3}\]
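For orientation, recall also the classical model case of a closed Euclidean ball (this explicit formula is used again in the proof of Lemma 5.10 below):
\[L_{\overline{B}(a,r)}(z)=\max\left\{\log\frac{|z-a|}{r},\,0\right\},\qquad z\in\mathbb{C}^{n},\]
so that, with this normalization, \(L_{\overline{B}(a,r)}\) is Lipschitz continuous with constant \(1/r\).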
Let us recall the following definition of Siciak [14, Definition 2.15].
**Definition 5.1**.: A compact subset \(K\subset\mathbb{C}^{n}\) is said to be
1. locally regular at a point \(a\in K\) if, for every \(r>0\), the extremal function \(L_{K\cap B(a,r)}\) is continuous at \(a\);
2. locally regular if it is locally regular at every point \(a\in K\).
This notion is stronger than the \(L\)-regularity (i.e., \(L_{K}\) is continuous) as shown in [14] and in [13]. A quantitative way to formulate it is as follows. For \(0\leq\delta\leq 1\) and a subset \(E\subset\mathbb{C}^{n}\) the modulus of continuity of \(L_{E}\) at \(a\in E\) is given by
\[\varpi^{\prime}_{E}(a,\delta)=\sup_{|z-a|\leq\delta}L_{E}(z).\]
Then, \(L_{E}\) is continuous at \(a\) if and only if \(\lim_{\delta\to 0}\varpi^{\prime}_{E}(a,\delta)=0\). Put
\[\varpi^{\prime}_{E}(\delta)=\sup\{\varpi^{\prime}_{E}(a,\delta):a\in E\}\]
which is the modulus of continuity of \(L_{E}\) over \(E\). It is a well-known fact due to Blocki that the modulus of continuity of \(L_{E}\) on \(E\) controls the modulus of continuity of \(L_{E}\) on \(\mathbb{C}^{n}\). Namely,
\[|L_{E}(z)-L_{E}(w)|\leq\varpi^{\prime}_{E}(|z-w|),\quad z,w\in\mathbb{C}^{n},\;|z-w|\leq 1. \tag{5.4}\]
We are especially interested in the Holder continuity case.
**Definition 5.2**.: Let \(q\geq 0\) be an integer and \(0\leq\mu\leq 1\). We say that a compact subset \(K\subset\mathbb{C}^{n}\) has
1. the local Holder continuity property (local HCP for short) of order \(q\) at \(a\in K\) if there exist constants \(C>0\) and \(0<r_{0}\leq 1\) (both independent of \(a\)) such that \[\varpi^{\prime}_{K\cap B(a,r)}(a,\delta)\leq\frac{C\delta^{\mu}}{r^{q}},\quad 0<\delta\leq 1,\,0<r<r_{0};\]
2. local HCP of order \(q\) if it has local HCP of order \(q\) at every point \(a\in K\).
Here it is important to know both the Holder exponent and the Holder coefficient. A weaker property is the HCP, which is well studied in the literature and only requires the Holder continuity of \(L_{K}\). This property has been extensively studied by many authors, with applications to the Markov inequality and approximation theory; see for example the surveys [10] and [11], and the most recent results on this property can be found in [1], [12]. We expect that many results from the study of HCP can be refined to obtain the local HCP of some order.
The above definition implies an important property of the given compact set.
**Lemma 5.3**.: _Let \(K\subset\mathbb{C}^{n}\) be a non-pluripolar compact subset. If \(K\) has local_ HCP _of order \(q\), then it has a uniform density in capacity in the sense of (2.4)._
Proof.: Without loss of generality we may assume that \(K\subset\Omega:=B(0,1)\). By the assumption there exist uniform constants \(q>0\) and \(C,r_{0}>0\) such that for every \(a\in K\),
\[\varpi^{\prime}_{K\cap B(a,r)}(a,\delta)\leq\frac{C\delta^{\mu}}{r^{q}},\quad 0 <\delta\leq 1,\,0<r<r_{0}.\]
Let us fix \(\delta=1\). It follows that, for \(0<r<r_{0}\),
\[\sup_{\Omega}L_{K\cap B(a,r)}(z)\leq C/r^{q}.\]
It follows from the inequality (2.5) that
\[\frac{cap^{\prime}(K\cap B(a,r),\Omega)}{r^{nq}}\geq\frac{1}{C}.\]
This finishes the proof.
It is a well-known fact that there is a 1-1 correspondence between \(\mathcal{L}(\mathbb{C}^{n})\) and \(PSH(\mathbb{P}^{n},\omega)\), where \(\omega\) is the Fubini-Study metric (see e.g. [1]). Using this we can express the extremal function on the projective space \(\mathbb{P}^{n}\), on the local chart \(\mathbb{C}^{n}\), in terms of a Siciak-Zaharjuta extremal function defined in \(\mathbb{C}^{n}\) as follows. Recall that the local potential of \(\omega\) is given by
\[\omega=dd^{c}\rho\quad\text{on $\mathbb{C}^{n}$}.\]
By the very definition of two extremal functions, for every compact set \(E\subset\mathbb{C}^{n}\),
\[L_{E,\rho}(z)=V_{E}(z)+\rho(z),\quad z\in\mathbb{C}^{n}. \tag{5.5}\]
We are ready to prove the equivalences of these notions for compact subsets in \(\mathbb{C}^{n}\).
Proof of Theorem 1.4.: (a) Assume that \(E\) is locally \(L\)-regular at \(a\in E\). The argument in the proof of Corollary 1.3-(a) applied to \(L_{E}\) shows that the local \(L\)-regularity implies that \(L_{E,\rho}\) is continuous on \(\mathbb{C}^{n}\), which was first proved in [12, Proposition 2.16]. Hence, \(V_{E}\) is continuous by the identity (5.5) and Corollary 2.4.
Conversely, assume \(V_{E}\) is continuous. Let \(a\in E\) and \(B_{r}:=B(a,r)\subset\mathbb{C}^{n}\) be a closed ball of radius \(r>0\). By Theorem 1.2, we have that \(V_{E\cap B_{r}}\) is continuous. It follows from the identity (5.5) that \(L_{E\cap B_{r},\rho}\) is continuous for every \(r>0\). Hence, by the definition of the weighted extremal function in (5.2),
\[L_{E\cap B_{r},\rho}\leq\rho\quad\text{on }E\cap B_{r}.\]
Fix a ball \(B:=B(a,r_{0})\). Using (5.3) we have, for every \(0<r<r_{0}\),
\[L_{E\cap B}^{*}(a)\leq L_{E\cap B_{r}}^{*}(a)\leq L_{E\cap B_{r},\rho}^{*}(a)-\inf_{E\cap B_{r}}\rho\leq\rho(a)-\inf_{E\cap B_{r}}\rho.\]
Since \(\rho\) is continuous on \(\mathbb{C}^{n}\), letting \(r\to 0\) we conclude that \(L^{*}_{E\cap B}(a)=0\). This means that \(L_{E\cap B}\) is continuous at \(a\), and the proof of (a) is complete.
(b) For the sufficient condition, let us assume \(E\) has local \(\mu\)-HCP of order \(q\). This means that for every \(a\in E\) and \(0<\delta\leq 1\),
\[L_{E\cap B(a,r)}(z)\leq\frac{C\delta^{\mu}}{r^{q}},\quad\text{dist}(z,a)\leq\delta. \tag{5.6}\]
Then, Lemma 5.3 implies that \(E\) has a uniform density in capacity. Furthermore, by monotonicity, the Lipschitz continuity of \(\rho\), and (5.6) we infer
\[L_{E,\rho}(z)\leq L_{E\cap B(a,r),\rho(a)+c_{3}r}=\rho(a)+c_{3}r+L_{E\cap B(a, r)}\leq\rho(a)+c_{3}r+\frac{C\delta^{\mu}}{r^{q}},\]
where \(c_{3}\) is the Lipschitz norm of \(\rho\) on the ball \(B(0,R)\) containing \(E\). Hence, for \(\text{dist}(z,E)\leq\delta\) and \(0<r<r_{0}\),
\[L_{E,\rho}(z)\leq\rho(z)+2c_{3}r+\frac{C\delta^{\mu}}{r^{q}}.\]
Now, we can choose \(r=\delta^{\epsilon}\) with \(\epsilon=\mu/(q+1)\) to conclude that
\[L_{E,\rho}(z)-\rho(z)\leq C\delta^{\mu^{\prime}},\quad\mu^{\prime}=\frac{\mu^ {2}}{1+q}.\]
Hence, the Holder continuity of \(V_{E}\) follows from the identity (5.5).
Conversely, assume \(V_{E}\) is \(\mu\)-Holder continuous on \(\mathbb{P}^{n}\) and \(E\) has a uniform density in capacity. Let \(a\in E\) and fix a ball \(B=B(a,r_{0})\) as in (a). By the comparisons between the extremal functions (5.3) and (5.5), we have for \(0<r<r_{0}\),
\[L_{E\cap B}(z)\leq L_{E\cap B_{r},\rho}(z)-\inf_{E\cap B_{r}}\rho=V_{E\cap B_{ r}}(z)+\rho(z)-\inf_{E\cap B_{r}}\rho.\]
The right hand side can be estimated by Corollary 3.2-(a) and the uniform density in capacity as follows. For \(\text{dist}(z,a)\leq\delta\leq\delta_{0}\),
\[V_{E\cap B_{r}}(z)\leq\frac{A}{cap(E\cap B_{r})}\frac{\delta^{\mu}}{r^{2}}\leq\frac{A}{\varkappa}\frac{\delta^{\mu}}{r^{2+q}}.\]
Observe also that
\[\rho(z)-\inf_{E\cap B_{r}}\rho =\rho(z)-\rho(x)\] \[\leq c_{3}(|x-a|+|z-a|)\] \[\leq c_{3}(r+\delta),\]
where \(\rho(x)=\min_{B(a,r)}\rho\). Altogether we obtain for every \(0<r\leq r_{0}\),
\[L_{E\cap B}\leq\frac{A}{\varkappa}\frac{\delta^{\mu}}{r^{2+q}}+c_{3}(r+\delta).\]
Choosing \(r=r_{0}\delta^{\frac{\mu}{3+q}}\) we conclude that \(L_{E\cap B}(z)\leq C\delta^{\frac{\mu}{3+q}}/r_{0}^{2}\). Notice that the constant \(C=C(A,c_{3})\) is independent of the point \(a\) and of \(r\). The proof of the necessary condition in (b) follows.
### Locally \(L\)-regular sets
In view of Theorem 1.4 and applications in [1] and [10] we make an effort to collect in this section many well-known examples of locally \(L\)-regular sets. This is a classical topic but the results are scattered in many different places.
**Example 5.4** (accessibility criterion).: This criterion is due to Plesniak [11]. It provides the following typical example. Let \(\Omega\subset\mathbb{C}^{n}\) be an open bounded subset with \(C^{1}\)-boundary. Then, \(\bar{\Omega}\) is locally \(L\)-regular (see also [13, Corollary 5.3.13] and [14]). This criterion has been generalized in [11] to subanalytic subsets of \(\mathbb{C}^{n}\) and later in [11] to a more general setting.
**Example 5.5** (Siciak).: Let us write \(\mathbb{K}\) for either \(\mathbb{K}=\mathbb{R}\) or \(\mathbb{K}=\mathbb{C}\), and let \(B(x,r)\) be the closed ball in \(\mathbb{K}^{n}\) with center \(x\) and radius \(r>0\). Let \(E\subset\mathbb{K}^{n}\) be a compact subset.
1. _Cusps:_ let \(h:[0,1]\to\mathbb{K}^{n}\) and \(r:[0,1]\to\mathbb{R}^{+}\) be continuous functions such that \(r(0)=0\) and \(r(t)>0\), \(0<t\leq 1\). Let \(a=h(0)\). A cusp with vertex \(a\) is a compact subset of \(\mathbb{K}^{n}\) given by \[C(h,r):=\bigcup_{0\leq t\leq 1}B(h(t),r(t)).\] We say that \(E\) has a cusp \(C(h,r)\) at \(a=h(0)\in E\) if \(C(h,r)\subset E\). It is proved in [18, Proposition 7.6] that if \(E\) has a cusp \(C(h,r)\) at \(a\in E\) such that \(r(t)=Mt^{m}\) and \(|h(t)-h(0)|\leq At^{q}\), \(0\leq t\leq 1\), where \(M,m,A\) and \(q\) are positive uniform constants, then \(E\) is locally \(L\)-regular at \(a\).
2. _Corkscrew:_\(E\) is said to have a corkscrew of order \(s>0\) at \(a\in E\) if there exists \(r_{0}\in(0,1)\) such that for every \(0<r<r_{0}\) we can find \(a^{\prime}\in\mathbb{K}^{n}\) for which \(B(a^{\prime},r^{s})\subset B(a,r)\cap E\). Let \(C(h,r)\) be a cusp at \(a=h(0)\). By the triangle inequality \[|x-h(0)|\leq|x-h(t)|+|h(t)-h(0)|,\] it follows that if \(r(t)=Mt^{m}\) and \(|h(t)-h(0)|\leq At^{q}\), then this cusp has a corkscrew of order \(s=m/\min\{m,q\}\geq 1\). The conclusion is that if \(E\) has a corkscrew of order \(s>0\) at \(a\in E\), then it is locally \(L\)-regular at \(a\) [18, Proposition 7.10].
**Remark 5.6**.: A good characterization of locally \(L\)-regular compact sets in terms of capacity is still missing. The criterion in Theorem 1.4 may provide an approach to this problem from global pluripotential theory. It is worth recalling from Cegrell [12] that if \(E\subset\mathbb{R}^{n}\equiv\mathbb{R}^{n}+i\cdot 0\subset\mathbb{C}^{n}\) is a compact subset, then the \(L\)-regularity and local \(L\)-regularity coincide.
### Local \(\operatorname{HCP}\) sets
Here we require the local \(\operatorname{HCP}\) together with a precise estimate of the Holder coefficient. Let \(E\subset\mathbb{K}^{n}\) be a compact subset (\(\mathbb{K}=\mathbb{R}\) or \(\mathbb{K}=\mathbb{C}\)) and \(a\in E\). Following Siciak [18] we consider the following
**Definition 5.7** (Condition **(P))**.: For each point \(a=(a_{1},...,a_{n})\in E\) there exist compact connected subsets \(\ell_{1},...,\ell_{n}\subset\mathbb{C}\) and an affine non-singular mapping \(h:\mathbb{C}^{n}\to\mathbb{C}^{n}\) satisfying
1. \(a\in h(\ell_{1}\times\cdots\times\ell_{n})\subset E\);
2. \(\|\ell_{j}\|\geq d>0\), \(j=1,...,n\) ;
3. \(\|Dh\|\geq m>0\),
where \(\|\ell_{j}\|\) is the diameter of \(\ell_{j}\), the constants \(d\) and \(m\) do not depend on the point \(a\).
Note that \(Dh\) denotes the Frechet derivative of \(h\); since \(h\) is affine, \(Dh\) is a constant linear map from \(\mathbb{C}^{n}\) to \(\mathbb{C}^{n}\), and the lower bound \(m\) on \(\|Dh\|\) does not depend on the point \(a\). Roughly speaking, condition (P) for \(E\) means that at each point \(a\in E\) there is an _affine cube_ of uniform size with a vertex at \(a\) which is contained in \(E\).
A basic result [14, Proposition 5.1] says that for a compact subset \(E\subset\mathbb{C}^{n}\) satisfying the condition (P),
\[L_{E}(z)\leq\frac{4\sqrt{1+\|E\|}}{md}\delta^{\frac{1}{2}} \tag{5.7}\]
holds for every \(\operatorname{dist}(z,E)\leq\delta\) and \(0<\delta\leq 1\), where \(\|E\|\) is the diameter of \(E\).
This precise estimate of both the Holder coefficient and the Holder exponent allows us to study the local HCP of many classes of compact sets in \(\mathbb{K}^{n}\). Let us describe them again, emphasizing the locality.
**Example 5.8**.: Let \(\Omega\subset\mathbb{K}^{n}\) be a bounded domain. Let \(E\subset\mathbb{K}^{n}\) be a compact set.
1. _Lipschitz domain_: Assume \(\Omega\) has Lipschitz boundary and \(a\in\overline{\Omega}\). Then for every \(r>0\), the compact set \(\overline{\Omega}\cap B(a,r)\) satisfies the condition (P) at each point \(a\).
2. _Geometrical condition_: Assume there exists \(r>0\) such that for every \(a\in E\), there is a point \(a^{\prime}\in E\) for which the convex hull of the set \(\{a\}\cup B(a^{\prime},r)\) is contained in \(E\). Then, \(E\) is local HCP of order \(q=n\).
3. _Uniform interior sphere condition_: there exists \(r>0\) such that for each \(a\in\partial\Omega\) there is \(B(a^{\prime},r)\subset\Omega\) and \(B(a^{\prime},r)\cap(\mathbb{K}^{n}\setminus\Omega)=\{a\}\). Then, \(\overline{\Omega}\) is local HCP with the exponent \(\mu=1\) and of order \(q=1\) if \(\mathbb{K}=\mathbb{C}\); or with exponent \(\mu=1/2\) and of order \(q=2\) if \(\mathbb{K}=\mathbb{R}\).
Let us give the explanation of the above examples.
(a) A bounded domain with Lipschitz boundary satisfies the so-called Property (\(\mathbf{H_{2}}\)) (see [1, page 166-167]), which is a local property at each boundary point. Namely, there exists a non-empty parallelepiped \(\pi_{0}\) such that each point \(a\in\partial\Omega\) is a vertex of a parallelepiped \(\pi_{a}\) congruent with \(\pi_{0}\) (with respect to orthogonal transformations and translations) satisfying \(\pi_{a}\subset\overline{\Omega}\). This property clearly implies the condition (P) at \(a\in\partial\Omega\) for \(\overline{\Omega}\cap B(a,r)\). Together with the estimate (5.7) we conclude the local \(\frac{1}{2}\)-HCP of order \(q=1\) of such subsets.
(b) Next, by [14, Proposition 5.3] the geometric condition implies the condition (P) with \(\ell_{i}=[0,1]\subset\mathbb{C}\), \(i=1,...,n\), and
\[m=\Big{(}\frac{r}{n^{3}}\Big{)}^{n}\,\frac{1}{\|E\|^{n-1}}. \tag{5.8}\]
(c) Finally, the uniform interior sphere condition implies the geometric condition. However, we will give a proof of the improvement of the exponent and order in Lemma 5.10 below.
One important case of the geometrical condition is that it is satisfied for any convex compact set with non-void interior in \(\mathbb{K}^{n}\), where \(\mathbb{K}=\mathbb{R}\) or \(\mathbb{K}=\mathbb{C}\). Hence, by (5.7) and (5.8) (see also [14, Remark 5.4]) we have
**Corollary 5.9** (Siciak).: _A convex compact subset in \(\mathbb{K}^{n}\) satisfying the geometrical condition has local HCP with the (optimal) exponent \(\mu=1/2\) and of order \(q=n\)._
Let us give the proof of the statement in Example 5.8-(c). Notice that the uniform interior sphere condition is satisfied by all bounded domains with \(C^{1,1}\)-boundary.
**Lemma 5.10**.: _Let \(\Omega\subset\mathbb{K}^{n}\) be a bounded domain satisfying the uniform interior sphere condition._
* _If_ \(\mathbb{K}=\mathbb{C}\)_, then_ \(\overline{\Omega}\) _has local_ \(\mathrm{HCP}\) _with the optimal Holder exponent_ \(\mu=1\) _and of order 1._
* _If_ \(\mathbb{K}=\mathbb{R}\) _and_ \(\mathbb{R}^{n}\equiv\mathbb{R}^{n}+i\cdot 0\subset\mathbb{C}^{n}\)_, then_ \(\overline{\Omega}\) _has local_ \(\mathrm{HCP}\) _with the optimal exponent_ \(\mu=1/2\) _and of order_ \(q=2\)_._
Proof.: (a) Let \(r>0\) be fixed. The uniform interior sphere condition means that there exists a closed ball \(B(a^{\prime},r_{0})\) such that
\[B(a^{\prime},r_{0})\cap(\mathbb{C}^{n}\setminus\Omega)=\{a\},\]
where \(r_{0}>0\) is a uniform constant. In particular, \(|a^{\prime}-a|=r_{0}\). By decreasing \(r_{0}\) we may assume \(r/2\leq r_{0}\leq r\) and dilating this ball we may assume that \(B(a^{\prime},r_{0})\subset\Omega\cap B(a,r)\). Hence,
\[L_{\overline{\Omega}\cap B(a,r)}(z)\leq L_{B(a^{\prime},r_{0})}(z)=\max\{\log(|z-a^{\prime}|/r_{0}),0\},\]
where the second identity uses the explicit formula for the extremal function of a ball (see, e.g., [10, Example 5.1.1]). For \(w\in\mathbb{C}^{n}\) with \(\operatorname{dist}(w,a)\leq\delta\) and \(\delta\) small, we have
\[|w-a^{\prime}|\leq|w-a|+|a-a^{\prime}|\leq\delta+r_{0}.\]
It follows that
\[L_{\overline{\Omega}\cap B(a,r)}(w)\leq\log(1+\delta/r_{0})\leq\delta/r_{0} \leq 2\delta/r.\]
This means that \(L_{\overline{\Omega}\cap B(a,r)}\) is Lipschitz continuous at \(a\), with a bound \(2/r\) on the Lipschitz constant that is independent of the point \(a\). Thus, \(\overline{\Omega}\) has local \(\mathrm{HCP}\) with the exponent \(\mu=1\) and of order \(q=1\).
(b) Assume now \(\overline{\Omega}\subset\mathbb{R}^{n}+i\cdot 0\subset\mathbb{C}^{n}\). The proof goes along the same lines as above. Notice that if \(a\in\overline{\Omega}\), then \(B(a,r)\cap(\mathbb{R}^{n}+i\cdot 0)\) is the real ball \(\widetilde{B}(a,r)\subset\mathbb{R}^{n}\), and we have an explicit formula (see e.g. [10, Theorem 5.4.6]) for such a ball via \(L_{\widetilde{B}(a,r)}(z)=L_{\widetilde{B}(0,1)}(f(z))\), where
\[L_{\widetilde{B}(0,1)}(z)=\frac{1}{2}\log\left(\mathbf{h}\left(|z|^{2}+|\left\langle z,\bar{z}\right\rangle-1|\right)\right),\quad f(z)=(z-a)/r,\]
and \(\mathbf{h}(x)=x+(x^{2}-1)^{\frac{1}{2}}\) for \(x\geq 1\).
Notice that there is another kind of example arising from Example 5.5, namely cusp and corkscrew sets, because it is possible to derive precise estimates on the Holder coefficient there. We refer the reader to [11, Proposition 6.5] for more details.
### Uniformly polynomial cuspidal sets
In this section we follow the method of Pawlucki and Plesniak [14, Theorem 4.1] to study uniformly polynomial cuspidal sets. The improvement is a precise estimate on the Holder coefficient. This class contains, for example, all bounded convex sets in \(\mathbb{K}^{n}\), where \(\mathbb{K}=\mathbb{R}\) or \(\mathbb{K}=\mathbb{C}\), with non-void interior, and all bounded domains in \(\mathbb{K}^{n}\) with Lipschitz boundary (see Example 5.8 and also [14, page 469]).
Now we focus on compact sets with cusps in \(\mathbb{R}^{n}\), considered as a natural subset of \(\mathbb{C}^{n}\). A compact subset \(E\subset\mathbb{R}^{n}\) is called _uniformly polynomial cuspidal_
(UPC for short) if there exist positive constants \(M\), \(m\) and a positive integer \(d\) such that for each \(x\in E\), one may choose a polynomial map
\[h_{x}:\mathbb{R}\rightarrow\mathbb{R}^{n},\quad\deg h_{x}\leq d\]
satisfying
\[h_{x}(0)=x\quad\text{and}\quad h_{x}([0,1])\subset E;\] \[\text{dist}(h_{x}(t),\mathbb{R}^{n}\setminus E)\geq Mt^{m}\quad \text{for all }x\in E,\text{ and }t\in[0,1],\]
An important property of the UPC sets is that if \(a\in E\), then
\[E_{a}=\bigcup_{0\leq t\leq 1}D(h_{a}(t),Mt^{m})\subset E, \tag{5.9}\]
where \(D(p,r)=\{x\in\mathbb{R}^{n}:|x_{1}-p_{1}|\leq r,...,|x_{n}-p_{n}|\leq r\}\) denotes the closed cube.
**Remark 5.11**.:
* Without loss of generality we may assume that the exponent \(m\) is a positive integer. Otherwise we will choose the smallest positive integer larger than \(m\) instead.
* It is not clear from the definition that the norm of the coefficients \(\sum_{\ell=0}^{d}\|h_{x}^{(\ell)}(0)\|\) of \(h_{x}\) is uniformly bounded on \(E\). Fortunately, in many interesting examples of UPC sets this assumption is satisfied.
**Theorem 5.12**.: _Let \(E\subset\mathbb{R}^{n}\) be a compact_ UPC _subset such that \(\sum_{\ell=0}^{d}\|h_{x}^{(\ell)}(0)\|\) is uniformly bounded on \(E\). Then, \(E\) is local_ HCP _of some order \(q\)._
It is worth emphasizing that, by [10, Corollary 6.6, Remark 6.5], all compact fat subanalytic subsets of \(\mathbb{R}^{n}\) satisfy the additional assumption. Thus, we obtain
**Corollary 5.13**.: _A compact fat subanalytic subset in \(\mathbb{R}^{n}\) has local HCP of order \(q\geq 0\)._
This corollary combined with Theorem 1.4-(b) gives the proof of Theorem 1.5.
Now, to proceed with the proof of Theorem 5.12 we need the following fact
**Lemma 5.14**.: _Let \(E\subset\mathbb{C}^{k}\) be a compact subset and \(h:\mathbb{C}^{k}\rightarrow\mathbb{C}^{n}\) be a complex valued polynomial mapping of degree \(d\). Then, for every \(w\in\mathbb{C}^{k}\),_
\[L_{h(E)}(h(w))\leq d\cdot L_{E}(w).\]
Proof.: Since \(E\) is compact, so is \(h(E)\). Now, let \(v\in\mathcal{L}(\mathbb{C}^{n})\) be such that \(v\leq 0\) on \(h(E)\). Since \(\deg h\leq d\),
\[\lim_{|w|\rightarrow+\infty}(v\circ h(w)-d\log|w|) \leq\lim_{|w|\rightarrow+\infty}(v\circ h(w)-\log|h(w)|)+c_{h}\] \[\leq c_{v}+c_{h},\]
where the second inequality used the assumption \(v\in\mathcal{L}(\mathbb{C}^{n})\). Hence, \(v\circ h/d\in\mathcal{L}(\mathbb{C}^{k})\) and it is non-positive on \(E\). It follows that \(v\circ h\leq d\cdot L_{E}\), and this finishes the proof.
Proof of Theorem 5.12.: In what follows the space \(\mathbb{R}^{n}\) is identified with the subset \(\mathbb{R}^{n}+i\cdot 0\) of \(\mathbb{C}^{n}\). Let \(a\in E\) be fixed and denote by \(D(a,r)\) a closed polydisc. Observe first that the set \(E_{a}\) defined in (5.9) satisfies
\[E_{a}=\left\{h(t)+Mt^{m}\left(x_{1}^{m},...,x_{n}^{m}\right):t\in[0,1],\,|x_{i}|\leq 1,\,i=1,...,n\right\}\subset\mathbb{R}^{n}.\]
Let \(S\subset\mathbb{R}\times\mathbb{R}^{n}\) be the pyramid
\[S=\{(t,tx_{1},...,tx_{n})\in\mathbb{R}\times\mathbb{R}^{n}:t\in[0,1],|x_{i}|\leq 1,i=1,...,n\}.\]
This is a convex set (with non-void interior in \(\mathbb{R}^{n+1}\)), which implies that it has HCP. The crucial observation is that our cusp is the image of this set under the polynomial map \(p:\mathbb{C}\times\mathbb{C}^{n}\rightarrow\mathbb{C}^{n}\) given by
\[p(t,z)=h(t)+M(z_{1}^{m},...,z_{n}^{m}).\]
Clearly, \(p(S)=E_{a}\) and \(p(0,0)=h(0)=a\).
To show the _local_ HCP of some order at \(a\) we need to shrink that pyramid a bit. We claim that for each \(0<r\leq 1\), we can find \(0<r^{\prime}\leq r\) such that the smaller pyramid
\[S(r^{\prime}):=\{(t,tx)\in\mathbb{R}\times\mathbb{R}^{n}:t\in[0,r^{\prime}],|x_ {1}|\leq r^{\prime},...,|x_{n}|\leq r^{\prime}\}\]
satisfies
\[S(r^{\prime})\subset p^{-1}(E_{a}\cap D(a,r)). \tag{5.10}\]
Indeed, for \((t,tv)\in S(r^{\prime})\subset S\), the point \(x=h(t)+Mt^{m}\cdot v^{m}\in E_{a}\). Moreover,
\[\begin{split}|x-a|&=|h(t)+Mt^{m}\cdot v^{m}-h(0)|\\ &\leq|h(t)-h(0)|+Mt^{m}|v|^{m}\\ &\leq\left(\sum_{\ell=0}^{d}\|h^{(\ell)}(0)\|\right)r^{\prime}+nMr^{\prime},\end{split}\]
where we use the fact that \(m\) is a positive integer. Thus we can choose
\[r^{\prime}=\frac{r}{\sum_{\ell=0}^{d}\|h^{(\ell)}(0)\|+nM}. \tag{5.11}\]
This is the only place where we need the uniform bound on the sum \(\sum_{\ell=0}^{d}\|h^{(\ell)}(0)\|\) that does not depend on the point \(a\). Otherwise, the Holder norm of \(L_{E\cap B(a,r)}\) would depend on \(a\).
Since \(S(r^{\prime})\) contains a ball of radius \(\tau_{n}r^{\prime}\) in \(\mathbb{R}^{n+1}\) with a numerical constant \(\tau_{n}\), it follows from Corollary 5.9 that \(S(r^{\prime})\) has local \(\frac{1}{2}\)-HCP of order \(q=n+1\), i.e.,
\[L_{S(r^{\prime})}(t,v)\leq\frac{C\delta^{\frac{1}{2}}}{r^{\prime n+1}} \tag{5.12}\]
for every \((t,v)\in S_{\delta}(r^{\prime}):=\{\zeta\in\mathbb{C}^{n+1}:\operatorname{ dist}(\zeta,S(r^{\prime}))\leq\delta\}\) and \(C\) does not depend on \(r^{\prime}\) and \(\delta\).
Moreover, for all \(0<\delta\leq r^{\prime}\), we have
\[P(\delta)=\{(t,z)\in\mathbb{C}\times\mathbb{C}^{n}:|t|\leq\delta,|z_{i}|\leq \delta,i=1,...,n\}\subset S_{\delta}(r^{\prime}).\]
Then, for such a small \(\delta\), the following inclusions hold
\[B(a,M\delta^{m})\subset p(\{0\}\times D(0,\delta))\subset p(P(\delta))\subset p (S_{\delta}(r^{\prime})). \tag{5.13}\]
Now we are ready to conclude the local HCP of \(\overline{E}_{a}\). Let \(z\in\mathbb{C}^{n}\) be such that \(\operatorname{dist}(z,a)\leq M\delta^{m}\). By (5.13) we have \(z=p(t,v)\in\mathbb{C}^{n}\) for some \((t,v)\in S_{\delta}(r^{\prime})\).
Furthermore, by (5.10) we have \(F:=p(S(r^{\prime}))\subset\overline{E}_{a}\cap D(a,r)\). Combining these facts with (5.12) we obtain
\[L_{\overline{E}_{a}\cap D(a,r)}(z) \leq L_{F}(z)\] \[=L_{F}(p(t,v))\] \[\leq\max(d,m)\cdot L_{S(r^{\prime})}(t,v)\] \[\leq\frac{C\delta^{\frac{1}{2}}}{r^{\prime n+1}},\]
where for the third inequality we used Lemma 5.14, and the last constant \(C\) does not depend on \(r^{\prime}\) and \(a\). Rescaling \(\delta:=M\delta^{m}\leq r^{\prime}\), we obtain
\[L_{\overline{E}_{a}\cap D(a,r)}(z)\leq\frac{C\delta^{\frac{1}{2m}}}{r^{\prime n +1}}\]
for every \(\operatorname{dist}(z,a)\leq\delta\), where \(0<\delta\leq r^{\prime}\). Notice that \(r\) and \(r^{\prime}\) are comparable by (5.11). Hence, \(\overline{E}_{a}\) has local HCP at \(a\) with the exponent \(\mu=1/(2m)\) and of order \(q=n+1\), and so does \(E\supset E_{a}\). This finishes the proof of the theorem.
### Compact sets slid transversally by analytic half-discs
In this section we provide another class of compact subsets (of Lebesgue measure zero but admitting a geometric structure) that have local HCP of some order. We will see later that they contain generic submanifolds as important examples.
Let
\[\operatorname{U}_{+}=\{\tau\in\mathbb{C}:|\tau|\leq 1,\;\operatorname{Im} \tau\geq 0\}\]
denote the (closed) upper half of the closed unit disc and we denote for \(0<\delta\leq 1\)
\[\operatorname{U}(\delta)=\{\tau\in\mathbb{C}:|\tau|\leq\delta\}.\]
Motivated by [10] and [21] we consider the following class of sets.
**Definition 5.15**.: Let \(E\subset\mathbb{C}^{n}\) be a closed set and \(a\in E\). Assume that there are uniform constants \(M,m\) and \(\delta_{0}>0\) (which do not depend on \(a\)) such that for every \(0<\delta<\delta_{0}\) and \(x\in\mathbb{C}^{n}\) with \(\operatorname{dist}(x,a)\leq M\delta^{m}\) we can find a holomorphic map
\[f_{a}:\overset{\circ}{\operatorname{U}}_{+}\to\mathbb{C}^{n}\]
satisfying:
1. \(f_{a}\) is continuous on \(\operatorname{U}_{+}\);
2. \(f_{a}(0)=a\) and \(f_{a}([-1,1])\subset E\);
3. \(x\in f_{a}(\operatorname{U}_{+}\cap\operatorname{U}(\delta))\).
If these conditions hold, we say that \(E\) can be _slid transversally by analytic half-discs at \(a\in E\)_. Moreover, \(E\) is said to be slid transversally by analytic half-discs if \(E\) can be slid transversally at every point \(a\in E\).
Condition (b) says that the analytic half-disc \(f_{a}:\operatorname{U}_{+}\to\mathbb{C}^{n}\) is attached to \(E\) along the interval \([-1,1]\). Condition (c) means that the analytic half-disc \(f_{a}\) meets \(E\) transversally, in the sense that \(f_{a}(\operatorname{U}_{+}\cap\operatorname{U}(\delta))\) reaches every point \(x\) at distance at most \(M\delta^{m}\) from \(a\).
Note that the idea of an analytic disc attached to a generic submanifold of \(\mathbb{C}^{n}\) is classical in CR-geometry (see [1]). However, the above definition emphasizes the quantitative estimates. Also, in this definition it is important to require that the constants \(M,m\) and \(\delta_{0}\) are independent of the point \(a\); for the applications
later we will need this independence at all points of the compact set. Geometrically, it says that at each point we can attach transversally to the set a closed half-disc of uniform radius; consequently, the family of analytic half-discs fills a neighborhood of the given point (in the ambient space). This will be clearly seen in the examples below.
**Remark 5.16**.: Suppose \(E\) satisfies (a), (b) and (c) above at \(a\in E\) with an analytic half-disc \(f_{a}\) whose \(\mu\)-Holder coefficient at \(0\), where \(0<\mu\leq 1\), satisfies
\[A=A(\mu):=\sup_{\tau\in[-1,1]}\frac{|f_{a}(\tau)-f_{a}(0)|}{|\tau|^{\mu}}<C\]
for a uniform constant \(C\) which does not depend on the point \(a\). Then, for every small \(r>0\), the set \(E\cap B(a,r)\) can be slid transversally at \(a\) by an analytic half-disc \(g_{a}:\mathrm{U}_{+}\to\mathbb{C}^{n}\) given by
\[g_{a}(\tau)=f_{a}\left((r/A)^{\frac{1}{\mu}}\tau\right).\]
Moreover, if \(M,m,\delta_{0}\) are uniform constants satisfying \((c)\) for \(E\) at \(a\), then the constants \(M(r/A)^{\frac{m}{\mu}},m,\delta_{0}\) satisfy (c) for \(E\cap B(a,r)\) at \(a\).
In our setting the half-disc is attached along the real axis, which is slightly different from [11] and [12]. However, we can use a simple conformal map to convert the results obtained there to our setting, as we are only interested in what happens near the origin. Also, our definition seems to be natural, as the following examples show.
**Example 5.17**.:
1. The simplest example is \(E=\mathbb{R}\subset\mathbb{C}\), where we can choose \(M=m=1\), \(\delta_{0}=1\) and, depending on the position of the target point \(x\), the analytic half-disc \[f_{a}(z)=\begin{cases}a+z&\quad\text{if }\operatorname{Im}x\geq 0,\\ a-z&\quad\text{if }\operatorname{Im}x<0.\end{cases}\] Furthermore, \[A=\sup_{\tau\in[-1,1]}\frac{|f_{a}(\tau)-f_{a}(0)|}{|\tau|}=1.\] Next, consider \(E:=[-1,1]\subset\mathbb{R}\subset\mathbb{C}\). Then, we can choose \(M=1\), \(m=2\), \(\delta_{0}=1\). In fact, for \(f(z)=z^{2}\), we have \(f([-1,1])=[0,1]\). Consider the analytic function \(f_{a}:\mathrm{U}_{+}\to\mathbb{C}\) given by \[f_{a}(z)=\begin{cases}a+z^{2}&\text{for }a\in[-1,0],\\ a-z^{2}&\text{for }a\in[0,1].\end{cases}\] Moreover, \(x\in f_{a}(\mathrm{U}_{+}\cap\mathrm{U}(\delta))\) for every \(x\in\mathbb{C}\) with \(|x-a|\leq\delta^{2}\), where \(0<\delta<\delta_{0}=1\). This implies that \(E\) is slid transversally by analytic half-discs. We can also see that \[A=\sup_{\tau\in[-1,1]}\frac{|f_{a}(\tau)-f_{a}(0)|}{|\tau|}\leq 1.\]
2. In a similar fashion we can see that the closure of a bounded domain with \(C^{2}\)-boundary, or a compact cube, in \(\mathbb{R}^{n}=\mathbb{R}^{n}+i\cdot 0\subset\mathbb{C}^{n}\) is a set satisfying Definition 5.15. Using this we get another proof of the local HCP with the exponent \(\mu=1/2\) and of order \(q=2\) (see Lemma 5.10).
We will need the following fact about the harmonic measure.
**Lemma 5.18**.: _Denote \(E=[-1,1]\subset\mathbb{R}\subset\mathbb{C}\). Let \(h\) be the harmonic extension of \(1-\mathbf{1}_{E}\) from \(\partial\mathrm{U}_{+}\) into \(\mathrm{U}_{+}\). Then,_
\[h(z)=\sup\left\{v\in SH(\overset{\circ}{\mathrm{U}_{+}})\cap C(\mathrm{U}_{+} ):v\leq 1-\mathbf{1}_{E}\text{ on }\partial\mathrm{U}_{+}\right\},\]
_and \(h\) is Lipschitz continuous at \(0\in E\) (or on a proper compact subinterval)._
Proof.: The harmonicity of the envelope is a classical result. Moreover, \(h(z)\) is given by an explicit formula
\[h(z)=\frac{2}{\pi}\arg\left(\frac{1+z}{1-z}\right).\]
This function is clearly Lipschitz near \(0\).
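For completeness, the Lipschitz bound can be made explicit by a direct expansion near the origin:
\[\frac{1+z}{1-z}=1+2z+O(|z|^{2}),\qquad\text{so}\qquad h(z)=\frac{2}{\pi}\operatorname{Im}\log\frac{1+z}{1-z}=\frac{4}{\pi}\operatorname{Im}z+O(|z|^{2}),\]
hence \(|h(z)|\leq\frac{4}{\pi}|z|+O(|z|^{2})\) for \(|z|\) small.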
It is a classical fact that \(L_{\mathbb{R}}(z)\) is Lipschitz continuous and that \(L_{[-1,1]}(z)\) is \(\frac{1}{2}\)-Holder continuous. Both results are optimal because we have explicit formulas for these extremal functions. We obtain the following generalization.
**Proposition 5.19**.: _Assume that \(E\subset\mathbb{C}^{n}\) is a compact set satisfying the conditions (a), (b) and (c) in Definition 5.15 at every point \(a\in E\). Then, \(L_{E}\) is \(1/m\)-Holder continuous._
Proof.: By Blocki's result (5.4) it is enough to prove that \(L_{E}\) is \(\frac{1}{m}\)-Holder continuous on \(E\), i.e., that there exists \(C>0\) such that for every \(a\in E\),
\[L_{E}(z)\leq C|z-a|^{\frac{1}{m}}\]
for \(|z-a|\leq\delta.\) In fact, for \(z\in\mathbb{C}^{n}\) such that \(\mathrm{dist}(z,a)\leq M\delta^{m}\) there exists \(\tau\in\mathrm{U}_{+}\cap\mathrm{U}(\delta)\) such that \(z=f_{a}(\tau)\). Therefore,
\[L_{E}(z)=L_{E}(f_{a}(\tau))\]
Observe that \(v:=L_{E}\circ f_{a}\) is subharmonic in the interior of \(\mathrm{U}_{+}\) and \(v\in C(\mathrm{U}_{+})\). It satisfies \(\sup_{\mathrm{U}_{+}}v\leq C\) for a constant depending only on \(E\), and \(v\equiv 0\) on \([-1,1]\). In particular, \(v/C\) is a candidate in the envelope of Lemma 5.18. Hence,
\[v(\tau)\leq Ch(\tau)\leq C\delta,\]
where \(h(\tau)\) is the harmonic measure with respect to the interval \([-1,1]\subset\partial\mathrm{U}_{+}\) and we used the fact that \(|h(\tau)|\leq c_{1}|\tau|\leq c_{1}\delta\). Combining this with the above identity and rescaling, we conclude that \(L_{E}(z)\leq C\delta^{\frac{1}{m}}\).
**Remark 5.20**.: Suppose that the \(\mu\)-Holder coefficient, \(0<\mu\leq 1\), of the analytic half-disc \(f_{a}\) at \(0\) is bounded by a constant \(A\) that is independent of the point \(a\). Then from the above proof we easily get the local Holder regularity at \(a\in E\) with the exponent \(1/m\) and of order \(q=1/\mu\), i.e.,
\[L_{E\cap B(a,r)}(z)\leq\frac{CA^{\frac{1}{\mu}}\delta^{\frac{1}{m}}}{r^{\frac {1}{\mu}}},\quad\mathrm{dist}(z,a)\leq\delta,\]
where \(C\) is independent of \(r\) and the point \(a\).
It is proved by Vu [25, Proposition 2.5] and by Sadullaev and Zeriahi [17] that a \(C^{2}\)-smooth generic submanifold of \(\mathbb{C}^{n}\) (or of a complex manifold) can be slid transversally by analytic half-discs, with a uniform control of the Lipschitz norm of \(f_{a}\) at \(0\). As a consequence, the extremal functions of these generic submanifolds are Lipschitz continuous. Furthermore, these manifolds have local HCP with the exponent \(\mu=1\) and of order \(q=1\). It would be interesting to obtain more examples of sets satisfying Definition 5.15.
## 6. Applications
### Equidistribution speed for Fekete points
Let \(\mathcal{P}_{d}(\mathbb{C}^{n})\) be the space of complex-valued polynomials of degree at most \(d\). Its dimension is \(N_{d}=\binom{n+d}{n}\); let \(\{e_{1},...,e_{N_{d}}\}\) be an ordered system of all monomials \(z^{\alpha}:=z_{1}^{\alpha_{1}}\cdots z_{n}^{\alpha_{n}}\) with \(|\alpha|=\alpha_{1}+\cdots+\alpha_{n}\leq d\), where \(\alpha_{i}\in\mathbb{N}\). For each system \(x^{(d)}=\{x_{1},...,x_{N_{d}}\}\) of \(N_{d}\) points of \(\mathbb{C}^{n}\) we define the generalized Vandermondian \(\operatorname{VDM}(x^{(d)})\) by
\[\operatorname{VDM}(x^{(d)}):=\det[e_{i}(x_{j})]_{i,j=1,...,N_{d}}.\]
Let \(K\subset\mathbb{C}^{n}\) be a non-pluripolar compact subset. Following [14] we say that a _Fekete configuration of order \(d\)_ for \(K\) is a system \(\xi^{(d)}=\{\xi_{1},...,\xi_{N_{d}}\}\) of \(N_{d}\) points of \(K\) that maximizes the function \(|\operatorname{VDM}(x^{(d)})|\) on \(K\), i.e.,
\[\left|\operatorname{VDM}(\xi^{(d)})\right|=\max\left\{\left| \operatorname{VDM}(x^{(d)})\right|:x^{(d)}\subset K\right\}.\]
Given a Fekete configuration \(\xi^{(d)}\) of \(K\), we consider the probability measure on \(\mathbb{C}^{n}\) defined by
\[\mu_{d}:=\frac{1}{N_{d}}\sum_{j=1}^{N_{d}}\delta_{\xi_{j}},\]
where \(\delta_{x}\) denotes the Dirac measure concentrated at the point \(x\). It is called the Fekete measure of order \(d\) in [14, Definition 1.4]. It is known for \(n=1\) that \(\{\mu_{d}\}\) converges weakly to the equilibrium measure \(\mu_{\text{eq}}\) of \(K\) as \(d\) goes to infinity. In a fundamental paper, Berman, Boucksom and Witt Nystrom [1] proved the generalization of this result for \(n\geq 2\). Namely,
\[\lim_{d\to\infty}\mu_{d}=\mu_{\text{eq}},\qquad\text{where }\mu_{\text{eq}}= \frac{(dd^{c}L_{K}^{*})^{n}}{\int_{\mathbb{C}^{n}}(dd^{c}L_{K}^{*})^{n}}, \tag{6.1}\]
in the weak topology of measures. This result coincides with the classical one for \(n=1\) and it was listed as an open problem in [18, 15.3] and [17, Problem 3.3].
In fact this problem can be considered as the special case \(K\subset\mathbb{C}^{n}\subset\mathbb{P}^{n}\) of a very general reformulation in the framework of a big line bundle over a complex manifold in [1]. This is possible by observing that the space \(\mathcal{P}_{d}(\mathbb{C}^{n})\) is isomorphic to the space \(\mathcal{H}_{d}(\mathbb{C}^{n+1})\) of homogeneous polynomials of degree \(d\) in \((n+1)\) variables. The latter space can be identified with the space of global holomorphic sections \(H^{0}(\mathbb{P}^{n},\mathcal{O}(d))\), where \(\mathcal{O}(d)\) is the \(d\)-th tensor power of the tautological line bundle \(\mathcal{O}(1)\) over \(\mathbb{P}^{n}\). We refer the reader to [10] for a self-contained proof which is derived from [1] and [1] using only (weighted) pluripotential theory in \(\mathbb{C}^{n}\).
The next basic question is to estimate the speed of convergence in (6.1). The general case was obtained by Dinh, Ma and Nguyen [17] (see also [11] for the special case where \(K=X\) is a compact projective manifold and the weight is smooth and strictly plurisubharmonic), where the crucial \((\mathscr{C}^{\alpha},\mathscr{C}^{\alpha^{\prime}})\)-regularity in Definition 1.1 was introduced.
Note again that the regularity of weighted extremal functions is local and invariant under biholomorphic maps. Without loss of generality we restrict ourselves to compact subsets of \(\mathbb{C}^{n}\) as in Section 5. Hence, an immediate consequence of the characterization in Theorem 1.4 and Corollary 1.3 is
**Lemma 6.1**.: _All local \(\alpha\)-HCP compact subsets of order \(q\) in \(\mathbb{C}^{n}\subset\mathbb{P}^{n}\) are \((\mathscr{C}^{\alpha},\mathscr{C}^{\alpha^{\prime}})\)-regular, where \(\alpha^{\prime}\) is explicitly computed in terms of \(\alpha\) and \(q\)._
By Remark 4.3 and the proof of Theorem 1.4 we can compute the exponent \(\alpha^{\prime}=\alpha^{\prime\prime 2}/(\alpha^{\prime\prime}+2+q)\), where \(\alpha^{\prime\prime}=\alpha^{2}/(1+q)\). Combined with [17, Theorem 1.5], this shows that we have a large number of new examples from Sections 5.2, 5.3, and 5.4 for which the following estimate for the speed of convergence holds.
**Theorem 6.2**.: _Let \(K\subset\mathbb{C}^{n}\) be a local \(\alpha\)-HCP compact subset of order \(q\). Then, there exists a constant \(C=C(K,\alpha)\) such that for every test function \(v\in\mathscr{C}^{\alpha}\) (the Holder space on \(\mathbb{C}^{n}\)) and for every Fekete configuration \(\xi^{(d)}\) of \(K\),_
\[|\left\langle\mu_{d}-\mu_{\mathrm{eq}},v\right\rangle|\leq C\|v\|_{\mathscr{C }^{\alpha}}d^{-\alpha^{\prime}}.\]
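The dependence of the convergence exponent on \(\alpha\) and \(q\) is easy to lose track of. The following small Python sketch (not part of the original argument; the function name is ours) simply evaluates the two formulas above to show how quickly \(\alpha^{\prime}\) decays.

```python
# Evaluate the exponent alpha' of Theorem 6.2 from the formulas
# alpha'' = alpha^2 / (1 + q) and alpha' = alpha''^2 / (alpha'' + 2 + q).
def convergence_exponent(alpha: float, q: float) -> float:
    alpha2 = alpha ** 2 / (1 + q)
    return alpha2 ** 2 / (alpha2 + 2 + q)

# A Lipschitz-type set (alpha = 1) of order q = 1 already gives a small exponent.
print(convergence_exponent(1.0, 1.0))  # 0.25 / 3.5 = 0.0714...
```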
Notice that the holomorphic maps which preserve HCP are characterized in [10]. It is likely that the criterion can be extended to the local HCP case. If this is true, then we would obtain compact sets, via nice holomorphic maps, which are \((\mathscr{C}^{\alpha},\mathscr{C}^{\alpha^{\prime}})\)-regular.
### Weighted extremal functions in a big cohomology class
In this section we consider the extremal functions with respect to a big form. Let \(\theta\) be a real smooth \((1,1)\)-form on \(X\) such that its cohomology class is big, i.e., there is a quasiplurisubharmonic function \(\rho\) and a constant \(c>0\) such that
\[\theta+dd^{c}\rho\geq c\;\omega.\]
Without loss of generality we may assume that \(c=1\) and \(\sup_{X}\rho=0\) in what follows. Denote for a (non-pluripolar) compact subset \(K\subset X\),
\[V_{\theta;K}(z)=\sup\{v(z):v\in PSH(X,\theta),\;v_{|_{K}}\leq 0\},\]
and for a continuous function \(\phi\),
\[V_{\theta;K,\phi}(z)=\sup\{v(z):v\in PSH(X,\theta),\;v_{|_{K}}\leq\phi\}.\]
In contrast to the Kahler case, we do not know whether these extremal functions are lower semi-continuous. They may have poles, and in general the polar sets \(\{V_{\theta;K}=-\infty\}\) and \(\{V_{\theta;K,\phi}=-\infty\}\) are non-empty.
The first observation is that a Borel set \(P\subset X\) is (locally) pluripolar if and only if it is contained in the polar set of a \(\theta\)-plurisubharmonic function. In fact, assume \(P\) is pluripolar, then \(P\subset\{u=-\infty\}\) for some \(u\in PSH(X,\omega)\) by [11, Theorem 12.5]. Hence, \(P\) is contained in the polar set of \(v:=u/2+\rho/2\in PSH(X,\theta)\). Conversely, for \(u\in PSH(X,\theta)\) the set \(\{u=-\infty\}\) is locally pluripolar.
**Proposition 6.3**.: _Let \(E\subset X\) be a Borel set._

(a) _\(E\) is non-pluripolar if and only if \(V_{\theta;E}^{*}\in PSH(X,\theta)\)._

(b) _Let \(P\) be a pluripolar set; then \(V_{\theta;E}^{*}=V_{\theta;E\cup P}^{*}\)._
Proof.: (a) Assume \(E\subset\{\psi=-\infty\}\) for some \(\psi\in PSH(X,\theta)\) with \(\sup_{X}\psi=0\). Then, for every \(c\in\mathbb{R}\), the function \(\psi+c\) belongs to \(PSH(X,\theta)\) and is negative on \(E\). It follows that \(\psi+c\leq V_{\theta;E}\) for every \(c\in\mathbb{R}\). So, \(V_{\theta;E}=+\infty\) on \(X\setminus\{\psi=-\infty\}\). Conversely, assume \(E\) is non-pluripolar, then \(V_{\omega;E}^{*}\in PSH(X,\omega)\). Let \(A>0\) be a constant such that \(\theta\leq A\omega\). We easily have \(V_{\theta;E}^{*}\leq V_{A\omega;E}^{*}=AV_{\omega;E}^{*}\). Hence, \(V_{\theta;E}^{*}\in PSH(X,\theta)\).
(b) By the monotonicity (4.1) of \(V_{\theta;E}^{*}\), it is enough to show that \(V_{\theta;E}^{*}\leq V_{\theta;E\cup P}^{*}\). Indeed, from the above observation we may assume \(P\subset\{\psi=-\infty\}\) for some \(\psi\in PSH(X,\theta)\). Let \(v\in PSH(X,\theta)\) be such that \(v\leq 0\) on \(E\). For every \(\varepsilon>0\), we have \(v_{\varepsilon}:=(1-\varepsilon)v+\varepsilon\psi\in PSH(X,\theta)\) and \(v_{\varepsilon}\leq 0\) on \(E\cup P\). So, \(v_{\varepsilon}\leq V_{\theta;E\cup P}\) and letting \(\varepsilon\to 0\) we get
\[v\leq V_{\theta;E\cup P}\quad\text{on }X\setminus\{\psi=-\infty\}.\]
The set \(\{\psi=-\infty\}\) is pluripolar, so it has zero measure. Hence, \(V_{\theta;E}^{*}\leq V_{\theta;E\cup P}^{*}\) everywhere.
As an application we give a sufficient condition for the regularity of the pair \((E,\phi)\) as in [1, Definition 1.4]. Consequently, by the characterization (Theorem 1.4-(a)) we have many more regular pairs from Section 5.1.
**Lemma 6.4**.: _If \(V_{K}\) is continuous, then \(V_{\theta;K,\phi}\) is upper-semicontinuous for every continuous function \(\phi\)._
Proof.: Here the extremal function \(V_{K}\) is defined with respect to a given Kahler form \(\omega\). Let \(\phi\) be a continuous function on \(X\) and assume \(\theta\leq\widetilde{\omega}\) for some Kahler form \(\widetilde{\omega}\). Then, \(V_{\theta;K,\phi}^{*}\leq V_{\widetilde{\omega};K,\phi}^{*}\). Furthermore, by the assumption and Remark 2.5, \(V_{\widetilde{\omega};K}\) is continuous. Applying Corollary 1.3 and Proposition 4.2 we infer that \(V_{\widetilde{\omega};K,\phi}\leq\phi\) on \(K\). Hence, \(V_{\theta;K,\phi}^{*}\leq\phi\) on \(K\), and the upper semicontinuity follows easily from the definition.
Conversely, if \(V_{\theta;K,\phi}\) is upper semicontinuous for every continuous function \(\phi\), then an adaptation of the argument in [1, Proposition 6.1] shows that \(K\) is locally \(L\)-regular. Hence, \(V_{K}\) is continuous. Thus, the converse of the above lemma is also true. However, it is not known whether the upper semicontinuity of \(V_{\theta;K}\) is independent of the big form, as in the Kahler case.
|
2310.07816 | Quantitative analysis of MoS$_2$ thin film micrographs with machine
learning | Isolating the features associated with different materials growth conditions
is important to facilitate the tuning of these conditions for effective
materials growth and characterization. This study presents machine learning
models for classifying atomic force microscopy (AFM) images of thin film
MoS$_2$ based on their growth temperatures. By employing nine different
algorithms and leveraging transfer learning through a pretrained ResNet model,
we identify an effective approach for accurately discerning the characteristics
related to growth temperature within the AFM micrographs. Robust models with up
to 70% test accuracies were obtained, with the best performing algorithm being
an end-to-end ResNet fine-tuned on our image domain. Class activation maps and
occlusion attribution reveal that crystal quality and domain boundaries play
crucial roles in classification, with models exhibiting the ability to identify
latent features beyond human visual perception. Overall, the models
demonstrated high accuracy in identifying thin films grown at different
temperatures despite limited and imbalanced training data as well as variation
in growth parameters besides temperature, showing that our models and training
protocols are suitable for this and similar predictive tasks for accelerated 2D
materials characterization. | Isaiah A. Moses, Wesley F. Reinhart | 2023-10-11T18:59:03Z | http://arxiv.org/abs/2310.07816v2 | # Quantitative Analysis of MoS\({}_{2}\) Thin Film Micrographs with Machine Learning
###### Abstract
Isolating the features associated with different materials growth conditions is important to facilitate the tuning of these conditions for effective materials growth and characterization. This study presents machine learning models for classifying atomic force microscopy (AFM) images of thin film MoS\({}_{2}\) based on their growth temperatures. By employing nine different algorithms and leveraging transfer learning through a pretrained ResNet model, we identify an effective approach for accurately discerning the characteristics related to growth temperature within the AFM micrographs. Robust models with up to 70% test accuracies were obtained, with the best performing algorithm being an end-to-end ResNet fine-tuned on our image domain. Class activation maps and occlusion attribution reveal that crystal quality and domain boundaries play crucial roles in classification, with models exhibiting the ability to identify latent
features beyond human visual perception. Overall, the models demonstrated high accuracy in identifying thin films grown at different temperatures despite limited and imbalanced training data as well as variation in growth parameters besides temperature, showing that our models and training protocols are suitable for this and similar predictive tasks for accelerated 2D materials characterization.
Keywords: MoS2 thin film, Morphological features, Machine learning, Transfer learning, Explainable AI
## 1 Introduction
Material properties are significantly influenced by conditions experienced during synthesis [1, 2, 3, 4, 5]. A systematic way of isolating the properties associated with different conditions is essential to enable growth of materials with predefined properties on demand. We particularly seek approaches that eliminate intuition-based experimentation with different process variables, replacing them with data-driven approaches which are more efficient with time, effort, and other resources.
Several studies on thin film MoS2 have revealed a number of growth parameters that determine the morphological features and properties of the grown materials. Instances include the shape evolution of monolayer MoS2 crystals grown by chemical vapor deposition (CVD) [6]. Domain shape variation from triangular to hexagonal geometries has been shown to depend on the Mo:S ratio of the precursors [6]. Similarly, MoS2 domain shapes of mainly round, nearly round and hexagonal, truncated triangular, and triangular are observed at MoO3 precursor temperatures of 760\({}^{\circ}\)C, 750\({}^{\circ}\)C, 730\({}^{\circ}\)C, and 710\({}^{\circ}\)C, respectively [7].
The domain density and size have also been shown to decrease with temperature [7, 8], with a random orientation of the MoS2 domains associated with growth temperatures below 850\({}^{\circ}\)C [9] or at much higher temperatures [10]. In the former, the authors linked the phenomenon to the inability to attain a thermodynamically stable state at the lower temperature, and in the latter, the inferred culprit is the step edges and step-edge meanderings of the substrate (sapphire) surface.
The grain size and crystal coverage of the MoS\({}_{2}\) have also been shown to be tunable with the growth time [7]. The authors showed that the grain size increased when the growth time was increased from 20 minutes to 30 minutes. With the materials grown for 45 minutes, the grains merged to form a continuous MoS\({}_{2}\) film [7]. Similarly, an increase in growth temperature [8] and O\({}_{2}\) flow rate [11] was shown to result in larger thin film crystal coverage.
In designing high throughput on-demand materials, deployment of data-based screening approaches has become more critical [12, 13, 14, 15, 16, 17]. Data-driven approaches are being explored for materials characterization [18, 19, 20, 21, 22] and serve to provide greater clarity when searching the synthesis condition space compared to intuition-based experimentation [23, 24, 25, 26, 27]. Using existing data consisting of the conditions and the corresponding materials properties, models that predict what conditions are necessary for given properties can be developed. As observed, a number of these conditions play similar and intertwined roles in the materials properties. For instance, time, temperature, and O\({}_{2}\) flow rate all determine the MoS\({}_{2}\) thin film crystal coverage [7, 8, 11]. It will be interesting to use machine learning to isolate the distinct latent features associated with the different growth parameters. Additionally, identifying distinct latent features for these different growth parameters would result in the capability to classify material samples based on their growth conditions.
The Lifetime Sample Tracking (LiST) database, hosted by Penn State's 2D Crystal Consortium (2DCC) facility, consists of experimentally grown thin film transition metal chalcogenide materials, among others. Among the characterization methods stored in LiST is atomic force microscopy (AFM). AFM images of 2D MoS\({}_{2}\) and their corresponding synthesis conditions are one set of data among other categories in LiST [28, 10, 29]. To accelerate the synthesis of MoS\({}_{2}\) with the desired properties, we deploy different machine learning (ML) models to classify AFM images of the material based on their growth temperature. Despite the limited data available for training, up to 71% test accuracy was obtained on the image classification. Most importantly, this study presents a simple approach that could help isolate underlying morphological features associated with different growth conditions for a broad range of materials, paving the way for rapid and cost-effective materials development.
## 2 Methods
### Data Preparation
Raw spm files of MoS\({}_{2}\) were retrieved from LiST [30]. These 262 AFM height maps were processed into greyscale images and either resized or randomly cropped to the common size of \(224\times 224\), depending on the augmentation method adopted, as discussed below. Training computer vision models on such a small dataset requires transfer learning, a common approach that utilizes CNN models pretrained on one image domain to extract features from a new image domain [31, 32, 33]. Many popular pretrained CNNs, such as the VGG [34], ResNet [35], and Inception model [36, 37] architectures, were trained on the ImageNet dataset [38]. ImageNet contains millions of color images of natural objects from thousands of categories. Because of the small data volume in our characterization problem, we use the size of the model architecture as the main basis for our choice and adopt the ResNet18 architecture pretrained on ImageNet for transfer learning.
However, our data distribution is very different from the ImageNet data. To evaluate the effect of the pretraining domain, we consider pretraining on micrographs contained in the MicroNet dataset [39], which should be more similar to our image domain. The MicroNet dataset has been shown to give better performance on micrographs, indicating that the proximity of the two image domains should enhance the model performance [39]. We have therefore additionally used ResNet18 pretrained on the MicroNet dataset. This will enable us to compare how the same model architecture pretrained on different datasets performs on our characterization task. Features were extracted from the pretrained models for our shallow ML models. The pretrained convolutional models were also fine-tuned for the CNN model in our study (Figure 1).
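As a concrete illustration of the two transfer-learning routes in Figure 1 (frozen feature extraction for the shallow models versus end-to-end fine-tuning for the CNN), the following PyTorch sketch is indicative only: the exact weight-loading argument depends on the torchvision version, and the MicroNet variant would be loaded from its own checkpoint rather than the ImageNet weights shown here.

```python
import torch
import torch.nn as nn
from torchvision import models

# ResNet18 pretrained on ImageNet.
backbone = models.resnet18(weights="IMAGENET1K_V1")

# (a) Feature extraction: drop the classification head and freeze the
# convolutional filters; global average pooling yields a 512-d vector.
feature_extractor = nn.Sequential(*list(backbone.children())[:-1])
for p in feature_extractor.parameters():
    p.requires_grad = False

# (b) End-to-end fine-tuning: replace the 1000-way ImageNet head with a
# three-way classifier (900, 950, 1000 C) and train all weights.
finetune_model = models.resnet18(weights="IMAGENET1K_V1")
finetune_model.fc = nn.Linear(finetune_model.fc.in_features, 3)

x = torch.randn(4, 3, 224, 224)          # greyscale AFM crops replicated to 3 channels
feats = feature_extractor(x).flatten(1)  # (4, 512): input to the shallow models
logits = finetune_model(x)               # (4, 3): class scores
```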
### Data Augmentation
The dataset consists of 262 instances of AFM height maps across 3 growth temperatures (Figure 2). In addition to the limited data, there is a significant imbalance among the different classes, with the 900\({}^{\circ}\)C, 950\({}^{\circ}\)C, and 1000\({}^{\circ}\)C classes making up 11%, 50%, and 39%, respectively (Table 1).
The effect of limited and imbalanced data on the model performance can be partially mitigated with data augmentation approaches. Different data augmentation policies were therefore deployed to determine which method works best for our small, imbalanced dataset.
Figure 1: An overview of the transfer learning approach. (top) A ResNet CNN model is trained on a different image domain with a large number of images. The task may be unrelated to the present task – all that matters is that convolutional filters are learned that can extract information (e.g., texture, color, shapes) from the images. (middle) The filters from the pretrained model can be used directly to extract relevant image features, which are interpreted in a supervised manner by a shallow model to predict a new label, such as the growth temperature. (bottom) Alternatively, the filters from the pretrained model can be fine-tuned on the new image domain to better capture relevant information for the task at hand.
The first was to randomly crop a common size of \(224\times 224\) from each of the original images. Multiple croppings were carried out, depending on the class of the image, in order to obtain a balanced representation of the different classes. This augmentation policy is termed _Aug1_ (Table 1). Another augmentation policy examined is that developed by Cubuk, et al [40], which we refer to as _Aug2_ hereafter. The authors used a search algorithm to find the best policy, which is a combination of many sub-policies consisting of functions such as translation, rotation, or shearing, and the probabilities and magnitudes with which the functions are applied, that give the best validation accuracy on a target dataset. Interestingly, they observed that the policy learned on a given dataset is transferable to another. We therefore examined how transferable the policy learned on ImageNet is to our present data domain. The third augmentation method used is a weighted random sampler or oversampling to correct the imbalance in the training set (_Aug3_). For _Aug4_, there is no biased augmentation applied to the data; only in the CNN models do we apply random rotations between 0 and 180\({}^{\circ}\) and horizontal and vertical flipping at 50% probability to the train and validation sets on the fly.
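A minimal sketch of the _Aug3_-style rebalancing together with the on-the-fly rotations and flips used for the CNN is given below. It is illustrative only: the class counts are taken from the train split in Table 1, while the image tensor is a zero-filled placeholder standing in for the actual AFM crops.

```python
import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler
from torchvision import transforms

# On-the-fly augmentation for the CNN train/validation images.
train_transform = transforms.Compose([
    transforms.RandomRotation(degrees=180),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
])

# Aug3-style rebalancing: sample with probability inversely proportional to
# class size (train counts 23 / 105 / 83 for 900 / 950 / 1000 C, Table 1).
labels = np.array([0] * 23 + [1] * 105 + [2] * 83)
class_counts = np.bincount(labels)
sample_weights = torch.as_tensor(1.0 / class_counts[labels], dtype=torch.double)
sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels), replacement=True)

# Placeholder dataset of 224x224 greyscale crops.
images = torch.zeros(len(labels), 1, 224, 224)
loader = DataLoader(TensorDataset(images, torch.as_tensor(labels)),
                    batch_size=16, sampler=sampler)
```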
### Machine learning
A 10-fold cross-validation training scheme was used to train and evaluate the models. The data were shuffled into train and validation sets 10 times using different random states to ensure a different split each time. In each instance, the training set was used to train the model parameters while the validation set was used to determine the performance for hyperparameter tuning using grid search. A held-out test set (not involved in the shuffle-split procedure) was then used to evaluate the model performance in general.
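The shuffle-split scheme with grid search can be written compactly with scikit-learn; the snippet below is a sketch with random stand-in features and an arbitrary hyperparameter grid, not the actual search space used in the study.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedShuffleSplit
from sklearn.svm import SVC

# Stand-in data: 100-d image feature vectors and growth-temperature labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(211, 100))
y = rng.integers(0, 3, size=211)

# Ten shuffled train/validation splits with different random states.
cv = StratifiedShuffleSplit(n_splits=10, test_size=0.1, random_state=0)
grid = GridSearchCV(SVC(),
                    param_grid={"C": [0.1, 1, 10], "gamma": ["scale", "auto"]},
                    cv=cv)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```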
Nine different ML models were considered: support vector classifier (SVC) [41, 42], kernel ridge classifier (KRC) [43], radius neighbors classifier (RNN) [44], Gaussian process classifier (GPC) [45], k-nearest-neighbors classifier (KNN) [44], decision tree classifier (DTC) [46], gradient boost classifier (GBC) [47], multilayer perceptron (MLP) [48], and convolutional neural network (CNN) [49, 50]. The shallow models were developed using the scikit-learn library [51] and the MLP and CNN were implemented in pytorch [52].
Using AFM images of 2D MoS\({}_{2}\) grown with MOCVD, we developed models to predict
\begin{table}
\begin{tabular}{c|c c c|c c c|c c c|c} \hline & \multicolumn{3}{c|}{**900\({}^{\circ}\)C**} & \multicolumn{3}{c|}{**950\({}^{\circ}\)C**} & \multicolumn{3}{c|}{**1000\({}^{\circ}\)C**} & **Total** \\ & train & validation & test & train & validation & test & train & validation & test & \\ \hline _Aug1_ & 207 & 27 & 3 & 212 & 22 & 13 & 215 & 24 & 11 & 726 \\ _Aug2_ & 207 & 27 & 3 & 208 & 24 & 13 & 210 & 23 & 11 & 734 \\ _Aug3_ & 105 & 3 & 3 & 105 & 12 & 13 & 105 & 9 & 11 & 342 \\ _Aug4_ & 23 & 3 & 3 & 105 & 12 & 13 & 83 & 9 & 11 & 262 \\ \hline \end{tabular}
\end{table}
Table 1: Data augmentation policies and the corresponding data sets for the different classes, 900\({}^{\circ}\)C, 950\({}^{\circ}\)C, and 1000\({}^{\circ}\)C. In _Aug1_, multiple random cropping of image size \(224\times 224\) is used to obtain balanced instances among the different classes, _Aug2_ is augmentation policy learned on ImageNet [40], and in _Aug3_ weighted random sampler and oversampling are used to correct the imbalance in train set for CNN and other models, respectively. _Aug4_ is without biased augmentation. In CNN models, random rotations between 0 to 180\({}^{\circ}\), horizontal and vertical flipping at 50% probability were additionally used on the train and validation set on the fly.
the growth temperature (one of \(900^{\circ}\)C, \(950^{\circ}\)C, or \(1000^{\circ}\)C). We considered framing the task in several different ways to evaluate the efficacy of each: nominal classification, ordinal classification, and regression. Here nominal classification means the three growth temperatures were considered as distinct classes with no ordering. Unless otherwise specified, results are for nominal classifiers.
For ordinal classification, we implement NNRank [53] to account for ordering within the classes; the targets \(900\), \(950\), and \(1000^{\circ}\)C are transformed into the vectors \([1,0,0]\), \([1,1,0]\), and \([1,1,1]\), respectively. At inference time, a threshold of \(>0.5\) is applied to the prediction and the values are counted from left to right, which provides the class label. Note that this scheme is only applied to the NN models (MLP and CNN). Finally, we perform regression by simply using the growth temperatures as continuous labels and evaluating the MSE. The class labels are obtained by binning the predicted growth temperature (e.g., \(925-975^{\circ}\)C belongs to the \(950^{\circ}\)C class).
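A minimal reading of this encoding/decoding step is sketched below (our own helper names; counting stops at the first entry below threshold, which is one way to read "counted from left to right").

```python
import numpy as np

TEMPS = [900, 950, 1000]

def encode_ordinal(temp_c: int) -> np.ndarray:
    """Cumulative target, e.g. 950 C -> [1, 1, 0]."""
    k = TEMPS.index(temp_c)
    return (np.arange(len(TEMPS)) <= k).astype(float)

def decode_ordinal(pred: np.ndarray) -> int:
    """Threshold at 0.5 and count 1s from the left until the first 0."""
    k = int((pred > 0.5).cumprod().sum()) - 1
    return TEMPS[max(k, 0)]

print(encode_ordinal(950))                        # [1. 1. 0.]
print(decode_ordinal(np.array([0.9, 0.7, 0.2])))  # 950
```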
## 3 Results and Discussion
### Depth of Image Features
\begin{table}
\begin{tabular}{l|c c|c c|c c|c c|c} \hline & \multicolumn{2}{c|}{**Block 2**} & \multicolumn{2}{c|}{**Block 3**} & \multicolumn{2}{c|}{**Block 4**} & \multicolumn{2}{c|}{**Pooling**} & **Fine-Tuned** \\ Channels & \multicolumn{2}{c|}{\(100352\)} & \multicolumn{2}{c|}{\(50176\)} & \multicolumn{2}{c|}{\(25088\)} & \multicolumn{2}{c|}{\(512\)} & \(100\) \\ \hline CEV & 85\% & 99\% & 85\% & 99\% & 85\% & 99\% & 85\% & 99\% & - \\ Features & 190 & 235 & 156 & 235 & 94 & 219 & 28 & 142 & 100 \\ \hline SVC & 66\(\pm\)6 & 59\(\pm\)7 & 64\(\pm\)5 & 62\(\pm\)8 & 77\(\pm\)6 & 58\(\pm\)5 & 78\(\pm\)5 & 71\(\pm\)6 & 80\(\pm\)7 \\ KRC & 45\(\pm\)5 & 57\(\pm\)4 & 48\(\pm\)11 & 55\(\pm\)5 & 58\(\pm\)11 & 52\(\pm\)5 & 57\(\pm\)7 & 58\(\pm\)12 & 71\(\pm\)7 \\ RNN & 21\(\pm\)4 & 15\(\pm\)2 & 35\(\pm\)11 & 15\(\pm\)3 & 42\(\pm\)9 & 20\(\pm\)5 & 57\(\pm\)7 & 39\(\pm\)7 & 70\(\pm\)9 \\ \hline \end{tabular}
\end{table}
Table 2: Validation accuracy (in %) based on the features extracted from the different layers of the pretrained model (ResNet18 pretrained on ImageNet). Channels is the total size of raw feature vectors extracted from each block of the ResNet. PCA was applied to these channels, and then cumulative explained variance (CEV) of the components from PCA was used to determine the size of the input features for the listed shallow models. Separately, the dense layers of the pretrained model were replaced with fewer neurons and fine-tuned (last column).
We first determined the best location in the pretrained model from which to extract image features for our models. Different portions ("blocks") of the ResNet were considered, providing filters with different levels of abstraction. Due to the large number of channels in the pretrained model (see Table 2), Principal Component Analysis (PCA) was applied to reduce the dimension of input features to the shallow models, ideally reducing overfitting and thus improving predictive performance.[14, 54, 55] Cumulative explained variance thresholds of 85% and 99% were used to determine the number of features to keep for inference. We found that within a block, using fewer features gave better performance in 9 of 12 cases despite lower explained variance, likely because we had few training data compared to the size of the feature vectors. Depending on the model architecture and number of features used, minimal or significant deviations in model performance could be obtained from any of the ResNet blocks (e.g., 66%, 64%, 77%, and 78% accuracy from subsequent blocks, with typical standard deviation \(\pm\)6%).
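The dimensionality-reduction step can be expressed compactly; the helper below (our naming) keeps the smallest number of principal components whose cumulative explained variance reaches the chosen threshold, using random data as a stand-in for the extracted channel activations.

```python
import numpy as np
from sklearn.decomposition import PCA

def reduce_features(features: np.ndarray, cev_threshold: float = 0.85) -> np.ndarray:
    """Project onto the fewest components reaching the cumulative explained
    variance threshold (85% or 99% in Table 2)."""
    pca = PCA().fit(features)
    cev = np.cumsum(pca.explained_variance_ratio_)
    n_components = int(np.searchsorted(cev, cev_threshold)) + 1
    return pca.transform(features)[:, :n_components]

rng = np.random.default_rng(0)
block4 = rng.normal(size=(235, 25088))   # e.g. flattened Block-4 activations
print(reduce_features(block4, 0.85).shape)
```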
Separately, the dense layers of the pretrained model were replaced with new ones with fewer neurons and then fine-tuned on our training data. Finally, 100 features were extracted from the first dense layer. The performance of the selected classifiers on the different features shows that the features extracted from the fine-tuned dense layer give the best performance overall, with 80%, 71%, and 70% accuracy using SVC, KRC, and RNN, respectively. Training the dense layer on a pretrained convolutional backbone might therefore be a better approach for extracting a low-dimensional image feature vector compared to PCA. These tuned features are therefore used in all of the following analysis.
### Data Augmentation
We then evaluated the effect of different data augmentation policies using the SVC, KNN, and CNN models (Table 1 and Figure 3). Significantly worse performances are obtained with _Aug1_ and _Aug2_, especially in the shallow models, compared to _Aug3_ and _Aug4_. Meanwhile, the performance observed between _Aug3_ and _Aug4_ is statistically indistinguishable.
Figure 3: Accuracy obtained from different augmentation policies across three different model types. Bars report averages over 10 folds, while error bars indicate standard deviation. Some models were trained with increased data size to have a balanced classes using different augmentation approaches, as indicated in Table 1.
The poor performance observed with _Aug1_ and _Aug2_ might be related to the properties of the images learned by the models. While in the case of natural images the activation of different classes is typically associated with unique features of those classes,[56, 57, 58] the class activation in the models for the different synthesis conditions is more likely due to differences in magnitude of the same feature, such as the domain size and thickness.[3, 4] These relevant features of the AFM images may be disrupted by the shearing, zooming, and resizing associated with _Aug2_, and the feature locations in the image might be omitted due to the cropping in _Aug1_.
Although _Aug3_ and _Aug4_ present about the same accuracy, _Aug3_ has the desirable property of oversampling less represented classes. This should help mitigate systematic error related to class imbalance, a feature which is typical of distributions in materials synthesis, especially when exploring different growth conditions (e.g., poorly performing conditions will probably be undersampled). Therefore, the _Aug3_ augmentation policy is selected for the rest of this study.
### Pretraining Domain
We next seek to quantify how transfer learning from the ResNet18 model pretrained on the ImageNet data domain compares with the same model architecture pretrained on the seemingly more relevant MicroNet data domain. We therefore compared the performance of each pretrained model on the same nominal classification task across a wide range of
\begin{table}
\begin{tabular}{l|c c c c c c c c c} Models & SVC & KRC & RNN & GPC & KNN & DTC & GBC & MLP & CNN \\ \hline MicroNet & **73\(\pm\)6** & 65\(\pm\)10 & 63\(\pm\)9 & 52\(\pm\)12 & 59\(\pm\)10 & 71\(\pm\)9 & 71\(\pm\)9 & 65\(\pm\)8 & 63\(\pm\)8 \\ ImageNet & 80\(\pm\)7 & 71\(\pm\)7 & 70\(\pm\)9 & 59\(\pm\)10 & 67\(\pm\)12 & 78\(\pm\)4 & 78\(\pm\)11 & **86\(\pm\)6** & 70\(\pm\)6 \\ \hline Difference & +10\% & +9\% & +11\% & +13\% & +14\% & +10\% & +10\% & +32\% & +11\% \\ \hline \end{tabular}
\end{table}
Table 3: Validation accuracy (in %) over 10 folds obtained for the feature extraction (shallow and MLP models) or end-to-end learning (CNN) with ResNet18 pretrained on ImageNet and MicroNet. Values are reported as mean \(\pm\) standard deviation. Difference is the fractional change in the average score between MicroNet and ImageNet. Best model performance in each row is shown in bold.
predictive model types. In these experiments, we used the fine-tuned features from Table 2 in all cases except CNN, which was simply fine-tuned in an end-to-end manner using the original ResNet18 architecture (i.e., with a three-way classification layer attached to the end in place of the original classification layer). Based on the results shown in Table 3, the ImageNet model gives conclusively better performance than MicroNet, with at least 9% improvement and up to 32% improvement in the case of MLP (compared to a typical uncertainty of about 6%).
While standard deviations for individual observations are high, the fact that none of the nine model types shows a negative difference is compelling, especially because MicroNet was trained on greyscale micrographs of materials while ImageNet was trained on color images of macroscale objects. Previous work has suggested that ImageNet-trained models rely more heavily on texture than shape [59], while MicroNet was designed primarily for segmentation tasks. We speculate that this focus on texture gives ImageNet filters that can be used for identifying distinguishing textures in the AFM height maps. The results presented here suggest that ImageNet may be surprisingly well suited for out-of-domain materials characterization data whose information content is primarily texture. All following results are based on transfer learning from the ImageNet pretraining since its features are strictly superior to MicroNet's.
### Model Performance
We next investigate the performance of different algorithms in greater detail. As before, we rely on the features extracted from the fine-tuning procedure above, with additional shallow models trained on these static feature vectors of each image. The CNN model is the one exception to this, as it uses the original ResNet18 architecture and is fine-tuned on this task without modification to feature size. The classification accuracy across 10 different model instances of each type is shown in Figure 4. Overfitting is observed across all model types, with training performance over 90% being typical, while validation typically only reaches around 60-85%. The greatest overfitting, in terms of the gap between train and validation
Figure 4: The average train, validation (_val_), and test accuracy over 10 models for the different algorithms. The data were shuffled 10 times with different random seeds to obtain 10 different train and validation splits. Hyperparameters were tuned to obtain a trained model for each of the 10 splits. The trained models were tested with the test set.
performance, is seen in KRC and GPC, while SVC, DTC, and MLP exhibit the least. The best performing model in terms of validation performance is the MLP, with SVC coming in second but exhibiting training and validation scores one standard deviation below the MLP.
To understand how well the models can generalize to classifying images outside of the training data, we additionally examine their performance on a held-out test set (i.e., not used for training or hyperparameter selection). In this regard, MLP again showed the highest accuracy, with GBC and GPC appearing within one standard deviation. It is reassuring to see that MLP gave the highest scores in both validation and testing, inspiring confidence in its performance overall.
To understand the model performance on the different growth temperatures in greater detail, and particularly to check if the underrepresented classes have comparable accuracy, average confusion matrices of the held-out test set over 10 models are reported in Figure 5. To focus the discussion, only the highly performant GBC and MLP models and the end-to-end CNN are examined in this regard. It is notable that the performance within each class does not vary substantially between different model types, as the overall accuracies are similar. For instance, the GBC, MLP, and CNN predict about the same number of samples grown at 950\({}^{\circ}\)C and 1000\({}^{\circ}\)C correctly (about 70% and 75% respectively). The samples grown at
Figure 5: The average confusion matrix for the test set predictions of production models trained on the 10 folds train data. Values indicate the number of samples in each bin. This is based on nominal classification.
\(900^{\circ}\)C are found to have the lowest in-class accuracy. This seems to be partially an artifact of under-representation in the test set; as shown in Table 1, classes are significantly imbalanced in the data, with the \(900^{\circ}\)C class having the fewest samples.
There is also some consistency among the models in misclassifying \(900^{\circ}\)C samples as \(950^{\circ}\)C and not as \(1000^{\circ}\)C. Similarly, \(1000^{\circ}\)C is rarely misclassified as \(900^{\circ}\)C. On the contrary, \(950^{\circ}\)C is about equally likely to be misclassified as \(900^{\circ}\)C as it is as \(1000^{\circ}\)C by the MLP and CNN. This seems to suggest that the proximity of the growth temperatures, which is expected to be reflected in the image features, makes it more likely for the model to group them together. Recall that this is for nominal classification, so this proximity is not reflected in the loss function. This could imply a fundamental bias in the data where the image features learned by the models for a given temperature are more similar to those for the adjacent temperatures.
To further understand the classification fidelity of our models, we examine images that are correctly and incorrectly classified by the CNN in Figure 6. Visual inspection suggests significantly different image features among images from the same growth temperature, demonstrating how difficult this classification task is. Some images grown at \(950^{\circ}\)C show larger crystal domains typically associated with \(1000^{\circ}\)C. Conversely, some images grown at \(1000^{\circ}\)C show poor crystal formation and very small domain sizes exhibited mostly by the \(900^{\circ}\)C growth temperature. Therefore, these wrongly classified images may be exceptional among the target class and would likely confuse even a human expert. However, they offer some preliminary insight into which features the classifier attributes to each growth temperature.
More fundamentally, other growth variables are not entirely fixed across the samples. For instance, the growth time varies significantly among the different samples (Figure 6(b)). While the shortest growth time in the test set is as low as about 100 s, some samples are grown for much longer times, up to 1650 s. Also, while most of the samples are grown on c-plane sapphire substrates, we also have some that are grown on A- and M-plane sapphire (Figure 6(c)). These inconsistent growth parameters might have accounted for the significant differences observed among the samples grown at the same temperature and might have also
Figure 6: Samples grown at 900\({}^{\circ}\)C (1-3), 950\({}^{\circ}\)C (4-16), and 1000\({}^{\circ}\)C (17-27) in the test set. The predicted class by end-to-end CNN is shown at the bottom (yellow) for each image. (b) and (c) are the samples with their growth time and substrate orientation, respectively. The AFM # in (a) corresponds to the sample # in (b) and (c).
resulted in some classification errors (e.g., images 2, 5, 15, and 18). However, we do not observe any obvious trend in these growth parameters that leads to consistent misclassification, once again demonstrating how challenging this classification task is.
### Ordinality
The preceding results were all based on nominal classification, without any notion of ordering. However, the classes consisting of the growth temperatures would appear to be ordered due to their continuous nature (i.e., ranging from 900 to 1000\({}^{\circ}\)C). We therefore further quantify the effect of ordinal treatment of the class labels on model accuracy. In accounting for ordinality in shallow (i.e., non-NN-based) models, we adopted a simple approach based on training a regressor and then binning the results into classes. For the NN-based models, we further implemented the NNrank ordinal classification scheme. The results of this study are given in Table 4.
While results vary for each model type, some general trends emerge. Accounting for ordinality in model training leads to improvement in the test accuracy in only one of the shallow models (KR), but matches or degrades the performance for all others. Most of these are statistically indistinguishable, with only SVM, GP, and MLP exhibiting significant decreases. Overall, nominal classification gave superior performance over regression, with the top performing shallow models GP and GB giving 66% accuracy.
For the NN models, MLP outperformed CNN overall, with statistically indistinguishable accuracy using nominal classification and ordinal classification. While the end-to-end CNN
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline Models & SVM & KR & RNN & GP & KNN & DT & GB & MLP & CNN \\ \hline Classification (\%) & 62\(\pm\)4 & 54\(\pm\)4 & 64\(\pm\)3 & 66\(\pm\)2 & 64\(\pm\)3 & 57\(\pm\)8 & 66\(\pm\)4 & **69\(\pm\)5** & 64\(\pm\)10 \\ NNRank (\%) & - & - & - & - & - & - & - & **71\(\pm\)4** & 68\(\pm\)3 \\ Regression (\%) & 50\(\pm\)8 & 60\(\pm\)5 & 64\(\pm\)4 & 48\(\pm\)0 & 58\(\pm\)6 & 54\(\pm\)6 & **64\(\pm\)7** & 42\(\pm\)8 & 61\(\pm\)7 \\ \hline RMSE (\({}^{\circ}\)C) & 31\(\pm\)2 & **26\(\pm\)1** & - & 36\(\pm\)9 & 32\(\pm\)3 & 38\(\pm\)5 & 28\(\pm\)3 & 62\(\pm\)8 & 34\(\pm\)4 \\ \hline \end{tabular}
\end{table}
Table 4: Performance of nominal and ordinal treatment of class labels, expressed as accuracy on held-out test data in % for classification and \({}^{\circ}\)C for regression. Best model performance in each row is shown in bold.
performed significantly better than the MLP on the regression task, the performance on regression was the worst of the three schemes for each model, making it somewhat irrelevant. Somewhat counterintuitively, slightly higher accuracy could be obtained by binning the output of the GB regressor (64%) which had a higher RMSE compared to the KR regressor (\(28\pm 3^{\circ}\)C versus \(26\pm 1^{\circ}\)C). This suggests that least-squares regression may be placing too much weight on outliers, which are less influential in the case of ordinal classification. It is even possible that the growth temperatures are not really ordinal after all, perhaps with 950\({}^{\circ}\)C representing a value close to optimal while 900\({}^{\circ}\)C and 1000\({}^{\circ}\)C could be a similar distance away from optimal.
The best-performing model across any type or scheme was the MLP NNrank ordinal classifier with an accuracy of 71%. With NNRank applied to the MLP and CNN, the average test accuracy of the MLP and CNN improved minimally, by +2% and +4%, respectively, over the nominal classification. This improvement is accounted for mainly in reduced classification errors of the \(1000^{\circ}\)C images, from 75% to 82% accuracy (Figure 7).
In an effort to explain the surprising trend observed in the ordinal treatment of the data, we obtained the first 2 principal components of the data using principal component analysis (PCA) [60]. The image classes are embedded in the 2 components shown in Figure 8. The figure
Figure 7: The average confusion matrix of the 3 classes of temperature (900, 950, and 1000\({}^{\circ}\)C) on the test set for the 9 different model architectures. This is based on ordinal classification, with regression used for the GBR and NNrank for the MLP and CNN.
shows overlap of all three classes and more significantly between neighboring classes, with very poor separation visible in the first two components. We visualize the micrographs in the PCA space in Figure 9, indicating variations in the domain size (PC1) and density (PC2). Because these features vary significantly even within the same temperature class (e.g., see Figure 8), the image feature vectors likely do not show consistent trends from 900\({}^{\circ}\)C to 950\({}^{\circ}\)C to 1000\({}^{\circ}\)C, leading to no advantage in the ordinal treatment of growth temperature.
### Model Explanations
Beyond the capacity of the ML models to isolate the morphological features associated with the different growth temperature of the thin film MoS\({}_{2}\) based on their AFM images, we want to understand what features of the images the models used in the classification. Class activation maps (CAM) of the different classes are therefore obtained following the implementation by Zhou, et al. [61]. The feature maps of the last convolutional layer are summed and then normalized by dividing by the maximum value to obtain a heatmap with the same
Figure 8: The first two principal components of the image features showing the temperature class distribution in the reduced dimensional representation from the principal component analysis. Significant overlap is observed among the different classes in the embedding space.
Figure 9: The first two principal components of the image features showing the sample images, in the reduced dimensional representation from the principal component analysis. The embedding shows that the first dimension (PC1) is associated with the domain size, while the second dimension (PC2) seems to indicate the domain density.
dimensions as the layer. The bright yellow spots on the class activation maps represent the regions with the highest activation, which the model used for the classification.
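The summation-and-normalization step described above can be sketched as follows; the hook on `layer4` assumes the torchvision ResNet18 layer naming, the helper name is ours, and the model/image below are stand-ins for the fine-tuned CNN and a preprocessed AFM crop.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def activation_heatmap(model, image):
    """Sum the last convolutional feature maps, normalize by the maximum,
    and upsample the heatmap to the input resolution."""
    model.eval()
    stored = {}
    handle = model.layer4.register_forward_hook(
        lambda module, inp, out: stored.update(maps=out.detach()))
    with torch.no_grad():
        model(image.unsqueeze(0))
    handle.remove()
    cam = stored["maps"].sum(dim=1, keepdim=True)   # (1, 1, 7, 7) for 224x224 input
    cam = cam / cam.max().clamp(min=1e-8)           # normalize so the peak is 1
    return F.interpolate(cam, size=image.shape[-2:],
                         mode="bilinear", align_corners=False)[0, 0]

model = models.resnet18(num_classes=3)
heatmap = activation_heatmap(model, torch.zeros(3, 224, 224))
print(heatmap.shape)   # torch.Size([224, 224])
```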
Additionally, we obtained the occlusion attribution, i.e., the probability of a class of image as a function of an occluder object,[62] using the implementation in the Captum library.[63] To achieve this, we iteratively set a patch of the image to zero-pixel values and then obtain the probability of the class. A stride size of \(5\times 5\) and a patch size of \(15\times 15\) were used. The probability is visualized as a 2D heat map. Both positive and negative attributions, indicating that the presence or absence of the area, respectively, increases the prediction score, are shown on the heat map. The occlusion attribution is applied to four sample images for each class, correctly predicted by the CNN model. Green regions on the image have positive attributions while red regions have negative attributions.
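With Captum, the occlusion pass looks roughly as follows; the model and image below are stand-ins for the fine-tuned CNN and a preprocessed AFM crop, and the target index is chosen arbitrarily for illustration.

```python
import torch
from torchvision import models
from captum.attr import Occlusion

# Stand-ins for the fine-tuned three-way CNN and one preprocessed AFM image.
model = models.resnet18(num_classes=3).eval()
image = torch.zeros(3, 224, 224)

occlusion = Occlusion(model)
attributions = occlusion.attribute(
    image.unsqueeze(0),                 # add the batch dimension
    target=2,                           # class index whose score is probed
    sliding_window_shapes=(3, 15, 15),  # 15x15 patch across all channels
    strides=(3, 5, 5),                  # 5x5 spatial stride, as in the text
    baselines=0,                        # occluded pixels set to zero
)
heatmap = attributions[0].sum(dim=0)    # signed 224x224 attribution map
```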
The CAM and occlusion attribution in Figure 10 show substantial agreement in identifying the activation region, with the latter giving more specific spatial attribution. The activation features are easier to perceive in images with bigger domain sizes, especially those grown at higher temperature. For some of the images from samples grown at higher temperature and which show clearly defined domains, some domain boundaries are highlighted, indicating the model's reliance on the boundaries in identifying such images. Also, regions with clean multi-step crystals are shown to be important for the model in the classification (Figure 10c), while messy crystals have an adverse effect on class attribution, as shown in the occlusion attribution.
From the experimental observations, the samples grown at higher temperatures are expected to exhibit greater domain sizes.[3, 4] However, in the data used to train our models, there is significant variation in the quality of the samples, such that most images grown at higher temperature do not necessarily have greater domain sizes (Figures 2 and 6). Additionally, if the model depends on domain size in identifying the images, it will be difficult to visually identify such features in images with less defined domains, and the only difference among the classes would be the magnitude of the same feature. This is unlike the natu
Figure 10: Class activation maps (CAM) and occlusion attribution showing different regions of the images the model used for the classification. (a), (b), and (c) are for samples of images grown at 900\({}^{\circ}\)C, 950\({}^{\circ}\)C and 1000\({}^{\circ}\)C, respectively. (i), (ii), and (iii) are the original AFM images, CAM, and AFM images overlaid with CAM, respectively.
ral images where the activation of different classes is typically associated with unique features of the classes that can be visually identified [56, 57, 58]. The models have therefore been shown to be capable of identifying image features even beyond human visual capability.
## 4 Conclusion
This study focuses on the development of ML models for the classification of AFM images of thin film MoS2 based on the growth temperatures of their samples. Many different strategies were explored for feature extraction, including use of features from different depths in a pretrained ResNet, different pretraining image domains, and the use of different transfer learning approaches including feature extraction, fine-tuning of convolutional filters followed by shallow learning, and end-to-end fine-tuning. Different augmentation strategies from the literature were evaluated to determine their effect on overall model performance. Beyond these pretraining schemes, nine different ML algorithms were evaluated to determine the most suitable approach for identifying morphological features associated with different growth temperatures.
The study also examined the impact of considering the ordinality of the classes on the accuracy of the models in identifying AFM images grown at different temperatures. We found that accounting for ordinality (i.e., by switching from classification to regression loss functions) improved the accuracy of some algorithms while decreasing performance for others. For instance, the best model overall was obtained using an NNrank ordinal classifier, but some nominal classifiers were nearly as accurate. Furthermore, some algorithms had equivalent accuracy regardless of whether the data was treated as nominal classes or ordinal. Thus, there seems to be no clear advantage to using least-squares regression here, despite the data appearing in the form of continuous, ordered growth temperatures, which is a counterintuitive result.
To address class imbalance, weighted random sampling and oversampling techniques were
employed, and robust ML models that generalize well to out-of-sample data were developed using model ensembles. The best-performing algorithms, MLP and end-to-end CNN, achieved classification accuracy of about 70% on held-out test data. The high accuracy obtained demonstrates the effectiveness of ML in accurately identifying thin films grown at different temperatures, despite the limitations of other inconsistent growth parameters and imbalances in the training data.
This study also sought to understand the features utilized by the ML models for classification by obtaining class activation maps and occlusion attribution. These strategies revealed that images from samples grown at higher temperatures, exhibiting well-defined domains, had the highest activation at the domain boundaries, aligning with experimental observations. Moreover, the models demonstrated the capability to identify latent features beyond human visual perception, accurately classifying images with varying domain sizes that would be challenging for human experts. Future work may explore the relationship between these image features and additional attributes of the samples; the robustness of these features across growth chambers, characterization instruments, and even repeatability over time may be interesting ways to utilize the quantitative capability of deep learning to unlock new insights into challenging materials synthesis problems.
This study is based upon research conducted at The Pennsylvania State University Two-Dimensional Crystal Consortium - Materials Innovation Platform (2DCC-MIP) which is supported by NSF cooperative agreement DMR-2039351.
## 2 Data Availability
The raw data required to reproduce these findings are available to download from [30]. The processed data required to reproduce these findings are available to download from [64]. |
2303.01387 | Leveraging Symbolic Algebra Systems to Simulate Contact Dynamics in
Rigid Body Systems | Collision detection plays a key role in the simulation of interacting rigid
bodies. However, owing to its computational complexity current methods
typically prioritize either maximizing processing speed or fidelity to
real-world behaviors. Fast real-time detection is achieved by simulating
collisions with simple geometric shapes whereas incorporating more realistic
geometries with multiple points of contact requires considerable computing
power which slows down collision detection. In this work, we present a new
approach to modeling and simulating collision-inclusive multibody dynamics by
leveraging computer algebra system (CAS). This approach offers flexibility in
modeling a diverse set of multibody systems applications ranging from human
biomechanics to space manipulators with docking interfaces, since the geometric
relationships between points and rigid bodies are handled in a generalizable
manner. We also analyze the performance of integrating this symbolic modeling
approach with collision detection formulated either as a traditional overlap
test or as a convex optimization problem. We compare these two collision
detection methods in different scenarios and collision resolution using a
penalty-based method to simulate dynamics. This work demonstrates an effective
simplification in solving collision dynamics problems using a symbolic
approach, especially for the algorithm based on convex optimization, which is
simpler to implement and, in complex collision scenarios, faster than the
overlap test. | Simone Asci, Angadh Nanjangud | 2023-03-02T16:15:14Z | http://arxiv.org/abs/2303.01387v1 | # Leveraging Symbolic Algebra Systems to Simulate Contact Dynamics in Rigid Body Systems
###### Abstract
Collision detection plays a key role in the simulation of interacting rigid bodies. However, owing to its computational complexity current methods typically prioritize either maximizing processing speed or fidelity to real-world behaviors. Fast real-time detection is achieved by simulating collisions with simple geometric shapes whereas incorporating more realistic geometries with multiple points of contact requires considerable computing power which slows down collision detection. In this work, we present a new approach to modeling and simulating collision-inclusive multibody dynamics by leveraging computer algebra system (CAS). This approach offers flexibility in modeling a diverse set of multibody systems applications ranging from human biomechanics to space manipulators with docking interfaces, since the geometric relationships between points and rigid bodies are handled in a generalizable manner. We also analyze the performance of integrating this symbolic modeling approach with collision detection formulated either as a traditional overlap test or as a convex optimization problem. We compare these two collision detection methods in different scenarios and collision resolution using a penalty-based method to simulate dynamics. This work demonstrates an effective simplification in solving collision dynamics problems using a symbolic approach, especially for the algorithm based on convex optimization, which is simpler to implement and, in complex collision scenarios, faster than the overlap test.
## I Introduction
Computer simulation of collision/contact dynamics is a germane research topic of engineering science, particularly within the mechanics [1] and robotics [2] communities. Current research on computational contact dynamics focuses on the underlying numerical optimisation routines and addresses handling multiple contacts for real-time simulation [3]. While important strides have been made here, this research is limited in its employment of predefined models (e.g., 6-DOF manipulators, humanoids, wheeled robots, quadrupeds) that do not generalize to other domains such as spacecraft dynamics and control [4]. In this paper, we present a new approach to simulating collision dynamics of multibody systems by integrating existing collision detection algorithms with computer symbolic modeling for rigid body dynamics [5]; compared with traditional approaches, the modeling capability facilitates the application of the algorithm to uncommon and complicated shapes, enabling accurate results in complex scenarios. Further, the symbolic approach is compatible with a variety of contact dynamics models [6], which are appropriately formulated to describe the system behavior through symbolic equations of motion. In our work, we exploit an elastic-plastic contact model, commonly used in space manipulator research [7, 8].
## II Symbolic Approach to Model Collision
### _Symbolic Simulation Framework_
The proposed symbolic modeling approach utilizes SymPy, a widely used computer algebra system (CAS) implemented in Python. We specifically make use of a submodule that derives the symbolic equations of motion (EoMs) of multibody systems [5]. The modular design of the framework consists of two parts: the first models the dynamic system and generates the EoMs in symbolic form by means of an automated routine; the second converts the symbolic EoMs into their numerical equivalent, which can be integrated over time to obtain the evolution of the system.
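The two-part workflow can be illustrated with `sympy.physics.mechanics` on a toy system; the planar pendulum below stands in for the multibody models handled by the framework and is not the framework's own code (the exact `kanes_equations` argument order may differ between SymPy versions).

```python
import sympy as sp
import sympy.physics.mechanics as me

# Symbolic part: derive the EoM of a planar pendulum with Kane's method.
q, u = me.dynamicsymbols("q u")
qd = me.dynamicsymbols("q", 1)
m, g, l = sp.symbols("m g l", positive=True)

N = me.ReferenceFrame("N")                 # inertial frame
A = N.orientnew("A", "Axis", [q, N.z])     # body-fixed frame
O = me.Point("O")
O.set_vel(N, 0)
P = O.locatenew("P", -l * A.y)             # pendulum bob
P.v2pt_theory(O, N, A)

bob = me.Particle("bob", P, m)
kane = me.KanesMethod(N, q_ind=[q], u_ind=[u], kd_eqs=[u - qd])
kane.kanes_equations([bob], [(P, -m * g * N.y)])

# Numerical part: turn the symbolic EoM into a fast callable for integration.
rhs = kane.mass_matrix_full.LUsolve(kane.forcing_full)
rhs_func = sp.lambdify((q, u, m, g, l), rhs, modules="numpy")
print(rhs_func(0.3, 0.0, 1.0, 9.81, 1.0))  # [qdot, udot] at this state
```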
### _Collision Detection_
Collision detection is defined as the procedure aimed at determining whether two or more objects are overlapping. Specifically, in the context of dynamics simulations, it detects when moving objects are in contact. It represents a computational geometry problem with applications in various fields, including computer graphics and video games. Popular algorithms for collision detection are those based on the Minkowski difference, like the Gilbert-Johnson-Keerthi (GJK) algorithm [9, 10]. Another category of collision detection algorithms is based on the Separating Axis Theorem (SAT) [11], applied in synergy with techniques to approximate the volume occupied by objects, like the Axis-Aligned Bounding Box [12]. Such algorithms are employed in several physics engines including Bullet [13], MuJoCo [14], and Box2D [15], where the trade-off between detection accuracy and computational effort has a major impact on the choice of the algorithm, since collision detection is mainly responsible for simulation slowdown [4]. Specifically, the SAT is generally preferred for simple applications, where accuracy plays a secondary role with respect to the availability of computing power. Besides the aforementioned traditional methods, examples of recently proposed methods are Dynamic Collision Checking, which is based on heuristic search [16], and Fastron, which leverages machine learning techniques [17].
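As a minimal illustration of the SAT-style overlap test (a 2D sketch written for this summary, not the paper's implementation), two convex polygons are declared disjoint as soon as a separating axis is found among their edge normals.

```python
import numpy as np

def sat_overlap(poly_a: np.ndarray, poly_b: np.ndarray) -> bool:
    """SAT test for two convex 2D polygons given as (N, 2) vertex arrays
    in order. Returns True if the polygons overlap."""
    for poly in (poly_a, poly_b):
        edges = np.roll(poly, -1, axis=0) - poly
        normals = np.stack([-edges[:, 1], edges[:, 0]], axis=1)
        for axis in normals:
            proj_a, proj_b = poly_a @ axis, poly_b @ axis
            if proj_a.max() < proj_b.min() or proj_b.max() < proj_a.min():
                return False   # separating axis found -> no collision
    return True                # no separating axis -> bodies overlap

square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
triangle = np.array([[0.5, 0.5], [2.0, 0.5], [0.5, 2.0]])
print(sat_overlap(square, triangle))        # True
print(sat_overlap(square, triangle + 5.0))  # False
```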
### _Collision Resolution_
Collision resolution is defined as the procedure that computes the dynamic behavior of two or more bodies in contact with each other. This generally involves resolving the magnitude and direction of the contact forces and resulting moments exerted on the interacting bodies in order to compute the accelerations, although impulse-based approaches compute only the resulting velocities of the bodies [18]. The computation of the collision force relates to dynamics and depends primarily on the materials and mechanical properties of the colliding bodies. Models proposed in the literature develop algebraic models of collision for a pair of objects by running a series of collision experiments and considering the relationship between the pre-impact and post-impact velocities of each body [19]. These methods can be classified into two categories: discrete and continuous models [20]. The discrete approach, also known as the impulse-momentum or complementarity approach, is based on the assumption that the poses of the bodies do not change significantly during short-duration contact; it models bodies as rigid and resolves contact forces using kinematic constraints [21]. Complementarity methods handle non-smooth events (e.g., collisions and contact interactions) by using impulsive dynamics, unlike continuous methods, which simulate body deformation during contact and are affected by issues related to small time step sizes [22]. On the other hand, applying the discrete approach to flexible and multibody systems is complicated. The continuous approach, also referred to as the force-based or penalty-based method, approximates the local deformation of the contacting bodies using the intersection between their respective geometries, which is then utilized to model the contact force accordingly; it has been widely applied in robotic contact problems, due to its suitability for handling complex geometries [23, 24]. It includes the bristle-friction model [25] and the elastic-plastic deformation model [8]. The bristle friction model is based on a linear approximation of the Coulomb friction model. The elastic-plastic model, which is utilized in this paper, calculates the interaction force as the sum of the components normal and tangent to the impact surface, whose magnitudes depend on the relative velocity of the bodies and on the amount of local deformation.
### _Collision response module architecture_
In a typical dynamics simulator, contact interactions are handled by a collision response module (illustrated in Fig. 1 as the blue dashed box) which takes as input the state of the bodies, i.e. their positions and velocities, and returns a list of forces and moments to update the EoMs for each time step. Its architecture is generally organized into a collision detection module and a collision resolver module; the former detects any collision between the bodies along with any other necessary information needed by the resolver. This generally includes a metric that describes the distance between the bodies, which is referred to as proximity (indicated in this paper with parameter \(\phi\), in SI units of \(m\)), and the minimum distance points (referred to as MDPs in the paper), i.e. the points where the distance between the two bodies is minimum and which therefore are potential future contact points. In case of no overlap, the collision check is terminated and the simulation continues without any update to the EoMs from the collision response module. In this case \(\phi\) still contains useful information that can be utilized in the future time steps of the simulation. In case a collision is detected, the collision detection module provides the required information to the resolver which determines all the data necessary for the simulator to resolve the outcome of the collision. Specifically, the interpenetration (indicated with parameter \(\rho\), in SI units of \(m\)) is the metric equivalent to proximity and indicates the depth of penetration between the bodies, while in this case the computed MDPs are actual contact points. MDPs are used to compute the minimum translation vector (MTV) [26], which is defined as the shortest distance along which objects should be moved away to no longer be in a collision state. The direction of the contact force is computed using the MTV, and its magnitude, which depends on the value of \(\rho\), is computed according to the specific contact method applied. Once the interacting forces and moments are determined, the resolver module returns them to the simulator which updates the states accordingly.
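The structure just described can be summarized in a short Python sketch; the callables detect, resolve, and dynamics_rhs below are hypothetical placeholders for the collision detection module, the resolver, and the numerical EoMs, and the explicit Euler step is used only for brevity.

```
import numpy as np

def collision_response(state, detect, resolve):
    """Sketch of the collision response module: 'detect' returns (phi, rho, mdps, mtv)
    for the current body states; 'resolve' maps the contact data to forces/moments."""
    phi, rho, mdps, mtv = detect(state)
    if rho == 0.0:                   # no overlap: no update to the EoMs,
        return []                    # but phi can still be reused at later steps
    return resolve(rho, mdps, mtv)   # list of (force, moment, body) tuples

def simulate(state0, dynamics_rhs, detect, resolve, dt, n_steps):
    """Fixed-step loop built around the collision response module (illustrative only)."""
    state = np.asarray(state0, dtype=float)
    for _ in range(n_steps):
        loads = collision_response(state, detect, resolve)
        state = state + dt * dynamics_rhs(state, loads)   # explicit Euler for brevity
    return state
```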
## III Collision Detection Methods
### _Separating Axis Theorem Based Collision Detection_
On a plane, the SAT states that two convex shapes do not intersect if and only if there exists a line onto which the projections of the objects do not overlap; this line is referred to as a separating axis (Fig. 2). The same concept extends to the 3D case with a separating plane. The number of lines to be tested depends on the shapes of the objects: two circles require only a single test on the line joining the centers, while for polygons the normal to each edge of both shapes is a candidate separating axis. Projecting a polygon onto a line requires performing the scalar product between each vertex position and the unit vector lying along
Fig. 1: Architecture of the collision dynamics simulator
the line, and storing the range defined by the minimum and maximum values; the same operation is repeated for the vertices of the second object. The presence of an overlap can then be determined by comparing the two ranges. The collision detection check therefore consists of a series of scalar products and can also provide information on the amount of interpenetration between the shapes, a useful value for performing collision resolution at a later stage. To conclude that the shapes intersect, all axes must be tested for overlap; this aspect is responsible for the algorithm's computational complexity and significantly impacts performance, especially when testing polygons with many edges or when there are many objects to test. However, as soon as an axis is found on which the projections do not overlap, the algorithm can exit immediately and conclude that the shapes are not in a collision state. It is therefore possible to speed up the process by appropriately choosing the first axis to test [11]: for example, the line joining the centroids of the shapes has the highest probability of revealing the absence of intersection. The symbolic approach is particularly advantageous for the SAT implementation because it can handle symbolically the many geometric entities and linear algebra operations required by this algorithm.
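A compact Python sketch of the test just described is shown below; the candidate axes (the edge normals of both polygons, as unit vectors) are assumed to be supplied by the caller, and the early exit on the first separating axis reflects the speed-up discussed above.

```
import numpy as np

def project(vertices, axis):
    """Project polygon vertices onto a (unit) axis and return the [min, max] range."""
    dots = np.asarray(vertices) @ np.asarray(axis)
    return dots.min(), dots.max()

def sat_overlap(poly_a, poly_b, axes):
    """Return (colliding, depth): depth is the smallest overlap over all candidate axes."""
    min_depth = np.inf
    for axis in axes:
        amin, amax = project(poly_a, axis)
        bmin, bmax = project(poly_b, axis)
        overlap = min(amax, bmax) - max(amin, bmin)
        if overlap < 0:          # separating axis found: exit immediately, no collision
            return False, 0.0
        min_depth = min(min_depth, overlap)
    return True, min_depth       # minimum overlap ~ interpenetration along the MTV
```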
### _Convex Optimization Based Collision Detection_
Collision detection can also be formulated as an inequality-constrained convex optimization (CO) program that performs a minimization over a continuous objective function in the form [27]:
\[\begin{split}\underset{\mathbf{x}}{\operatorname{minimize}}&\ \frac{1}{2}\mathbf{x}^{\text{T}}\mathbf{P}\mathbf{x}+\mathbf{c}^{\text{T}}\mathbf{x}\\ \operatorname{subject\ to}&\ \mathbf{G}\mathbf{x}\leq\mathbf{h}\end{split} \tag{1}\]
where \(\mathbf{x}\in\mathbb{R}^{n}\) is the independent variable, \(\mathbf{P}\in\mathbb{S}_{+}^{n\times n}\) and \(\mathbf{c}\in\mathbb{R}^{n}\) are the quadratic and linear cost terms, and the inequality constraint is described by \(\mathbf{G}\in\mathbb{R}^{l\times n}\) and \(\mathbf{h}\in\mathbb{R}^{l}\). The returned optimal value of the objective function indicates the proximity, which is positive when no collision is detected and zero if the objects overlap. The corresponding optimal values of the independent variables indicate the coordinates of the MDPs (points \(\hat{A}\) and \(\hat{B}\) in Fig. 3). The MDPs are constrained to lie on the objects' boundaries when there is no overlap, or at any point of the intersecting region in case of collision. In this second case, indeed, any pair of overlapping points located in the aforementioned region satisfies the constraints of belonging to both objects while having zero distance, which is an optimal value for the objective function. The MDPs indicate the location where the first contact occurred, thus providing the information needed to compute the direction of the collision force and the amount of interpenetration. Since, when the objects intersect, the convex optimization algorithm returns arbitrary values among a set of valid ones for the MDP coordinates, convex optimization-based collision detection is not directly compatible with penalty-based methods for collision resolution. Under certain circumstances, it is possible to modify the problem so as to impose the position of the MDPs on the boundaries and correctly compute the collision resolution using a method based on interpenetration. Alternatively, other methods can be used, such as those based on non-penetration constraints, which model the reaction force using Lagrange multipliers [21]. In this work, cvxpy [28] is utilized to solve the convex quadratic program, as it offers a variety of solvers based on commonly used methods, including interior-point and projected-gradient methods. We make use of the embedded conic solver (ECOS), an interior-point method for second-order cone programming.
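As an illustration of this formulation, the cvxpy sketch below computes the proximity and the MDPs of two convex shapes described as intersections of half-planes; the example shapes, the least-squares objective (a particular instance of (1)), and the use of the library's default solver are assumptions made here for the example.

```
import cvxpy as cp
import numpy as np

# Two convex shapes given as half-plane intersections A_i x <= b_i (unit boxes here)
A1 = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
b1 = np.array([1., 1., 1., 1.])                   # box centered at the origin
A2, t = A1.copy(), np.array([3., 0.])
b2 = b1 + A2 @ t                                  # same box translated by t

x = cp.Variable(2)                                # MDP on the first shape
y = cp.Variable(2)                                # MDP on the second shape
prob = cp.Problem(cp.Minimize(cp.sum_squares(x - y)),
                  [A1 @ x <= b1, A2 @ y <= b2])
prob.solve()                                      # e.g. an interior-point solver such as ECOS
phi = np.sqrt(max(prob.value, 0.0))               # proximity; zero when the shapes overlap
print(phi, x.value, y.value)                      # here phi is about 1.0, MDPs on facing edges
```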
## IV Collision Between Rectangle and Circle
This section describes the geometric setup of a rectangle and a circle in two-dimensional space (Fig. 4). To define the rectangle mathematically, let us consider a point \(A^{*}\) in the body-fixed reference frame \(\mathbf{A}=(\mathbf{\hat{a}_{1}},\mathbf{\hat{a}_{2}},\mathbf{\hat{a}_{3}})\) at a distance \(\mathbf{r_{A}}\in\mathbb{R}^{3}\) from \(I^{*}\), the origin of an inertial frame \(\mathbf{I}=(\mathbf{\hat{i}_{1}},\mathbf{\hat{i}_{2}},\mathbf{\hat{i}_{3}})\). Note that \(\mathbf{\hat{i}_{3}}\) and \(\mathbf{\hat{a}_{3}}\) are parallel and out-of-plane. Then any point \(y\in\mathbb{R}^{2}\) lying on the plane defined by \(\mathbf{\hat{a}_{1}},\mathbf{\hat{a}_{2}}\) is within the rectangle if:
\[\{y\mid|\mathbf{y}\cdot\mathbf{\hat{a}_{1}}|\leq C_{1}\wedge|\mathbf{y}\cdot\mathbf{\hat{a}_{2}}|\leq C_{2}\} \tag{2}\]
where \(\mathbf{y}\) is the vector from origin \(I^{*}\) to point \(y\), parameters \(C_{1}\) and \(C_{2}\) indicate rectangle dimensions, i.e. half-length and half-width, and \(\wedge\) is the logical conjunction operator.
Fig. 3: Two convex shapes defined as the intersection of a set of half-planes. The minimum distance points are highlighted in red.
Fig. 2: Two convex shapes with their respective projections on a separating axis
Then, consider a circle with center \(B^{*}\) and radius \(R\in\mathbb{R}\), located at a distance \(\mathbf{r_{B}}\in\mathbb{R}^{3}\) from the origin \(I^{*}\). Its body-fixed reference frame is defined as \(\mathbf{B}=(\mathbf{\hat{b}_{1}},\mathbf{\hat{b}_{2}},\mathbf{\hat{b}_{3}})\), with \(\mathbf{\hat{b}_{3}}\) parallel to \(\mathbf{\hat{i}_{3}}\). The circle is the set of points of the plane spanned by \(\mathbf{\hat{i}_{1}}\) and \(\mathbf{\hat{i}_{2}}\) such that any point \(x\in\mathbb{R}^{3}\) is within a distance \(R\) from the center, as expressed in (3):
\[\{x\mid\|\mathbf{x}-\mathbf{r_{B}}\|\leq R\wedge\mathbf{x}\cdot\mathbf{\hat{i}_ {3}}=0\} \tag{3}\]
The following equation shows transformations between frames:
\[\mathbf{A} =\mathbf{{}^{I}R^{A}}\cdot\mathbf{I} \tag{4}\] \[\mathbf{B} =\mathbf{{}^{I}R^{B}}\cdot\mathbf{I}\]
where \(\mathbf{{}^{I}R^{A}}\) and \(\mathbf{{}^{I}R^{B}}\) are the rotation matrices relating the inertial frame \(\mathbf{I}\) to frames \(\mathbf{A}\) and \(\mathbf{B}\), respectively. They are defined as follows:
\[\mathbf{{}^{I}R^{A}} =\begin{bmatrix}\cos\theta_{1}&\sin\theta_{1}&0\\ -\sin\theta_{1}&\cos\theta_{1}&0\\ 0&0&1\end{bmatrix} \tag{5}\] \[\mathbf{{}^{I}R^{B}} =\begin{bmatrix}\cos\theta_{2}&\sin\theta_{2}&0\\ -\sin\theta_{2}&\cos\theta_{2}&0\\ 0&0&1\end{bmatrix}\]
where \(\theta_{1}\) and \(\theta_{2}\) indicate the orientation angles. In the remainder of this section, the vectors are expressed in the frame \(\mathbf{A}\), therefore the position of circle center \(B^{*}\) is given by \(\mathbf{q}=\mathbf{{}^{I}R^{A}}(\mathbf{r_{B}}-\mathbf{r_{A}})\).
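A minimal numerical counterpart of (5) and of the change of frame for the circle center is sketched below; the function name and the sample values are illustrative only.

```
import numpy as np

def circle_center_in_rect_frame(theta1, r_A, r_B):
    """Express the circle center B* in the rectangle body frame A,
    i.e. q = R(theta1) (r_B - r_A), with the planar rotation of Eq. (5)."""
    R_IA = np.array([[ np.cos(theta1), np.sin(theta1), 0.0],
                     [-np.sin(theta1), np.cos(theta1), 0.0],
                     [ 0.0,            0.0,            1.0]])
    return R_IA @ (np.asarray(r_B) - np.asarray(r_A))

# Hypothetical numbers: rectangle rotated by 30 deg, circle center at (2, 1, 0)
q = circle_center_in_rect_frame(np.deg2rad(30.0), [0.0, 0.0, 0.0], [2.0, 1.0, 0.0])
```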
### _Separating Axis Theorem Based Collision Detection_
This procedure first determines the MDP of the rectangle, which is then used to compute the MDP of the circle and the proximity value \(\phi\). Let us introduce two auxiliary quantities \(\alpha\) and \(\beta\):
\[\alpha =|\mathbf{q}\cdot\mathbf{\hat{a}_{1}}|-C_{1} \tag{6}\] \[\beta =|\mathbf{q}\cdot\mathbf{\hat{a}_{2}}|-C_{2}\]
which indicate how far \(B^{*}\) is from each rectangle side and in which region of the space it is located; as shown in Fig. 5, point \(B^{*}\) can be located in:
* corner regions and relative vertices, in green
* upper and lower regions and relative edges, in blue
* left and right regions and relative edges, in red
* region inside the rectangle, in purple
The MDP of the rectangle is indicated with \(\tilde{A}\); it can be any point on its boundary, including a vertex, depending on the circle center location. Its coordinates \(x\) and \(y\) are defined by (7).
\[\begin{cases}[\text{sgn}(\mathbf{q}\cdot\mathbf{\hat{a}_{1}})C_{1},\ \text{sgn}(\mathbf{q}\cdot\mathbf{\hat{a}_{2}})C_{2}]\text{ for }(\alpha,\beta\geq 0)\\ [\mathbf{q}\cdot\mathbf{\hat{a}_{1}},\ \text{sgn}(\mathbf{q}\cdot\mathbf{\hat{a}_{2}})C_{2}]\text{ for }(\alpha<0\wedge\beta\geq 0)\\ [\text{sgn}(\mathbf{q}\cdot\mathbf{\hat{a}_{1}})C_{1},\ \mathbf{q}\cdot\mathbf{\hat{a}_{2}}]\text{ for }(\alpha\geq 0\wedge\beta<0)\\ [\text{sgn}(\mathbf{q}\cdot\mathbf{\hat{a}_{1}})C_{1},\ \mathbf{q}\cdot\mathbf{\hat{a}_{2}}]\text{ for }(\alpha,\beta<0\wedge\alpha>\beta)\\ [\mathbf{q}\cdot\mathbf{\hat{a}_{1}},\ \text{sgn}(\mathbf{q}\cdot\mathbf{\hat{a}_{2}})C_{2}]\text{ for }(\alpha,\beta<0\wedge\alpha<\beta)\\ [\text{sgn}(\mathbf{q}\cdot\mathbf{\hat{a}_{1}})C_{1},\ \text{sgn}(\mathbf{q}\cdot\mathbf{\hat{a}_{2}})C_{2}]\text{ for }(\alpha,\beta<0\wedge\alpha=\beta)\end{cases} \tag{7}\]
Thus its position vector \(\mathbf{\tilde{p}}\) is:
\[\mathbf{\tilde{p}}=x\mathbf{\hat{a}_{1}}+y\mathbf{\hat{a}_{2}} \tag{8}\]
Since the collision force is bound to point \(\tilde{A}\), its position with respect to the centroid of the rectangle is given by the vector \(\mathbf{p^{M}}\) (Fig. 4), which joins point \(A^{*}\) to point \(\tilde{A}\) and coincides with \(\mathbf{\tilde{p}}\). Consequently, the position of the MDP of the circle \(\tilde{B}\) with respect to \(\mathbf{A}\) frame can be directly computed as:
\[\mathbf{\tilde{q}}=\mathbf{q}+R\frac{\mathbf{\tilde{p}}-\mathbf{q}}{\|\mathbf{ \tilde{p}}-\mathbf{q}\|} \tag{9}\]
The position \(\mathbf{q^{M}}\) of the collision force applied to the circle with respect to its center \(B^{*}\) is:
\[\mathbf{q^{M}}=R\frac{\mathbf{\tilde{p}}-\mathbf{q}}{\|\mathbf{\tilde{p}}- \mathbf{q}\|} \tag{10}\]
and \(\phi\) is defined as:
\[\phi=\|\mathbf{\tilde{p}}-\mathbf{q}\|-R \tag{11}\]
Fig. 4: Geometrical description of a rectangle and a circle as defined by their centroids, orthogonal basis and parameters
Fig. 5: Rectangle regions
while \(\rho\) is given by (12).
\[\begin{cases}\rho=0&\text{for }\phi>0\\ \rho=|\phi|&\text{for }\phi\leq 0\end{cases} \tag{12}\]
The components \(n_{1},n_{2}\) of the normal to the surface can be calculated as follows:
\[\begin{cases}[\text{sgn}(\mathbf{q}\cdot\mathbf{\hat{a}_{1}})C_{1},\ \text{sgn}(\mathbf{q}\cdot\mathbf{\hat{a}_{2}})C_{2}]\ \text{for }(\alpha,\beta\geq 0)\\ [0,\ \text{sgn}(\mathbf{q}\cdot\mathbf{\hat{a}_{2}})]\ \text{for }(\alpha<0\wedge\beta\geq 0)\\ [\text{sgn}(\mathbf{q}\cdot\mathbf{\hat{a}_{1}}),\ 0]\ \text{for }(\alpha\geq 0\wedge\beta<0)\\ [\text{sgn}(\mathbf{q}\cdot\mathbf{\hat{a}_{1}}),\ 0]\ \text{for }(\alpha,\beta<0\wedge\alpha>\beta)\\ [0,\ \text{sgn}(\mathbf{q}\cdot\mathbf{\hat{a}_{2}})]\ \text{for }(\alpha,\beta<0\wedge\alpha<\beta)\\ [\text{sgn}(\mathbf{q}\cdot\mathbf{\hat{a}_{1}}),\ \text{sgn}(\mathbf{q}\cdot\mathbf{\hat{a}_{2}})] \ \text{for }(\alpha,\beta<0\wedge\alpha=\beta)\end{cases} \tag{13}\]
and finally the normal and tangent directions of the contact force are given by:
\[\begin{split}\mathbf{\hat{n}}=n_{1}\mathbf{\hat{a}_{1}}+n_{2} \mathbf{\hat{a}_{2}}\\ \mathbf{\hat{t}}=\mathbf{\hat{a}_{3}}\times\mathbf{\hat{n}}\end{split} \tag{14}\]
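The procedure of (6)-(12) can be condensed into a short Python routine, sketched below under the assumption that the circle center q is already expressed in the rectangle frame; the function name is hypothetical and degenerate configurations (e.g., the center lying exactly on the boundary) are not handled.

```
import numpy as np

def rect_circle_sat(q, C1, C2, R):
    """MDPs, proximity and interpenetration for a rectangle (half-sizes C1, C2,
    centered at the origin of its body frame) and a circle of radius R whose
    center q is expressed in the rectangle frame, following Eqs. (6)-(12)."""
    q = np.asarray(q, dtype=float)
    alpha = abs(q[0]) - C1                      # Eq. (6)
    beta = abs(q[1]) - C2
    sx, sy = np.sign(q[0]), np.sign(q[1])

    if alpha >= 0 and beta >= 0:                # corner region: closest vertex
        p = np.array([sx * C1, sy * C2])
    elif alpha < 0 <= beta:                     # upper/lower region
        p = np.array([q[0], sy * C2])
    elif beta < 0 <= alpha:                     # left/right region
        p = np.array([sx * C1, q[1]])
    elif alpha > beta:                          # inside, closer to a side edge
        p = np.array([sx * C1, q[1]])
    elif alpha < beta:                          # inside, closer to top/bottom edge
        p = np.array([q[0], sy * C2])
    else:                                       # inside, equidistant: vertex
        p = np.array([sx * C1, sy * C2])

    d = np.linalg.norm(p - q)
    phi = d - R                                 # Eq. (11)
    rho = 0.0 if phi > 0 else abs(phi)          # Eq. (12)
    q_tilde = q + R * (p - q) / d               # Eq. (9), MDP of the circle
    return p, q_tilde, phi, rho
```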
### _Convex Optimization Based Collision Detection_
Without loss of generality, all quantities are again expressed in the rectangle reference frame. The optimization problem consists of finding two points \(\tilde{A}\) and \(\tilde{B}\), belonging to the rectangle and circle respectively, that minimize the square of the distance between them. The algorithm correctly computes the proximity and the MDP coordinates only when the objects do not intersect: when they do, every point inside the overlap region is a candidate MDP, the objective function is always zero, and therefore the value of the interpenetration cannot be determined. However, a minor modification to the problem formulation makes it possible to calculate \(\rho\): a fictitious circle is introduced at the same location as the original one, but with a smaller radius \(R^{*}\), and the following convex program is solved for it:
\[\begin{split}\underset{\mathbf{\tilde{p}},\mathbf{\tilde{q}}^{* }}{\text{minimize}}&\left\|\mathbf{\tilde{p}}-\mathbf{\tilde{q}}^{* }\right\|^{2}\\ \text{subject to}&\mathbf{\tilde{p}}\cdot\mathbf{ \hat{a}_{1}}\leq C_{1},\\ -\mathbf{\tilde{p}}\cdot\mathbf{\hat{a}_{1}}&\leq C _{1},\\ \mathbf{\tilde{p}}\cdot\mathbf{\hat{a}_{2}}&\leq C _{2},\\ -\mathbf{\tilde{p}}\cdot\mathbf{\hat{a}_{2}}&\leq C _{2},\\ \left\|\mathbf{q}-\mathbf{\tilde{q}}^{*}\right\|^{2}& \leq(R^{*})^{2}\end{split} \tag{15}\]
where \(\mathbf{\tilde{q}}^{*}\) is the position of the MDP belonging to the fictitious circle. Indicating with \(b=R-R^{*}\) the difference between real and fictitious radii, the interpenetration can be recovered from the surrogate proximity \(\phi^{*}\) with (16)
\[\begin{cases}\rho=0&\text{for }\phi^{*}>b\\ \rho=b-\phi^{*}&\text{for }0<\phi^{*}\leq b\end{cases} \tag{16}\]
\(\phi^{*}\) remains positive even when the original objects are in a collision state, and it becomes zero only if the rectangle and the fictitious circle come into contact, in which case the interpenetration can no longer be calculated correctly. Vector \(\mathbf{p}^{\text{M}}\) coincides with \(\mathbf{\tilde{p}}\), which is returned by the optimization algorithm, while vectors \(\mathbf{\tilde{q}}\) and \(\mathbf{q}^{\text{M}}\) are given by (9) and (10), respectively. Finally, the normal and tangent to the rectangle surface are computed as:
\[\begin{split}\mathbf{\hat{n}}=\frac{\mathbf{\tilde{q}}^{*}-\mathbf{\tilde{p}}}{\left\|\mathbf{\tilde{q}}^{*}-\mathbf{\tilde{p}}\right\|}\\ \mathbf{\hat{t}}=\mathbf{\hat{a}_{3}}\times\mathbf{\hat{n}}\end{split} \tag{17}\]
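A possible cvxpy transcription of (15)-(16) is sketched below, again with all quantities expressed in the rectangle frame; the function name and the use of the library's default solver are assumptions of this example.

```
import cvxpy as cp
import numpy as np

def rect_circle_co(q, C1, C2, R, R_star):
    """Convex program of Eq. (15): MDPs between a rectangle (half-sizes C1, C2,
    frame-aligned, centered at the origin) and a fictitious circle of radius
    R_star centered at q; interpenetration recovered as in Eq. (16)."""
    q = np.asarray(q, dtype=float)
    p = cp.Variable(2)          # MDP on the rectangle
    qs = cp.Variable(2)         # MDP on the fictitious circle
    constraints = [p[0] <= C1, -p[0] <= C1,
                   p[1] <= C2, -p[1] <= C2,
                   cp.sum_squares(q - qs) <= R_star**2]
    prob = cp.Problem(cp.Minimize(cp.sum_squares(p - qs)), constraints)
    prob.solve()                # e.g. with an interior-point solver such as ECOS
    phi_star = np.sqrt(max(prob.value, 0.0))
    b = R - R_star
    rho = 0.0 if phi_star > b else b - phi_star
    return p.value, qs.value, rho
```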
## V Collision Resolution
In this study, collision resolution is implemented with a penalty-based method. For the sake of simplicity, the reaction force applied at the respective contact point is replaced by an equivalent force-moment system, with the force applied at the center of mass of the body. The required information provided by the collision detection algorithm is:
* the amount of interpenetration
* normal and tangent to the impact surface
* the vector joining each body center of mass to the contact point
The magnitude of each force component is computed based on the elastic-plastic approach [8], which describes the interaction between colliding bodies with a spring-damper model. Complete information about the material, geometry, and velocity of the bodies involved is assumed to be known. With reference to (18), the normal force \(F_{N}\) is composed of the elastic component, which depends on the interpenetration value, and the plastic component, which depends on the relative velocity along the normal direction. The tangent force \(F_{T}\) depends on the friction between the surfaces and on the relative velocity along the tangent direction.
\[\begin{split} F_{N}=(kc\cdot\rho^{3})\cdot(1-cc\cdot v_{N})\\ F_{T}=-mu\cdot F_{N}\cdot(\frac{2}{1+e^{\frac{-v_{T}}{vs}}}-1) \end{split} \tag{18}\]
where \(kc\) and \(cc\) are the contact stiffness parameter and the damping coefficient, respectively, \(mu\) is the friction coefficient, \(vs\) is a scaling factor, and \(v_{N}\) and \(v_{T}\) are the normal and tangential components of the relative velocity of the bodies. The computed force is applied, with opposite signs, to both bodies, while the moment is obtained through a cross product with the respective force position vector.
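For reference, (18) maps directly to a few lines of Python; the function below is a sketch with the same parameter names and is not part of any established library.

```
import numpy as np

def elastic_plastic_force(rho, v_n, v_t, kc, cc, mu, vs):
    """Normal and tangential force magnitudes of the elastic-plastic model, Eq. (18)."""
    F_N = (kc * rho**3) * (1.0 - cc * v_n)                      # elastic and plastic terms
    F_T = -mu * F_N * (2.0 / (1.0 + np.exp(-v_t / vs)) - 1.0)   # smoothed friction law
    return F_N, F_T
```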
## VI Results
Table I compares the performance of the symbolic approach and the traditional numeric one in simulating contact dynamics, applying the SAT-based method for collision detection and the penalty-based method for collision resolution. An additional comparison reports the performance of the symbolic approach integrated with CO-based collision detection. Five scenarios were examined by running ten simulations each and computing the average time, on a computer equipped with an 8-core Intel Core i7-9750H CPU @ 2.60GHz.
Regarding 2D scenarios, all methods achieve similar scores, but for the numeric one this comes at the cost of more modeling effort, which is not reflected in the table. Convex optimization has a clear advantage in the 3D case, where the SAT suffers from a significant increase in the number of tests to be run. Furthermore, the implementation simplicity of the CO algorithm scales to more complex cases.
## VII Conclusions
This paper presented a novel approach to implementing collision dynamics through the generation of symbolic equations of motion. The approach was tested in five scenarios, integrating it with the Separating Axis Theorem and convex optimization-based methods for collision detection of multibody systems, and with the elastic-plastic approach for collision resolution. We demonstrated that it significantly simplifies the modeling process while retaining the performance advantages of the standard numerical approach; this allows modelers to investigate contact dynamics using a generalizable approach. Further, a comparison between the two tested collision detection methods highlighted that, for 2D cases, the SAT is fast and suitable for applications with simple shapes and few contacts. On the other hand, the convex optimization approach is better able to handle complex shapes, despite being generally incompatible with penalty-based methods. For applications involving several contacts to check, this research indicates that convex optimization-based collision detection is faster and more accurate than algorithms based on the SAT.
|
2306.04421 | The Renoir Dataflow Platform: Efficient Data Processing without
Complexity | Today, data analysis drives the decision-making process in virtually every
human activity. This demands for software platforms that offer simple
programming abstractions to express data analysis tasks and that can execute
them in an efficient and scalable way. State-of-the-art solutions range from
low-level programming primitives, which give control to the developer about
communication and resource usage, but require significant effort to develop and
optimize new algorithms, to high-level platforms that hide most of the
complexities of parallel and distributed processing, but often at the cost of
reduced efficiency. To reconcile these requirements, we developed Renoir, a
novel distributed data processing platform written in Rust. Renoir provides a
high-level dataflow programming model as mainstream data processing systems. It
supports static and streaming data, it enables data transformations, grouping,
aggregation, iterative computations, and time-based analytics, incurring a
low overhead. In this paper, we present the programming
model and the implementation details of Renoir. We evaluate it under
heterogeneous workloads. We compare it with state-of-the-art solutions for data
analysis and high-performance computing, as well as alternative research
products, which offer different programming abstractions and implementation
strategies. Renoir programs are compact and easy to write: developers need not
care about low-level concerns such as resource usage, data serialization,
concurrency control, and communication. Renoir consistently presents comparable
or better performance than competing solutions, by a large margin in several
scenarios. We conclude that Renoir offers a good tradeoff between simplicity
and performance, allowing developers to easily express complex data analysis
tasks and achieve high performance and scalability. | Luca De Martini, Alessandro Margara, Gianpaolo Cugola, Marco Donadoni, Edoardo Morassutto | 2023-06-07T13:27:33Z | http://arxiv.org/abs/2306.04421v3 | # The Noir Dataflow Platform: Efficient Data Processing without Complexity
###### Abstract
Today, data analysis drives the decision-making process in virtually every human activity. This demands for software platforms that offer simple programming abstractions to express data analysis tasks and that can execute them in an efficient and scalable way. State-of-the-art solutions range from low-level programming primitives, which give control to the developer about communication and resource usage, but require significant effort to develop and optimize new algorithms, to high-level platforms that hide most of the complexities of parallel and distributed processing, but often at the cost of reduced efficiency.
To reconcile these requirements, we developed Noir, a novel distributed data processing platform written in Rust. Noir provides a high-level dataflow programming model like mainstream data processing systems. It supports static and streaming data, it enables data transformations, grouping, aggregation, iterative computations, and time-based analytics, incurring a low overhead. In this paper, we present the programming model and the implementation details of Noir. We evaluate it under heterogeneous workloads. We compare it with state-of-the-art solutions for data analysis and high-performance computing, as well as alternative research products, which offer different programming abstractions and implementation strategies. Noir programs are compact and easy to write: developers need not care about low-level concerns such as resource usage, data serialization, concurrency control, and communication. Noir consistently presents comparable or better performance than competing solutions, by a large margin in several scenarios.
We conclude that Noir offers a good tradeoff between simplicity and performance, allowing developers to easily express complex data analysis tasks and achieve high performance and scalability.
data analytics, distributed processing, batch processing, stream processing, dataflow model
## 1 Introduction
Today, companies heavily rely on data analytics to extract actionable knowledge from static and dynamic (streaming) datasets. Over the last decade, this need has driven the surge of several distributed platforms designed to process data at scale [1, 2, 3, 4, 5]. Despite their differences, they all build on the dataflow programming and execution model. Pioneered by the MapReduce framework [6], this model defines data analytics jobs as directed graphs of operators, each applying a functional transformation to its input data and feeding downstream operators with its output. This approach brings twofold benefits: (i) It enables a high degree of parallelism: different operators may execute simultaneously on the same or different hosts (task parallelism), and each operator may itself be decomposed into parallel instances, each one working independently on one partition of the input data (data parallelism). (ii) It exposes a simple programming interface that abstracts away most of the complexity associated to the distribution of data and processing: developers focus on the behavior of operators and how the input data is partitioned across parallel instances, while the runtime automates deployment, scheduling, synchronization, and communication. Some platforms also offer higher-level abstractions on top of the dataflow model for specific applicative domains such as relational data processing [7, 8, 9] or graph processing [10].
However, despite offering a simple and effective way to scale-out data analytics, state-of-the-art platforms cannot provide a level of performance that is comparable to custom programs optimized for the specific problem at hand. As recognized in recent literature [11, 12], custom implementations using low-level programming primitives, such as MPI, can yield more than one order of magnitude performance improvements. But this comes at a price: a much greater difficulty in software validation, debugging, and maintenance, as programmers are exposed to concerns related to memory management, data serialization, communication, and synchronization.
In this paper, we present Noir, a new distributed data processing platform that provides a lightweight and highly efficient implementation of the dataflow model. Noir aims to dramatically reduce the performance gap with custom programs while offering the same ease of use as mainstream dataflow platforms. In fact, a direct comparison with programs written for Flink [5] or Spark [2] shows nearly identical complexity.
Noir supports the analysis of static and dynamic data (batch and stream processing jobs), it offers a rich library of operators for data transformation, partitioning, and aggregation, and it enables iterative computations. At the same time, Noir delivers similar or even better performance than custom MPI implementations. In our experiments, we measured more than an order of magnitude higher throughput than Flink. Noir can also be used to parallelize computations within a single process, performing on a par with dedicated software libraries such as OpenMP. Noir has been used in practice to compete in the Grand Challenge at the 2022 ACM International Conference on Distributed and Event-Based Systems, winning the performance award for the solution with the highest throughput and lowest latency [13].
These results derive from some key design and implementation choices. Noir abandons JVM-based languages, typically adopted by mainstream competitors, in favor of Rust [14], a compiled programming language that offers high-level abstractions at virtually no cost, with a trait system that statically generates custom versions of each abstraction for different data types and avoids dynamic dispatching. Rust also provides a safe and inexpensive memory management mechanism, where the compiler infers when to free allocated memory without resorting to garbage collection. Noir adopts a lightweight approach to resource management, which leverages the services offered by the operating system as much as possible. For example, Noir co-locates operators that perform different steps of a processing job on the same host, letting them compete for CPU time based on their dynamic requirements, while it leverages the mechanisms embedded into TCP to implement backpressure.
Noir is available as an open-source project1. Our work may benefit researchers and practitioners who want to build on top of a highly efficient dataflow platform or study the internal implementation and the trade-offs that each design choice brings. To the best of our knowledge, no other system provides the same combination of simplicity and performance as Noir. As part of our evaluation, we also consider Timely Dataflow [15], a popular research product that also provides a Rust implementation: we show that Noir brings comparable or better performance and scalability, and more consistent results across a wide range of workloads, while offering a simpler programming model.
Footnote 1: [https://github.com/deib-polimi/noir](https://github.com/deib-polimi/noir)
In summary, the paper brings several contributions to the research on distributed data processing: (1) It introduces Noir, a new dataflow platform that combines the simplicity of mainstream data analytic systems with a level of efficiency that is close to or even better than custom low-level code. (2) It presents the key design and implementation choices that affect the efficiency and scalability of Noir. (3) It presents an extensive experimental evaluation that analyzes the effectiveness of Noir with highly heterogeneous workloads and compares it with mainstream dataflow platforms, low-level custom solutions, alternative research proposals, and libraries for parallel computations.
The paper is organized as follows. Section 2 provides background on distributed data processing platforms and Rust. Section 3 and Section 4 present the programming model and the design of Noir, and Section 5 evaluates its performance and scalability, comparing them with alternative data processing platforms and custom MPI programs. Section 6 discusses related work and Section 7 draws conclusive remarks.
## 2 Background
This section presents the programming model of distributed data processing platforms and the key features of Rust that Noir exploits to attain simplicity and efficiency.
### _Distributed data processing_
Modern platforms for distributed data processing rely on a dataflow programming model [4, 1] first introduced by Google's MapReduce [6]. Computation is organized into a directed graph, whose vertices represent operators and edges represent the flow of data from operator to operator. Since operators do not share any state, the model promotes distribution and parallelism by deploying operators in multiple instances, each processing an independent partition of the input data in parallel with the others, on the same or on different hosts.
The famous example used to illustrate the model is "word count", a program to count the number of occurrences of each word in a large set of documents. It can be expressed using two operators: the first operates in parallel on various partitions of the input documents, splitting them into words and emitting partial counts for each word. These partial results are then regrouped by word and passed to the second operator, which sums the occurrences of each word. Developers need only express how to operate on an individual document (first operator) and how to integrate the partial results for each word (second operator). The runtime takes care of operator deployment, synchronization, scheduling, and data communication: the most complex and critical aspects of distributed applications.
The dataflow model accommodates stream processing computations with only minor adjustments. Due to the unbounded nature of streams, developers need to specify when certain computations are triggered and what is their scope, which is typically expressed using _windows_. For instance, developers could implement a streaming word count computation over a window of one hour that advances every ten minutes, meaning that the count occurs every ten minutes and considers only new documents produced in the last hour.
Data processing systems implemented the dataflow model using two orthogonal execution strategies. Systems such as Hadoop [16] and Apache Spark [2] dynamically schedule operator instances over the nodes of the compute infrastructure. Communication between operators occurs by saving intermediate results on some shared storage, with operators deployed as close as possible to the input data they consume. Systems such as Apache Flink [5] and Google Dataflow [4] deploy all operators instances before starting the computation. Communication takes place as message passing among instances. Noir adopts the second strategy, which enables lower latency for streaming computations, as it does not incur the overhead of operator scheduling at runtime.
### _Rust_
Noir heavily relies on some key features of the Rust programming language to offer a high-level API with limited performance overhead.
_Generics and static dispatch._ In Rust, developers can express data structures and functions that are _generic_ over one or more types. For instance, all Noir operators consume and produce a generic Stream<T>, which represents a bounded or unbounded dataset of a generic type T. This high-level construct is implemented at virtually no cost by Rust, which adopts static dispatching. The compiler generates a separate version of each generic structure or function for each different way in which it is instantiated in the program, while invocations to generic functions are translated into direct calls to the correct version [17].
_Memory management._ Rust provides automatic and safe deallocation of memory without the overhead of garbage collection. It achieves this goal through an _ownership and borrowing_ model [18], which represents Rust's most distinctive feature. In Rust, every value has an _owning scope_ (for instance, a function), and passing or returning a value transfers its ownership to a new scope. When a scope ends, all its owned values are automatically destroyed. A scope can lend out a value to the functions it
calls: the Rust compiler checks that a lease does not outlive the borrowed object. Altogether, this model allows Rust to fully check the safety of memory accesses at compile time, also avoiding the need for (costly) runtime garbage collection.
_Iterators and closures._ The iterator pattern is heavily used in idiomatic Rust code and enables chaining operations over a collection of items without manually implementing the logic to traverse the collection. In practice, operations on collections are implemented as _iterator adapters_ that take in input an iterator and produce a new iterator. Moreover, iterator adapters are often defined as higher-order functions that accept _closures_ defining their behavior as parameters. This iterator pattern strongly resembles the dataflow model discussed above. For this reason, we used iterators as the blueprint for Noir's model and implementation, making its API intuitive both for Rust developers and for users of data processing platforms.
_Traits and serialization._ Traits represent a collection of functionalities (methods) that any data type implementing that trait should offer. Traits are widely used in Rust to bound generics, for instance to restrict the use of a generic function only to parameters that implement certain traits. Noir leverages traits to transparently implement parameter passing among distributed instances of operators. More specifically, Noir requires all data types to implement the Serialize and Deserialize traits, and automatically generates the code that efficiently implements these traits.
## 3 Programming Interface
Noir offers a high-level programming interface that hides most of the complexities related to data distribution, communication, serialization, and synchronization.
### _Streams_
Streams are the core programming abstraction of Noir. A generic Stream<T> represents a dataset of elements of type T, which can be any type that implements the Serialize and Deserialize traits. Since these traits can be automatically derived at compile time by the Serde library [19], developers can use their custom data types without manually implementing the serialization logic. Streams model both static (bounded) datasets (e.g., the content of a file) and dynamic (unbounded) datasets, where new elements get continuously appended (e.g., data received from a TCP link). Streams are created by _sources_, processed by _operators_ that produce output streams from input streams by applying functional transformations, and collected by _sinks_. Finally, streams can be partitioned, enabling those partitions to be processed in parallel.
### _Creating and consuming streams_
In Noir, a StreamEnvironment holds the system configuration and generates streams from sources. Noir comes with a library of sources. For instance, the following snippet uses the IteratorSource, which takes an iterator in input and builds a source that emits all the elements returned by the iterator. In the example, the iterator is 0..100, consequently the source will emit all integers in the range from 0 to 99.
```
let env = StreamEnvironment::new(config);
let source = IteratorSource::new(0..100);
let stream = env.stream(source);
```
Similarly, the ParallelIteratorSource builds a source consisting of multiple instances that emit elements in parallel. It takes in input a closure with two parameters: the total number of parallel instances (instances in the code snippet below) and a unique identifier for each instance (id). The closure is executed in parallel on every instance, getting a different id from 0 to instances-1. The closure must return an iterator for each instance, with the elements that the instance emits. In the example below, each source instance runs in parallel with the others and produces 10 integers, the first one starting from 0, the second one starting from 10, and so on.
```
let source = ParallelIteratorSource::new(move |id, instances| {
    let start = 10 * id;
    let end = 10 * (id + 1);
    start..end
});
```
Since iterators are widespread in Rust, the sources above have wide applicability. For example, iterators are used in the Apache Kafka API for Rust, making it straightforward for developers to build sources that read elements from Kafka topics.
Sinks consume output data from Noir and may be used to print the results or store them into files or external systems, such as a database. Noir provides three main sinks: for_each applies a function to each and every element in the stream, collect gathers all elements in a collection and returns it as output, collect_channel returns a multi-producer multi-consumer channel that can be used by external code to receive the outputs. For example, the following code snippet prints all elements in the stream:
```
stream.for_each(|i| println!("{}", i));
```
### _Transforming streams with operators_
Operators define functional transformations of streams. We first present _single stream_ operators, which operate on one stream and produce one stream. We distinguish stateless and stateful single stream operators and we discuss how they are executed in parallel over partitions of the input stream. Next, we generalize to _multiple stream_ operators that process data coming from multiple streams or produce multiple streams.
#### 3.3.1 Single-stream operators
Single stream operators ingest a single stream to produce a new stream. Examples of single stream operators are map, flat_map, filter and fold. A map operator transforms each element of the input stream into one element of the output stream, as specified in a user-defined closure. For instance, the following code snippet transforms a stream of integers doubling each element to produce the output stream.
```
stream.map(|i| i * 2);
```
A flat_map operator may produce zero, one, or more elements in the output stream for each element in the input stream. For instance, for each integer i in the input stream, the following code outputs three integers: i, i multiplied by 2, and i multiplied by 3. The developer packs the output elements produced when processing an input element into a vector and Noir automatically "flattens" these results into the output stream.
stream.flat_map(|i: u32| vec![i, i * 2, i * 3]);
A filter operator takes a predicate and retains only the input elements that satisfy it. For instance, the code snippet below retains only the even numbers from the input stream.
stream.filter(|i: u32| i % 2 == 0);
A fold operator combines all elements into an _accumulator_ by applying a provided closure to each element in the stream. The closure takes as parameters a mutable reference to the accumulator and an input element, and updates the value of the accumulator using the input. For instance, the example below sums all input elements: the initial value of the accumulator is 0 and each element i is added to the accumulator sum.
stream.fold(0, |sum: &mut u32, i: u32| *sum += i);
The reduce operator provides a more compact way to express the same kind of computations as fold when the elements in the input and in the output streams are of the same types, as exemplified by the code below, which uses a reduce operator to sum the elements of a stream.
stream.reduce(|sum, i| sum + i);
Noir also supports stateful operators, which can access and modify some state during processing. This way, developers may implement computations where the evaluation of an element depends on the state of the system after processing all previous elements in the input stream. As an example, let us consider the rich_map operator, which is the stateful version of a map operator. The evaluation of an element depends on the input element and on the state of the operator. The listing below adopts a rich_map to output the difference between the current element and the previous one. The value of the previous element is stored inside the prev variable, which is initialized to 0 and moved inside the closure. When processing a new element x, the closure computes the difference (diff), updates the state (prev) with the new element, and finally outputs the difference.
stream.rich_map({
    let mut prev = 0;
    move |x: u32| {
        let diff = x - prev;
        prev = x;
        diff
    }
});
#### 3.3.2 Parallelism and partitioning
In Noir streams usually consist of multiple partitions. For instance, the ParallelIteratorSource builds a partitioned stream, where each partition holds the data produced by a different instance of the iterator.
Partitioning is key to improve performance, as it enables multiple instances of the same operator to work in parallel, each on a different partition. For instance, the map example described above processes data in parallel, using as many instances of the map operator as the number of partitions in the input stream.
This form of parallel execution is not possible in the presence of operators like fold and reduce, which intrinsically need to operate on the entire set of elements in the stream. To overcome this potential bottleneck, in the presence of associative operations, Noir provides an optimized, associative version of the fold and reduce operators, which splits the computation into two stages. First, the operation is performed on each partition, producing a set of intermediate results; then these partial results are combined to produce the final results. The example below shows the associative version of the summing job introduced in the previous section:
stream.reduce_assoc(|sum, i| sum + i);
In some cases, it may be necessary to control the way in which stream elements are associated to partitions. For instance, given a stream of sensor readings, if we want to count the number of readings _for each sensor_, we need to ensure that all the readings of a given sensor are always processed by the same operator instance, which computes and stores the count. To support these scenarios, Noir allows developers to explicitly control stream partitioning with the group_by operator. It takes in input a closure that computes a _key_ for each element in the stream and repartitions the stream to guarantee that all elements having the same key will be in the same partition. This allows performing stateful operations with the guarantee that the instance responsible for a given key will receive all elements with that key.
Keys can be of any type that implements the Hash and Eq traits. As an example, the code below organizes an input stream of integers in two partitions, even and odd, by associating each element with a key that is 0 for even numbers and 1 for odd numbers. Then, it sums all elements in each partition. The result will be a stream with two partitions, each one made of a single element, representing the sum of all even (respectively, odd) numbers in the original stream.
stream.group_by(|i| i % 2).reduce(|sum, i| sum + i);
Since the summing operation is associative, Noir allows obtaining the same result in a more efficient way, using an optimized operator that combines group_by and reduce in a more parallel, associative way. In particular, the following code:
stream.group_by_reduce(|i| i % 2, |sum, i| *sum += i);
creates a separate, keyed (sub)partitioning for each original partition of the input stream, applies the summing closure to each one of those (sub)partitions, producing a set of intermediate results (one for each key and each original partition of the input stream), then combines these partial results by key, producing the final stream composed of just two partitions, each one made of a single element: the sum of all even (respectively, odd) numbers in the original stream.
When the key partitioning is not needed anymore (or when we want to re-partition a non-partitioned stream) the shuffle operator evenly redistributes input elements across a number of partitions decided by a configuration parameter or by a previous invocation of the max_parallelism operator.
#### 3.3.3 Multi-stream operators
Noir manages the definition of multiple streams within the same environment through the split, zip, merge, and join operators.
The split operator creates multiple copies of the same stream: each copy is independent of the others and may undergo
a different sequence of transformations. In the code below, s1 and s2 are two copies of stream.
The zip operator combines two streams, associating each element of the first stream with one of the second and producing a stream of pairs (tuples with two elements). Elements are paired in order of arrival. In the code below, after traversing independent transformations (not shown) s1 and s2 are combined together using zip.
```
let mut splits = stream.split(2);
let s1 = splits.pop().unwrap();
let s2 = splits.pop().unwrap();
// ...
let s3 = s1.zip(s2);
```
Likewise, the merge operator applies to streams that transport the same type of elements to produce a new stream that outputs elements as they arrive in the input streams.
The join operator matches elements of a stream to those of another stream based on their value. It does so by using a closure to extract a key from the elements of the first and the second stream and matching elements with the same key. For instance, the listing below joins a stream of users and a stream of purchases (both made of pairs with user id as first entry and user or purchase information as second) using the user id as the joining key for both streams. Noir supports inner, outer and left joins and different joining algorithms.
```
s1.join(s2,
    |(u_id, user_info)| *u_id,      // u_id as key
    |(u_id, purchase_info)| *u_id,  // u_id as key
).map(|((u_id, user_info), (_, purchase_info))|
    (u_id, user_info, purchase_info));
```
### _Windows and time_
Windows identify finite portions of unbounded datasets [20]. As common in stream processing systems [21], Noir defines windows with two parameters: _size_ determines how many elements they include and _slide_ determines how frequently they are evaluated. Noir offers both _count_ windows, where size and slide are expressed in terms of number of elements, and _time_ windows, where size and slide are expressed in terms of time. After defining the windowing on data, the next operator will apply its logic to each window to produce elements that will be sent along the stream. For instance, the code below uses a count window to compute, every 2 elements, the sum of the last 5 elements received.
```
stream.window_all(CountWindow::sliding(5, 2)).sum();
```
Windowing can also be performed on a stream that has been partitioned by key using a group_by operator; in this case, the windowing logic is applied independently for each partition. For instance, the code below applies one window to even numbers and one to odd numbers. The example uses time windows that are evaluated every 20 ms over the elements received in the last 100 ms.
```
let window_def = ProcessingTimeWindow::sliding(
    Duration::from_millis(100),
    Duration::from_millis(20));
stream.group_by(|v| v % 2)
    .window(window_def)
    .max(); // Compute the max of the window
```
When dealing with time windows, Noir supports two definitions of time: _processing_ and _event_ time. Processing time is the wall clock time of the machine computing the window. For instance, when executing the code snippet above, the process responsible for the partition of even numbers computes the sum of elements received in the last 100 ms according to the clock of the machine hosting that process. However, many scenarios need to decouple application time from execution time [4] to guarantee consistent results even in the case of delays or when processing historical data. To handle these cases, Noir supports event time semantics. First, a timestamp is associated to the elements using the add_timestamps operator. This is typically done at the source. Then, the window can be defined using the EventTimeWindow.
Noir also supports _transaction windows_, whose opening and closing logic is based on the content of elements actually received through the stream. With this kind of windows, the user specifies a closure that determines when the current window should be opened and closed. For instance, the following code snippet defines a windowing logic that closes a window (and opens a new one) upon receiving an element greater than 100. The windowing logic seamlessly integrates with other operators like group_by and sum.
```
let window_def = TransactionWindow::new(|v|
    if v > 100 { TxCommand::Commit } else { TxCommand::None });
stream.group_by(|v| v % 2)
    .window(window_def)
    .sum(); // Compute the sum of the window
```
The examples presented above exploited pre-defined operators on windows, such as sum and max. To implement custom operators, Noir provides two approaches. The first approach exposes an accumulator interface, such that the result of a computation over a window can be calculated incrementally as new elements enter the window one by one. The second approach exposes the entire content of the window when it closes, for those operators that cannot be implemented incrementally.
### _Iterations_
Several algorithms for data analytics are iterative in nature. For instance, many machine learning algorithms iteratively refine a solution until certain quality criteria are met. Noir supports iterative computations with two operators.
The iterate operator repeats a chain of operators until a terminating condition is met or a maximum number of iterations is reached. In the first iteration, the chain consumes elements from the input stream, while at each subsequent iteration, the chain operates on the results of the previous iteration. It holds a state variable that is updated at each iteration using a local (per partition) and a global folding logic, specified via closures. In the end, the operator returns two streams: one with the final value of the state variable, the other with the elements exiting the last iteration. For instance, the following code snippet repeats the map operator, that multiplies all elements by 2 at each iteration and computes their sum in the state variable. The iteration terminates when either 100 iterations have been executed or the sum is greater than 1000.
let (state, items) = s.iterate(
    100,                                   // max iterations
    0,                                     // initial state
    |s, state| s.map(|n| n * 2),           // body
    |l_state: &mut i32, n| *l_state += n,  // local fold (sum)
    |state, l_state| *state += l_state,    // global fold (sum)
    |state| state < 1000,                  // terminating condition
);
The replay operator takes the same parameters, but instead of feeding the output of the current iteration as input of the next one, it replays the input stream until the termination condition is reached and returns the final value of the state variable.
## 4 Design and Implementation
Noir is implemented as a Rust framework that offers the API discussed in Section 3. It is designed to scale horizontally by exploiting the resources of different machines, which we denote as _hosts_. Each host runs a _worker_ process and each worker adopts multiple threads to run the computation in parallel.
To run a data processing job on a set of _hosts_, developers: (i) write and compile a Rust _driver program_, which defines the job using Noir API; (ii) provide a configuration file that specifies the list of hosts and the computational resources (number of CPU cores) available; (iii) run the driver program, which starts the computation on the hosts. The driver program reads the configuration file and uses ssh/scp to: (i) connect to the hosts; (ii) send them the program executable, if needed; (iii) spawn one worker process per host. Workers connect to each other and coordinate to collectively execute the job. This workflow is inspired by MPI, the standard for compute-intensive tasks [22].
### _Job translation and deployment_
To illustrate the process of translating a job into executable tasks and deploying them onto threads, we use the classic word count example, which counts the occurrences of each word in a large document. The following code snippet shows its implementation in Noir2.
Footnote 2: The proposed implementation works best for illustrating the translation process. More compact and efficient implementations are possible and will be used as part of our evaluation of Noir in Section 5.
```
let result = env.stream_file(file_path)
    .flat_map(|line| split_words(line))
    .group_by(|word: &String| word.clone())
    .map(|_| 1)
    .reduce(|count, w| *count += w)
    .collect_vec();
```
The stream_file helper method creates a parallel source that reads a file and produces a stream of text lines; flat_map extracts the words from each line (this is done by the split_words closure passed to the flat_map operator); group_by groups identical words together (the key is the entire word); map transforms each word into number 1; reduce sums up all numbers per partition (i.e., per word); finally, collect_vec gathers the final results (that is, the counts for each word) into a vector.
The translation and deployment process takes place when the driver program is executed and works in four steps, as illustrated in Fig. 1: (i) the job is analyzed to extract its _logical plan_; (ii) the logical plan is organized into _stages_ of computation; (iii) each stage is instantiated as one or more _tasks_; (iv) tasks are deployed onto hosts as independent threads.
_Logical plan._ The logical plan is a graph representation of the job, where vertices are operators and edges are flows of data between operators. It contains the logical operators that transform the input streams generated by sources into the output streams consumed by sinks. In most cases, there is a one-to-one mapping between the operators defined in the driver program using Noir API and the vertices in the logical plan, as in the word count example presented in Fig. 1. However, some high-level operators part of Noir API translate into multiple operators in the logical plan. For instance, a group_by_reduce translates to three operators: a key_by that organizes data by key, a local reduce performed within each partition, and a global reduce that combines all partial results for each key together.
_Stages._ Operators in the logical plan are combined into _stages_: each stage consists of contiguous operators that do not change the partitioning of data. For instance, the word count example in Fig. 1 contains three stages: the first one (S0) starts at the source and terminates when the group_by operator repartitions data by word; the second one (S1) performs the mapping and reduction in parallel for each word; the third one (S2) brings the results of all partitions together.
Since stages represent the minimal unit of deployment and execution in Noir, by combining operators into stages we avoid inter-task communication when it is not strictly necessary, that is, when the partitioning of data does not change when moving from the upstream operator to the downstream operator. In this case, passing of data from one operator to the next one takes place within the same stage as a normal function invocation. Additionally, this choice results in the code of stages being a monomorphized version of the Job code provided by the programmer, allowing for inlining and other compiler optimizations that involve the code of multiple operators.
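The sketch below illustrates the general idea behind this design (it is not Noir's internal code): when contiguous operators are fused into a stage, each operator pulls elements from its upstream operator through a plain, statically dispatched method call that the compiler can monomorphize and inline.
```
// Generic sketch of operator fusion inside a stage (not Noir's internals).
trait Operator {
    type Out;
    fn next(&mut self) -> Option<Self::Out>;
}

// A source operator backed by an in-memory vector.
struct FromVec<T> { items: std::vec::IntoIter<T> }
impl<T> Operator for FromVec<T> {
    type Out = T;
    fn next(&mut self) -> Option<T> { self.items.next() }
}

// A map operator that wraps its upstream operator: passing data downstream is
// just a nested function call, which monomorphization can fully inline.
struct Map<P, F> { prev: P, f: F }
impl<P: Operator, F: FnMut(P::Out) -> O, O> Operator for Map<P, F> {
    type Out = O;
    fn next(&mut self) -> Option<O> { self.prev.next().map(&mut self.f) }
}

fn main() {
    let source = FromVec { items: vec![1, 2, 3].into_iter() };
    let mut stage = Map { prev: source, f: |x: i32| x * 2 };
    while let Some(v) = stage.next() { println!("{v}"); }
}
```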
_Execution plan and task deployment._ As anticipated above, each stage is instantiated multiple times into units of execution that we denote as _tasks_, which are deployed as independent threads within the workers running on hosts. For instance, Fig. 1 shows an execution plan deployed onto two hosts, where Host 0 runs two tasks for each stage (for instance, tasks T0.0 and T0.1 for stage S0) and Host 1 runs one task for each stage (for instance, task T0.2 for stage S0).
Knowing the number of tasks per stage to allocate to each host, each worker can autonomously determine, in a deterministic way,
Fig. 1: Deployment of the word count example.
the set of tasks it is responsible for and how they should be connected to build the execution plan. This removes the need for coordination and synchronization at initialization time: each worker can start working independently from the others and the connections between different hosts can happen asynchronously.
### _Use of resources_
By default, Noir instantiates one task for each stage for each CPU core. Consequently, a host with \(n\) cores will execute \(n\) tasks for each stage. For instance, in Fig. 1, Host 0 has 2 CPU cores and is assigned 2 tasks for each stage, and Host 1 has 1 CPU core and is assigned a single task for each stage. While this default behavior can be changed by setting the number of tasks to be instantiated for each stage at each host in the configuration file, our experiments show that this default strategy most often yields the best performance. Indeed, by instantiating tasks as kernel threads, Noir delegates task scheduling to the operating system, and the default strategy leaves a high degree of flexibility to the scheduler, which may adapt task interleaving to the heterogeneous demands of the different processing stages and of the different tasks within the same stage. If a stage is particularly resource demanding, its tasks may obtain all CPU cores for a large fraction of execution time; likewise, if data is not evenly distributed, tasks associated to larger partitions can get a larger fraction of execution time.
The downside of this approach is that it overcommits resources by spawning multiple threads for each CPU core, which increases the frequency of context switching. Our empirical evaluation shows that this is not a problem, especially considering that the overall design of Noir contributes to alleviate this cost: for instance, as we explain in Section 4.3.1, tasks exchange batches of data instead of individual elements, thus ensuring that a task acquires CPU resources only when it has enough work to justify the context switch.
Other stream processing platforms such as Flink and Kafka Streams also choose to allocate a similar number of threads, but they use JVM threads, with the associated overhead introduced by the JVM architectural layer.
### _Communication and coordination_
In Noir, tasks communicate in one of two ways: in-memory channels or TCP sockets. Tasks running on the same host exploit shared memory through multiple-producer single-consumer (MPSC) channels3 to achieve fast communication avoiding serialization. Tasks that run on different hosts use TCP channels.
Footnote 3: The current Noir implementation adopts Flume channels by default: [https://github.com/zesterer/flume](https://github.com/zesterer/flume).
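As a minimal illustration of same-host communication, the sketch below uses the flume crate mentioned in the footnote: several producer tasks share one MPSC channel towards a single consumer, and batches are passed by ownership without any serialization. It is a simplified stand-in for Noir's actual channel setup, not its implementation.
```
// Requires the `flume` crate as a dependency.
use std::thread;

fn main() {
    // Bounded channel: a full queue makes senders wait, which also provides a
    // simple form of local flow control.
    let (tx, rx) = flume::bounded::<Vec<u64>>(16);

    // Multiple producer tasks sharing the same channel (MPSC).
    let producers: Vec<_> = (0..2u64)
        .map(|id| {
            let tx = tx.clone();
            thread::spawn(move || {
                for i in 0..4u64 {
                    tx.send(vec![id, i]).expect("receiver dropped");
                }
            })
        })
        .collect();
    drop(tx); // drop the original sender so the channel closes when producers finish

    // Single consumer task: drains batches until all senders are gone.
    for batch in rx.iter() {
        println!("received batch {:?}", batch);
    }
    for p in producers {
        p.join().unwrap();
    }
}
```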
#### 4.3.1 Batching
To reduce the overhead of inter-task communication (both within the same host and across hosts), Noir supports batching of data elements. With batching, subsequent elements that need to be delivered to the same recipient task are grouped in a batch. Noir supports two batching policies: _fixed_ and _adaptive_. With fixed batching, a batch is sent when it reaches the exact size that was specified. This policy guarantees that a fixed number of elements are delivered together over a channel, but may increase latency, as it needs to wait to complete a batch before sending it to the recipient. Adaptive batching also limits latency by sending a batch if a maximum timeout expires after the last batch was sent, regardless of the number of elements in the batch. Developers can configure the system with different batch sizes and different timeouts for adaptive batching, depending on their needs, potentially choosing different batching policies for different parts of the job graph.
As anticipated, the use of batching is also beneficial for task scheduling. Indeed, when a task is scheduled for execution, it is guaranteed to have a minimum number of elements ready, which reduces the frequency of context switching.
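A simplified sketch of the adaptive policy follows (fixed batching corresponds to dropping the timeout check); it is illustrative and not Noir's actual implementation.
```
use std::time::{Duration, Instant};

// Buffers elements and emits a batch when it reaches the configured size or
// when the timeout since the last flush expires.
struct AdaptiveBatcher<T> {
    buffer: Vec<T>,
    max_size: usize,
    max_delay: Duration,
    last_flush: Instant,
}

impl<T> AdaptiveBatcher<T> {
    fn new(max_size: usize, max_delay: Duration) -> Self {
        Self { buffer: Vec::with_capacity(max_size), max_size, max_delay, last_flush: Instant::now() }
    }

    /// Buffer one element; return a complete batch if one is ready to be sent.
    fn push(&mut self, element: T) -> Option<Vec<T>> {
        self.buffer.push(element);
        if self.buffer.len() >= self.max_size || self.last_flush.elapsed() >= self.max_delay {
            self.last_flush = Instant::now();
            Some(std::mem::take(&mut self.buffer))
        } else {
            None
        }
    }
}

fn main() {
    let mut batcher = AdaptiveBatcher::new(3, Duration::from_millis(100));
    for i in 0..7 {
        if let Some(batch) = batcher.push(i) {
            println!("sending batch {:?}", batch);
        }
    }
}
```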
#### 4.3.2 Flow control
In designing inter-task communication, we adopted an approach that is similar to what we presented for task scheduling and resource allocation: we delegate as much as possible to the operating system without replicating its functionalities within our framework. In particular, we delegate flow control to the underlying TCP implementation: if the receiving task cannot sustain the rate of data coming from the sending task, the sender will be automatically suspended until the receiver has processed previous data. This design differs from the typical approach of alternative dataflow engines, which implement flow control within the framework. For instance, Apache Flink adopts a mechanism denoted as back pressure4.
Footnote 4: [https://nightlies.apache.org/flink/flink-docs-stable/docs/ops/monitoring/back_pressure/](https://nightlies.apache.org/flink/flink-docs-stable/docs/ops/monitoring/back_pressure/)
To limit the number of TCP connections, Noir forces all tasks running on a host and belonging to the same stage to share the same TCP channel to communicate with tasks running on a different host and part of the downstream stage. For instance, Fig. 2 shows the communication channels for the word count example in Fig. 1. Host 0 has two tasks for stage 0: T0.0 and T0.1. They communicate with the tasks for stage 1 deployed on the same host (T1.0 and T1.1) using in memory channels (black arrows), while they share the same TCP connection (large gray arrows) to communicate with task T1.2, deployed on Host 1. Likewise, T0.2 communicates with T1.2 using an in-memory channel and with T1.0 and T1.1 over a single TCP connection. Elements M and D in Fig. 2 represent multiplexer and demultiplexer components that allow multiple tasks to share the same TCP channel for communication.
Fig. 3 provides further details on this mechanism by focusing on the architectural components involved in the communication. As anticipated above, each Noir worker includes a demultiplexer for each stage: this component receives batches of input elements for that stage from a TCP channel. Each batch is annotated with the sending task and the destination task (s and d in Fig. 3). The demultiplexer exploits this information to dispatch batches to recipient tasks. Each demultiplexer
Fig. 2: Communication between tasks in the word count example.
runs on a separate thread. Next, each task (as exemplified in the middle block in Fig. 3) receives incoming batches in a task queue, processes them according to the specific logic of the task, and delivers them to a multiplexer component (one per host and stage). Each multiplexer runs on a separate thread. It stores incoming batches in a mux queue, serializes them using the binary serialization format bincode [23], and forwards them to remote tasks of the next stage using a TCP channel.
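To make the mechanism concrete, the sketch below encodes a batch annotated with its sending and destination tasks using serde and the bincode 1.x API; the struct and field names are illustrative and do not necessarily match Noir's actual wire format.
```
// Requires the `serde` (with the derive feature) and `bincode` 1.x crates.
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug, PartialEq)]
struct NetworkBatch<T> {
    sender: u32,      // id of the sending task (s in Fig. 3)
    destination: u32, // id of the destination task (d in Fig. 3)
    elements: Vec<T>,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let batch = NetworkBatch { sender: 0, destination: 2, elements: vec![1u64, 2, 3] };

    // The multiplexer serializes the batch before writing it to the socket...
    let bytes: Vec<u8> = bincode::serialize(&batch)?;
    // ...and the demultiplexer on the remote host decodes it and dispatches it
    // to the destination task.
    let decoded: NetworkBatch<u64> = bincode::deserialize(&bytes)?;
    assert_eq!(batch, decoded);
    Ok(())
}
```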
#### 4.3.3 Timestamped streams and watermarks
When using event time, sources associate a _timestamp_ metadata to each element in the stream they generate, and the runtime needs to guarantee timestamp order during processing. However, as the tasks of a stage evolve in parallel, they may not guarantee order. Noir solves this problem with _watermarks_[5], an established mechanism in dataflow platforms. Watermarks are special elements periodically emitted by sources that contain a single timestamp \(t\) indicating that no elements with timestamp lower than \(t\) will be produced in the future. Under event time semantics, tasks are required to process data in order, so they buffer and reorder incoming elements before processing them. In particular, when a task T in a stage S receives a watermark greater than \(t\) from all incoming channels, it can be sure that it will not receive any other element with timestamp lower than or equal to \(t\). At that point, it can process all elements up to timestamp \(t\), and propagate the watermark \(t\) downstream.
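The following simplified sketch captures this buffering-and-release logic (an illustration of the mechanism, not Noir's implementation): the task tracks the latest watermark per input channel and releases buffered elements, in timestamp order, once they fall below the minimum watermark across all channels.
```
use std::cmp::Reverse;
use std::collections::BinaryHeap;

struct TimestampedBuffer<T> {
    channel_watermarks: Vec<u64>,           // last watermark seen per input channel
    pending: BinaryHeap<Reverse<(u64, T)>>, // min-heap ordered by timestamp
}

impl<T: Ord> TimestampedBuffer<T> {
    fn new(num_channels: usize) -> Self {
        Self { channel_watermarks: vec![0; num_channels], pending: BinaryHeap::new() }
    }

    fn push(&mut self, timestamp: u64, element: T) {
        self.pending.push(Reverse((timestamp, element)));
    }

    /// Record a watermark from one channel and return the elements that are now
    /// safe to process: given the watermark definition, no element with a
    /// timestamp below the minimum watermark across channels can arrive anymore.
    fn on_watermark(&mut self, channel: usize, watermark: u64) -> Vec<(u64, T)> {
        self.channel_watermarks[channel] = self.channel_watermarks[channel].max(watermark);
        let frontier = *self.channel_watermarks.iter().min().unwrap();
        let mut ready = Vec::new();
        while matches!(self.pending.peek(), Some(Reverse((t, _))) if *t < frontier) {
            let Reverse(item) = self.pending.pop().unwrap();
            ready.push(item);
        }
        ready
    }
}

fn main() {
    let mut buf = TimestampedBuffer::new(2);
    buf.push(5, "a");
    buf.push(2, "b");
    assert!(buf.on_watermark(0, 10).is_empty()); // channel 1 is still at watermark 0
    println!("{:?}", buf.on_watermark(1, 6));    // releases (2, "b") and (5, "a")
}
```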
#### 4.3.4 Iterations
Iterations (see Section 3.5) enable repeating a set of operations (the _body_ of the iteration) until a given condition is met. This requires distributed coordination across the tasks that implement the iteration body to decide whether to start a new iteration.
Noir implements this coordination logic using two implicit operators: an Iteration operator is put before the body and an IterationLeader operator is added after the body. The Iteration operator implements a barrier logic to synchronize all body tasks at each iteration. The IterationLeader collects updates from all hosts and computes a global state to decide whether the iteration should continue. In that case, the new state is broadcast to all Iteration operators through a _feedback link_ and made available to all tasks. Inputs for the next iteration are also sent with feedback links, and the tasks in the body wait for the barrier from the Iteration operator before processing inputs for the next iteration.
### _Zero cost abstractions_
To reach our current level of performance, we fully exploited the advantages that Rust provides over the programming languages most commonly used in related systems, such as Java.
In Java, generics are implemented through _type erasure_: only one version of each generic function is compiled and it is invoked using a combination of casts and dynamic dispatching. On the contrary, we implemented all Noir interfaces through Rust generics, which are monomorphized and compiled to optimized code for each version of the generic call. In fact, our implementation entirely avoids dynamic dispatching for task execution, communication, and serialization.
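A small self-contained example of the difference (generic Rust, not Noir code): the generic function is monomorphized into a dedicated, inlinable copy per closure type, while the trait-object variant pays a vtable indirection, similar in spirit to what type erasure imposes on the JVM.
```
fn apply_static<F: Fn(i64) -> i64>(f: F, x: i64) -> i64 {
    f(x) // static dispatch: a dedicated copy is compiled and can be inlined
}

fn apply_dynamic(f: &dyn Fn(i64) -> i64, x: i64) -> i64 {
    f(x) // dynamic dispatch through a vtable
}

fn main() {
    let double = |x: i64| x * 2;
    println!("{}", apply_static(double, 21));
    println!("{}", apply_dynamic(&double, 21));
}
```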
We also use the higher-kind polymorphism capabilities of Rust to make our interfaces generic. As a concrete example, the windowing logic is generic over both the windowing strategy and the type of the state being accumulated during window evaluation. We implemented it as a Rust Trait with generic associated types, which enable developers to easily extend the API with new strategies and new accumulators, while still avoiding dynamic dispatching.
In summary, while Noir offers programmers high-level abstractions similar to those offered by platforms such as Apache Flink, it is able to build executable code with performance similar to that produced by dedicated C/C++ solutions. The next section empirically validates these claims.
## 5 Evaluation
The goal of this evaluation is to assess the performance and ease of use of Noir, in absolute terms and in comparison with: (i) state-of-the-art data processing platforms, and (ii) lower-level programming primitives used to implement high-performance parallel and distributed computations.
To do so, we implemented various data processing tasks, which cover the key functionalities offered by Noir and other state-of-the-art data processing platforms. We measure performance in terms of throughput, latency, and (horizontal and vertical) scalability. We evaluate the complexity of each implementation both quantitatively, by counting the lines of code, and qualitatively, by observing the aspects that the different programming interfaces can abstract away.
The section is organized as follows: Section 5.1 presents the setup we use in our experiments. Section 5.2 compares the programming model of Noir with that of alternative solutions. Section 5.3 and Section 5.4 measure the performance and horizontal scalability for batch and stream processing workloads, respectively. Section 5.5 measures vertical scalability within a single machine. Finally, Section 5.6 summarizes and discusses our findings.
### _Experiment setup_
We present the experiment setup in terms of systems under test, benchmarks adopted, hardware and software configurations, and evaluation methodology.
#### 5.1.1 Systems under test
We compare Noir with the following alternative solutions for distributed data processing and high-performance parallel and distributed computations.
_Apache Flink_. Apache Flink (Flink from now on) is a state-of-the-art dataflow processing system, widely used in industrial settings. It is written in Java and offers a high-level API to define batch and stream processing computations and deploy them on a cluster of nodes. We consider Flink as representative of
Fig. 3: Communication between tasks on different processes.
high-level data processing platforms, because it is frequently adopted as a reference in the recent literature, and offers a level of performance that is comparable or better than competing commercial systems [24].
_OpenMPI_. OpenMPI (MPI from now on) is an implementation of the Message Passing Interface specification for C/C++. It is used in high-performance computing and scientific computations to distribute a computational workload among multiple machines in a cluster. We consider MPI as representative of the level of performance that can be achieved with custom C++ solutions and low-level communication primitives for data distribution.
_Timely Dataflow._ First introduced in the Naiad system [25], Timely Dataflow (Timely from now on) is a generalization of the dataflow model to better express computations that iteratively and incrementally update some mutable state. Like the dataflow model, it is designed to run parallel computations on a cluster of nodes. The implementation we consider for our comparison is written in Rust5. To implement some benchmarks, we use Differential Dataflow [26], a higher-level programming interface for incremental computations written on top of Timely. We consider Timely for two reasons. First, it shares with Noir the goal of finding a balance between the expressiveness and ease of use of the programming model and the performance of the implementing platform. As such, it represents another point in the design space of data processing systems. Second, it relies on the same Rust programming language as Noir, allowing for a comparison of programming interfaces and system design that build on a common ground. We found that fully utilizing the potential of the system requires a deep understanding of how its timestamp logic works, and that improper use of timestamps often leads to degraded performance. For this reason, we compare Noir and Timely on a subset of benchmarks for which we found an implementation from the authors, thus ensuring a proper use of the system.
Footnote 5: [https://github.com/TimelyDataflow/timely-dataflow](https://github.com/TimelyDataflow/timely-dataflow)
_OpenMP_. OpenMP6 (OMP from now on) is a specification of an API considered the de-facto standard for high-performance parallel computations. We adopt the implementation of OMP provided by the gcc compiler and we use it as a reference comparison for the performance of Noir within a single machine.
_Rayon_. Rayon7 (Rayon from now on) is a Rust library for parallel processing. It is one of the most popular tools used to perform data-parallel computations on a single host in Rust. We consider Rayon for two reasons. First, it enables comparing the programming interface of Noir with a simple and idiomatic way to express data-parallel computations in Rust. Second, it allows us to measure the overhead of Noir when used within a single host and to compare it with a widely adopted library designed specifically for this purpose.
Footnote 6: [https://www.openmp.org](https://www.openmp.org)
#### 5.1.2 Benchmarks
The set of benchmarks has been chosen to highlight different patterns of computations that are common in batch and stream processing applications.
_Word count_. Word count (\(wc\)) is the classic example used to present the dataflow model, as it well emphasizes key features of the model such as data parallelism and repartitioning of data by key. The task consists in reading words from a file and counting the occurrences of each word. The input used for this task contains 4GB of books in plain text format from the project Gutenberg repository of books [27].
_Vehicle collisions_. The vehicle collisions (coll) benchmark requires computing multiple queries of increasing complexity over a large set of data. The input used for this benchmark is a public dataset of vehicle collisions [28] in the form of a CSV file containing 4.2GB of data. The queries are the following: 1) compute the number of lethal accidents per week; 2) compute the number of accidents and percentage of lethal accidents per contributing factor; 3) compute the number of accidents and average number of lethal accidents per week per borough.
_K-Means_. K-Means (k-means) is a clustering algorithm that partitions a set of \(d\)-dimensional points into \(k\) non-overlapping clusters. It is an iterative algorithm that moves closer to a local optimum at each iteration. We use a dataset of up to 100 million 2D points (2GB of data).
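For concreteness, the following is a sequential sketch of a single k-means iteration over 2D points: every point is compared with every centroid, and centroids are then recomputed as the mean of their assigned points; the distributed implementations parallelize exactly this per-point work.
```
fn dist2(a: (f64, f64), b: (f64, f64)) -> f64 {
    (a.0 - b.0).powi(2) + (a.1 - b.1).powi(2)
}

fn kmeans_step(points: &[(f64, f64)], centroids: &[(f64, f64)]) -> Vec<(f64, f64)> {
    // Accumulate (sum_x, sum_y, count) per centroid.
    let mut acc = vec![(0.0f64, 0.0f64, 0u64); centroids.len()];
    for &p in points {
        // Index of the closest centroid (squared Euclidean distance).
        let nearest = (0..centroids.len())
            .min_by(|&i, &j| {
                dist2(centroids[i], p)
                    .partial_cmp(&dist2(centroids[j], p))
                    .unwrap()
            })
            .unwrap();
        acc[nearest].0 += p.0;
        acc[nearest].1 += p.1;
        acc[nearest].2 += 1;
    }
    // New centroid = mean of assigned points (keep the old one if the cluster is empty).
    acc.iter()
        .zip(centroids)
        .map(|(&(sx, sy, n), &old)| if n == 0 { old } else { (sx / n as f64, sy / n as f64) })
        .collect()
}

fn main() {
    let points = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (4.9, 5.1)];
    let centroids = vec![(0.0, 0.0), (5.0, 5.0)];
    println!("{:?}", kmeans_step(&points, &centroids));
}
```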
_Connected components_. Connected components (conn) takes a graph as input and computes the maximal sub-graphs whose nodes are all reachable from each other. It is an iterative graph algorithm that performs a join operation inside the loop body. We use a dataset of 20k nodes and 5M edges, where nodes are represented as integer numbers.
_Transitive closure_. Transitive closure (tr-clos) is another iterative graph algorithm, which computes the transitive closure of a relation graph. Unlike the conn example, the number of edges in the set grows at each iteration, reaching \(O(n^{2})\) edges at the end of the execution. We use a dataset of 2k nodes and 3k edges, where nodes are represented as integer numbers.
_Enumeration of triangles_. Enumeration of triangles (tri) is an example of a non-iterative graph algorithm: it computes the number of node triplets that are directly connected in an undirected graph. We use a dataset of 1.5k nodes and 900k edges, where nodes are represented as integer numbers.
_Pagerank_. Pagerank (pagerank) is a well-known graph algorithm used to estimate the importance of a node in a graph. Each node starts with the same rank and, at each iteration, the rank is redistributed along the edges to other nodes. We use a dataset of 80k nodes and 2.5M edges, where nodes are represented as integer numbers.
_Collatz conjecture_. The collatz conjecture (collatz) benchmark computes the collatz conjecture steps for all the numbers from 1 to 1 billion to find which number takes the highest number of steps before converging to 1. This is an iterative algorithm that does not require intermediate synchronization. Accordingly, it is embarrassingly parallel but the workload for each number is very different, which makes this algorithm well suited to evaluate the ability of a solution to cope with unbalanced workloads. We use it to compare solutions for parallel computations.
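For reference, the per-number work of this benchmark is sketched below: trajectory lengths vary wildly between nearby numbers, which is what makes the workload unbalanced even though the numbers are independent.
```
// Number of Collatz steps needed to reach 1 from n.
fn collatz_steps(mut n: u64) -> u32 {
    let mut steps = 0;
    while n != 1 {
        n = if n % 2 == 0 { n / 2 } else { 3 * n + 1 };
        steps += 1;
    }
    steps
}

fn main() {
    // Find the number in 1..=1000 with the longest trajectory (the benchmark
    // does the same over 1 billion numbers, in parallel).
    let max = (1u64..=1000).map(|n| (collatz_steps(n), n)).max().unwrap();
    println!("{} takes {} steps", max.1, max.0);
}
```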
_Nexmark_. The Nexmark (nexmark) benchmark suite [29] is increasingly being used for benchmarking stream processing platforms. It includes streaming queries with various complexities, ranging from simple stateless transformations of the input elements to stateful joins of multiple streams.
#### 5.1.3 Hardware and software configuration
Unless otherwise specified, we run the experiments on an AWS cluster composed of c5.2xlarge instances, equipped with 4
cores/8-threads processors and 16GB of RAM each, running Ubuntu Server 22.04, residing in the us-east-2 zone, and communicating through the internal AWS network with an average ping time of 0.1ms. Noir, Timely and Rayon programs are compiled with rustc 1.66.1 in release mode with thin link-time optimization active and target-cpu=native. We use Flink 1.16.0 executed on the OpenJDK 11.0.17, with 12GB of RAM allocated to TaskManagers. To offer a fair comparison, we disable checkpointing to durable storage in Flink, as Noir currently does not support persistence or fault-tolerance. We compile MPI and OMP programs with gcc 11.3.0 using OpenMPI 4.1.2 and OpenMP 4.5 and maximum optimization level.
#### 5.1.4 Metrics and methodology
We use finite input datasets and measure the total execution time for both batch and stream processing tasks, which measures the maximum throughput of data a system can handle. For the streaming benchmarks we also measure latency of processing, defined as the difference between the arrival time, at the source, of the input element that triggered a certain output and the time at which that output element was delivered to the sink. To measure this form of latency, we use a single source and a single sink deployed on the same physical machine, and exploit the real-time clock of this machine to compute the timing. Each experiment is executed at least 4 times, discarding the result of the first run, which is used as a warm-up, allowing the operating system to cache the input in memory. We measure horizontal scalability from 1 to 4 hosts (that is, from 8 to 32 cores). For the batch benchmarks, our measurements include the cost of job deployment, calculating the time between submission of the job and its completion, rather than only measuring the processing time. Indeed, in real-world settings, deployment time contributes to the cost of running a dataflow application. For the streaming benchmarks, which represent long running, continuous computations, we only consider the time required to process data, excluding job deployment.
### _Programming model_
In this section, we analyze the programming models of the systems under test. We recognize that assessing code complexity is difficult and highly subjective, and we approach the problem by (i) measuring the number of lines of code for each benchmark as a coarse-grained indication of complexity; (ii) reporting the key features that some programming models expose to the developers and may contribute to code complexity.
Table I reports the lines of code for each benchmark. It follows the same structure as the remainder of the paper. First, it presents batch processing workloads, which we use to compare Noir, Flink, MPI, and Timely in Section 5.3. Then, it presents the nexmark stream processing benchmark, which we use to compare Noir, Flink8, and Timely in Section 5.4. Finally, it presents benchmarks for parallel computations on a single host, which we use to compare Noir, OMP, and Rayon in Section 5.5. To ensure a fair comparison, we adopted the following approach: we excluded comments, imports, input parsing and output formatting, we formatted the Rust, Java and C/C++ code using the default formatter provided with the respective language extensions in Visual Studio Code. In addition, Fig. 4 reports the average size in bytes of the source files for each solution after being compressed with gzip: the compression algorithm limits the contribution of common language keywords, and partially masks the differences in verbosity between languages.
Footnote 8: For the nexmark benchmarks we report the lines of code of the queries using the Flink SQL API
For both batch and streaming benchmarks, solutions written in Noir have roughly the same number of lines of code as those written in Flink. Overall, the Flink versions are slightly longer, with a total of 1037 lines of code to implement all batch and streaming benchmarks, with respect to 853 lines of code for Noir. Similar differences appear in the size of the compressed source files (Fig. 4), indicating that the gap may not be completely attributed to the different verbosity of the programming languages adopted.
Solutions written in Noir present a similar structure as those in Flink, but also closely resemble the syntax of Rust standard iterators, making it easy for developers to port sequential Rust code to Noir. Some differences between Noir and Flink appear in iterative algorithms. For instance, in k-means, Flink uses broadcast variables to propagate shared state, while Noir operators can interact with the shared state through a smart pointer passed to the closure that defines the inner loop.
Conversely, MPI requires more coding effort and results in significantly more lines of code than Noir: 4.9\(\times\) more lines in wc, 2.6\(\times\) in coll, 1.77\(\times\) in k-means, 4.6\(\times\) in tri. The compressed source files (Fig. 4) are about twice as large in MPI as in Noir. Only pagerank presents almost the same number of lines of code, as in this case the programming model of MPI, based on mutable state, helps simplify the implementation. Most importantly, MPI developers need to deal with low-level concerns that are abstracted away in Noir and Flink: they need to select the data structures that encode input data and store intermediate results, they need to decide and implement serialization and deserialization strategies, and they need to decide how communication and serialization overlap with processing. As we will show in the following, exposing these concerns gives a high degree of freedom to developers, but may also be an obstacle to achieving high performance, as code optimization may become a difficult task. In fact, Noir can outperform MPI in several benchmarks, mainly due to better serialization and deserialization strategies that exploit procedural macros, which would be hard to replicate in MPI.
In terms of code safety, the low-level communication primitives of MPI must be used with care to prevent deadlocks and data races. Additionally, C++ requires developers to manually allocate and manage the memory used to store data moving through the system, a complex task vulnerable to errors such as memory leaks or invalid references. Rust (used to develop Noir and Timely) avoids most of these issues with its automatic
Fig. 4: Bytes of the Gzip compressed source files: average and 90% confidence interval.
memory model and without incurring the overhead of garbage collection as in Java (used to develop \(\mathtt{Flink}\)).
The programming model of \(\mathtt{Timely}\) represents a different trade-off between ease of use and performance. It generalizes the dataflow model by exposing the management of timestamps and watermarks to the application, thus allowing it to control how the computation and its results evolve over time. However, its generality comes at the price of additional complexity, which is reflected in a higher number of lines of code. Our experience also shows that properly handling the programming abstractions it exposes (e.g., timestamps to govern the evolution of a computation) may not be easy, and improper use may lead to poor performance. In principle, the architecture of \(\mathtt{Noir}\) could support the same abstraction, but we decided to hide the watermarking logic from the application layer to simplify the programming model. In practice, we only used \(\mathtt{Timely}\) with benchmarks for which we could find an available implementation, to be confident about their correctness and avoid misuse of the programming model, which could be detrimental for performance. In some cases, we had to adapt our implementation to make it comparable with that of \(\mathtt{Timely}\). For instance, the \(\mathtt{Timely}\) implementation of \(\mathtt{pagerank}\) adopts a higher-level library (differential dataflow) that hides some of the complexities of \(\mathtt{Timely}\), but introduces additional constraints, such as the impossibility of using floating point numbers to express ranks.
When compared with libraries for parallel computations, we observe that \(\mathtt{Noir}\) has nearly the same number of lines of code as \(\mathtt{Rayon}\). Indeed, both systems mimic the interface of standard \(\mathtt{Rust}\) iterators, leading to compact and paradigmatic \(\mathtt{Rust}\) code. The higher number of lines of code in \(\mathtt{OMP}\) is mainly due to the language being used (C++), which brings the same verbosity and safety concerns discussed for \(\mathtt{MPI}\). However, \(\mathtt{OMP}\) code is simpler than the equivalent \(\mathtt{MPI}\) code, as the use of shared memory avoids serialization and inter-process communication concerns.
### _Performance: batch workloads_
In this section, we evaluate the performance and scalability of \(\mathtt{Noir}\) and alternative solutions for batch processing workloads. For each workload, we measure the execution time while moving from one to 4 hosts. Fig. 5 presents the results we measured.
#### 5.3.1 Word count (wc)
Fig. 5(a) shows the execution time and scalability for wc. Noir completes the task in 34.97s on a single host, and in 9.56s on 4 hosts. In comparison, Flink is more than 6\(\times\) slower: 217.14s on a single host and 60.79s on 4 hosts.
We optimized the \(\mathtt{MPI}\) code in many ways. In the reduction phase, \(\mathtt{Noir}\) and \(\mathtt{Flink}\) partition the dataset by word and perform the reduction in parallel, before collecting all the results in a single process and saving the results. Given the limited size of the partial results, in the \(\mathtt{MPI}\) implementation, we skip the intermediate phase and collect all partial results directly in a single process, saving one communication step. We also made different experiments to exploit thread-level parallelism using OpenMP, but we obtained better results with one MPI process per core, with each process using a single thread.
Despite these custom-made optimizations, MPI is still about 2.5\(\times\) slower than Noir. A detailed analysis showed a bottleneck when reading data from file using functions from the standard C++ library and parsing using regular expressions (the same we do in Flink and Noir). Thus, we implemented an additional version with ad-hoc file reading (by mapping the file in memory with mmap) and a simplified parser that only considers 7-bit ASCII instead of UTF-8 encoded text. This version is presented in Fig. 5(b) and is labeled MPI-mmap. It reduces the gap with Noir, but at the cost of additional code complexity and reduced generality and reusability, and remains about 1.5\(\times\) slower with 4 hosts. We attribute this difference to the different serialization strategy: MPI uses fixed-size arrays to represent strings whereas Noir uses a more compact binary serialization format. Fig. 5(b) also shows the performance of an optimized version of wc in Noir, which exploits the same strategy as MPI by skipping the intermediate reduction phase. This version only requires 10 additional lines of code: in particular, it does not exploit a group_by_count operator, but it implements the count using an associative fold. This small change reduces the execution time by nearly 30%, and Noir can complete the task in 6.7s with 4 hosts.
Going back to the comparison in Fig. 5(a), Timely completes the task in 51.24s on one host and 22.33s on 4 hosts, which means up to 2.3\(\times\) higher execution time than Noir. Timely adopts a different execution model with respect to Noir, where each worker thread is responsible for part of the dataflow graph, it loops through the operators within that part of the graph, and it executes one operator at a time. This architectural difference leads to different performance and scalability characteristics in the experiments we performed.
In terms of horizontal scaling, \(\mathtt{Noir}\) and \(\mathtt{Flink}\) achieve near linear scalability: when moving from 1 to 4 hosts, we measure a speedup of 3.65\(\times\) for \(\mathtt{Noir}\) and 3.57\(\times\) for \(\mathtt{Flink}\). Indeed, the most expensive operations, namely reading and parsing the file, and performing a partial count, are executed in parallel without synchronization across processes. \(\mathtt{MPI}\) has a speedup of 2.59\(\times\) and the \(\mathtt{MPI}\)-\(\mathtt{mmap}\) version has a scalability of only 1.39\(\times\): again, we suspect this may be due to a less efficient serialization strategy that introduces more network traffic. However, further optimizing serialization for the specific problem at
\begin{table}
\begin{tabular}{|l c c c c|}
\hline
\multicolumn{5}{|c|}{**Batch (Section 5.3)**} \\
\hline
 & \(\mathtt{Noir}\) & \(\mathtt{Flink}\) & \(\mathtt{MPI}\) & \(\mathtt{Timely}\) \\
\hline
\(\mathtt{wc}\) & 28 & 26 & 138 & 93 \\
\(\mathtt{coll}\) & 192 & 139 & 503 & n.a. \\
\(\mathtt{k}\)-means & 125 & 158 & 22 & n.a. \\
\(\mathtt{pagerank}\) & 59 & 125 & 74 & 73 \\
\(\mathtt{conn}\) & 70 & 97 & 85 & n.a. \\
\(\mathtt{tri}\) & 44 & 159 & 204 & n.a. \\
\(\mathtt{tr}\)-clos & 39 & 82 & 162 & n.a. \\
\hline
\end{tabular}
\begin{tabular}{|l c c c|}
\hline
\multicolumn{4}{|c|}{**Single host (Section 5.5)**} \\
\hline
 & \(\mathtt{Noir}\) & \(\mathtt{OMP}\) & \(\mathtt{Rayon}\) \\
\hline
\(\mathtt{wc}\) & 29 & 84 & 39 \\
\(\mathtt{k}\)-means & 125 & 142 & 131 \\
\(\mathtt{collatz}\) & 30 & 48 & 23 \\
\hline
\end{tabular}
\end{table} TABLE I: Lines of code used to implement each benchmark.
hand would require significant effort, while Noir offers better performance without any complexity about serialization being exposed to the developer. Timely presents a similar speedup of 2.29\(\times\). This is a general characteristic we observed in our experiments: Timely exhibits a lower scalability than Noir.
#### 5.3.2 Vehicle collisions (coll)
This workload presents two differences with respect to wc: (i) it computes three distinct output results starting from the same input data; (ii) each computation involves more operators.
Noir remains the fastest system, completing the task in 24.87s on a single host and in 6.51s on 4 hosts. In comparison, Flink requires 68.15s on one host and 26.02s on 4 hosts, while MPI requires 42.41s on a single host and 10.25s on 4 hosts. With respect to wc, the increased computational complexity of the operators partially masks the overhead of Flink, reducing the gap with respect to MPI and Noir. One possible explanation for the lower performance of MPI is that the computations required to obtain the three output results are executed one after the other: indeed, implementing the strategies to run them in parallel would require a radical change of the code and would expose the additional complexity of managing parallel execution. Conversely, in Noir the three computations are expressed as part of a single dataflow job, without exposing this complexity to the developer.
#### 5.3.3 K-Means (k-means)
Flink is by far the slowest system in this task, with an execution time that is from 25\(\times\) to 40\(\times\) higher than Noir.
Fig. 5(e) shows the performance of the three systems when increasing the number of centroids from 30 to 300. Increasing the number of centroids adds computational complexity, as input points need to be compared with each centroid at each iteration. This decreases the gap between Flink and the other two systems, as the cost of communication and synchronization becomes smaller in comparison with the cost of computing. Noir remains the fastest system, with MPI being marginally (at most 10%) slower, and Flink reducing its gap but still remaining more than 10\(\times\) slower than Noir.
Fig. 5(f) shows the performance of the three systems when increasing the number of points from 10M to 100M (about 2GB of data). This workload stresses communication, as points are transferred at each iteration. Also in this case, Noir shows a level of performance that is comparable to MPI, with a slightly better scalability, while Flink is about 70 times slower, indicating the effectiveness of Noir in terms of communication.
#### 5.3.4. Pagerank (\(\mathit{pagerank}\))
The \(\mathit{pagerank}\) benchmark may be implemented in different ways. We present three different implementations, each of them mimicking a reference implementation in one of the other systems under test. This enables us to explore a larger area of design and implementation strategies, while offering a comparison with other systems that is as fair as possible.
The first approach stores the current rank of nodes in a mutable state. The list of adjacent nodes is replicated within each process as an additional immutable state. At each iteration, each process distributes part of its current rank to adjacent nodes, which will use it to update their rank in the next iteration. We implemented this approach in MPI and we present a performance comparison in Fig. 5(g). MPI uses messages between nodes to distribute the current rank. Noir mimics the same behavior by storing the current rank in a stateful operator (a rich_map) and keeping a single copy of the list of adjacent nodes per process; the rank is distributed to adjacent nodes by propagating it back in the feedback stream of an iteration. This is the most efficient implementation of the algorithm: Noir and MPI show similar performance, with Noir completing the task about 10% faster than MPI.
The second approach mimics the reference implementation found in the official repository of Flink. Flink considers both the list of adjacent nodes and the current rank of nodes as two streams that are joined together at each iteration to produce the new rank for each node (a new stream). It then compares the current and previous rank to check convergence and terminate the loop. Fig. 5(h) shows the performance of this implementation for Flink and Noir. Clearly, this implementation is not optimal for Noir, which can store the list of adjacent nodes as part of the state of each process, as exemplified in the first approach. Yet, Noir remains at least 6\(\times\) faster than Flink when using the same implementation strategy. However, the need to repeatedly join the input streams and transfer them over the network affects scalability, which in this case is even negative.
Finally, Fig. 5(i) uses the reference implementation of \(\mathit{pagerank}\) for Differential Dataflow, an abstraction built on top of Timely. We mimic the same approach in Noir and we adopt the same workload to ensure a fair comparison. Specifically, this version of \(\mathit{pagerank}\) uses integer numbers instead of floating point numbers to represent the rank of the nodes. The choice to use integers is forced by Timely, which requires that the types used in the feedback of a loop have mathematical characteristics which floats do not have. Interestingly, this implementation shows a speedup close to zero for Noir but also for Timely. Also in this case, Noir is consistently at least 20% faster than Timely.
#### 5.3.5. Connected components (\(\mathit{conn}\))
\(\mathit{conn}\) is an iterative algorithm to compute the connected components of a graph. The algorithm iteratively updates the component to which each node belongs: a component \(c\) is represented by the numerical identifier of the smallest node currently part of \(c\). If the algorithm discovers that a node \(n\) is connected to a component \(c\) with an identifier smaller than \(n\), it assigns \(n\) to \(c\) and propagates the new association to the next iteration, where nodes directly connected to \(n\) are evaluated and possibly included into \(c\). Flink natively supports this type of iterations that continuously update a mutable state. They are defined as _delta-iterations_ in the Flink documentation. In Noir, we implement a similar logic using the iterate operator: we store the association of nodes to components in the iteration state, and we propagate to the next iteration only the associations that have changed during the current iteration.
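A sequential sketch of this propagation scheme may help clarify the per-iteration work that the dataflow implementations parallelize: every node starts in its own component and repeatedly adopts the smallest identifier seen among its neighbors until no label changes.
```
fn connected_components(num_nodes: usize, edges: &[(usize, usize)]) -> Vec<usize> {
    // Each node starts in its own component, labeled by its own id.
    let mut component: Vec<usize> = (0..num_nodes).collect();
    loop {
        let mut changed = false;
        for &(a, b) in edges {
            // Both endpoints adopt the smaller of their current labels.
            let min = component[a].min(component[b]);
            if component[a] != min { component[a] = min; changed = true; }
            if component[b] != min { component[b] = min; changed = true; }
        }
        if !changed {
            return component;
        }
    }
}

fn main() {
    // Two components: {0, 1, 2} and {3, 4}.
    let labels = connected_components(5, &[(0, 1), (1, 2), (3, 4)]);
    println!("{:?}", labels); // [0, 0, 0, 3, 3]
}
```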
Fig. 5(j) shows the execution time and scalability of the systems under test. Scalability in conn is limited by the need to propagate state changes to all hosts at each iteration. This is visible for all the systems under test, but in particular for MPI, which has a maximum speedup of only 1.1\(\times\) when moving from 1 to 4 hosts. In comparison, Flink and Noir have a speedup of 1.63\(\times\) and 1.85\(\times\), respectively. In absolute terms, Noir is almost identical to MPI on a single host (1.81s vs 1.80s) but becomes faster on 4 hosts (1.38s vs 1.62s). Flink remains about an order of magnitude slower with total execution time that moves from 16.89s on a single host to 10.37s on 4 hosts.
#### 5.3.6. Enumerate triangles (\(\mathit{tri}\))
\(\mathit{tri}\) is a graph algorithm that does not require multiple iterative steps, as it needs to verify a local property: which triples of nodes are directly connected with each other, forming a triangle.
Flink implements this algorithm with a join operation to complete the triangle. Instead, MPI stores the complete adjacency list in memory, which is more efficient as it enables random access. In principle, Noir could also exploit shared state within processes to optimize the job. However, we decided to implement the same approach used by Flink, which is easier to express in a dataflow engine.
Fig. 5(k) shows the execution time and scalability of the systems under test when computing tri. MPI is the fastest system, more than 3\(\times\) faster than Noir on a single host. Thanks to a better scalability (3\(\times\) vs 2.2\(\times\) speedup), Noir reduces the gap on 4 hosts, where Noir and MPI complete the task in 6.19s and 2.61s, respectively. Flink remains significantly slower, with an execution time of 66.8s on 4 hosts.
#### 5.3.7. Transitive closure (\(\mathit{tr}\)-\(\mathit{clos}\))
\(\mathit{tr}\)-\(\mathit{clos}\) is an iterative graph algorithm that presents different characteristics with respect to conn. In particular, it iteratively enriches a partial result that is proportional to the number of edges (quadratic with respect to the number of nodes).
Fig. 5(l) shows the execution time and scalability of the systems under test when performing this task. Noir shows execution times from 5.98s on 1 host to 2.63s on 4 hosts (2.26\(\times\) speedup), while Flink is about 8 times slower, with a maximum speedup of 2.4\(\times\) when moving from 1 to 4 hosts. Consistent with what we reported for tri, the MPI implementation yields better results by saving the adjacency list as mutable state, leading to an execution time of 1.64s on 4 hosts (2.61\(\times\) speedup).
### _Performance: streaming workloads_
In this section, we evaluate the performance of Noir and alternative solutions for stream processing workloads using nexmark. We consider all original nexmark queries (Q1-Q8) and the passthrough query Q0, which measures the monitoring overhead, that is, the time for analyzing the entire input data without performing any concrete data transformation.
We compare Noir with other stream processing platforms: Flink and Timely. We exclude from this analysis MPI as it is not designed for streaming workloads. We adopt the reference Flink implementation available in the nexmark repository9, and the Timely implementation made available by the authors as part of the Megaphone project10. For Noir, we generate input data using a parallel iterator source, making sure that it is consistent with the specification of the benchmark.
Footnote 9: [https://github.com/nexmark/nexmark](https://github.com/nexmark/nexmark)
Footnote 10: [https://github.com/strymon-system/megaphone](https://github.com/strymon-system/megaphone)
Fig. 6 shows the time required to process the entire workload for all queries using 4 hosts (32 cores). Noir consistently outperforms Flink, completing queries from 34\(\times\) to 57.8\(\times\) faster, depending on the query. Flink cannot even terminate query Q6, which Noir computes in about 17.7s.
The execution times of Noir and Timely are comparable, and Noir is faster in 6 out of 9 queries. In queries Q4 and Q6 Timely is about twice as fast as Noir. We attribute these results to the different ways in which the two systems implement the windowing logic required in the queries: the Timely implementation adopts a custom windowing implementation that simplifies the way in which watermarks are handled. While we could replicate the same approach in Noir by writing ad-hoc windowing operators for the specific queries, we decided to use the standard window operator offered by the framework, which simplifies the code.
Fig. 7 shows the latency of Noir for three representative queries with heterogeneous characteristics (Q2, Q3, and Q5), and compares it with Timely, the system that presents a similar execution time. Fig. 7 plots the mean latency over time in 1s windows, with the 99th percentile band (shaded area around each line). Recall that, to measure latency, we use a single source and a single sink placed on the same machine, consequently, the results in Fig. 7 are not directly comparable with those in Fig. 6.
For Timely, due to the cooperative scheduling implementation, developers need to explicitly use timestamps to indicate how frequently to alternate between sending (batches of) events and processing them, which clearly affects latency. We experimented with different fixed and dynamic intervals as detailed below; Fig. 7 reports the results when processing 100ms of events at each round.
Query Q2 (Fig. 7(a)) requires a single stage of stateless computations. In this setting, Noir never splits the input data into partitions after reading from the sources, and executes the entire computation sequentially, which leads to a latency of about 80ms. Instead, the latency in Timely is dominated by the evaluation interval, which is set to 100ms: reducing this interval to 1ms indeed improves latency, but it still remains around 10ms, while the overall execution time further increases to more than 140s. In comparison, the execution time of Noir is below 30s.
Query Q3 (Fig. 7(b)) includes multiple stages of computation and a join.
In Noir, this introduces inter-stage communication, which increases the latency to about 5.5ms. The latency of Timely is about 900ms. We considered an evaluation interval of 100ms as in Query Q2, which leads to a throughput that is comparable to that of Noir.
In query Q5 (Fig. 7(c)), the latency is dominated by the presence of windows. The average latency is below 3s for Noir and over 12.5s for Timely. Also, the latency is more stable in Noir, with a standard deviation of 1.1s compared to 14.3s of Timely.
### _Scalability on a single host_
Noir is designed to enable large-scale data analysis on a cluster of machines. However, it can also be used as an efficient library for parallel computations within a single process. This section focuses on this capability and compares the performance of Noir with alternative state-of-the-art solutions for parallel processing: OMP and Rayon. With OMP, developers annotate blocks of C/C++ code that can be executed in parallel; the compiler then generates the parallel implementation using a fork-join model of execution. With Rayon, developers use an iterator-like API that is translated to a set of tasks executed by a fixed-size thread pool. All three systems are configured to use a level of parallelism equal to the number of processors available.
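For reference, the sketch below shows what Rayon's iterator-like API looks like for a simple data-parallel aggregation; switching from the sequential iterator to par_iter is essentially the only change.
```
// Requires the `rayon` crate.
use rayon::prelude::*;

fn main() {
    let numbers: Vec<u64> = (1..=1_000_000).collect();

    // Sequential version.
    let seq: u64 = numbers.iter().map(|n| n % 7).sum();
    // Parallel version: same shape, work split across the Rayon thread pool.
    let par: u64 = numbers.par_iter().map(|n| n % 7).sum();

    assert_eq!(seq, par);
    println!("sum of residues: {par}");
}
```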
For the experiments presented in this section, we use c6a EC2 instances, which offer 3rd generation AMD EPYC processors. For each experiment, we use a single instance and we
Fig. 6: nexmark queries: execution time with 32 cores.
Fig. 7: nexmark: latency for queries Q2, Q3, Q5.
configure the systems under test for optimal exploitation of the compute resources available. We start from a c6a.2xlarge instance (8 CPUs, 16GB RAM) and run the experiments scaling up to a c6a.16xlarge instance (64 CPUs, 128GB RAM).
We use the collatz, wc and k-means benchmarks to cover different scenarios. collatz is an iterative algorithm that processes numbers in parallel. It does not require synchronization across numbers, but presents highly heterogeneous execution times for different numbers. wc includes data transformations (to split lines into words) and an associative aggregation (to count the number of occurrences). k-means is an iterative algorithm that requires synchronization at each iteration.
Fig. 8(a) shows the performance we measure for collatz. The execution time is comparable for all the systems under test: Rayon is about 30% faster than OMP and about 15% faster than Noir, and the relative gap remains almost constant when changing the number of cores. Due to the unbalanced nature of the task, these differences may be attributed at least in part to different partitioning strategies: static in Noir and dynamic in OMP and Rayon.
Fig. 8(b) shows the performance we measure for wc. Noir shows consistently better performance than OMP. With this workload, Rayon exhibits a strange behavior, with execution time increasing when using more cores. This kind of problem when using many threads has been documented and may be caused by the scheduler11.
Footnote 11: [https://github.com/rayon-rs/rayon/issues/795#issuecomment-1155652842](https://github.com/rayon-rs/rayon/issues/795#issuecomment-1155652842)
Fig. 8(c) shows the performance we measure for k-means. When considering 300 centroids (Fig. 8(c), left), all systems under test show nearly identical performance and scalability. When considering 30 centroids and 10M points (Fig. 8(c), center), both Rayon and Noir show better performance than OMP. Noir also scales better, and becomes almost twice as fast as Rayon with 64 cores. The same pattern emerges when increasing the number of points to 100M (Fig. 8(c), right): the execution times increase, but the relative performance between the systems remains similar. The performance advantage may be explained by the native support for iterations and partitioning: while OMP and Rayon need to join the main thread after each iteration, Noir keeps the points partitioned, processes partitions independently from each other, and only synchronizes the tasks to collect the new centroids.
### _Discussion_
We evaluated Noir with very heterogeneous workloads, ranging from batch to streaming and even parallel computations, including iterative and graph processing algorithms. We compared it with state-of-the-art solutions in each field.
First of all, our analysis reveals the benefits of the programming interface of Noir. Indeed, writing programs with Noir (and Flink, which adopts a similar programming abstraction) was considerably simpler than using MPI or Timely. Frequently, the added complexity may also become detrimental for performance: despite experimenting with alternative solutions, in several benchmarks MPI shows higher execution times than Noir. Writing efficient MPI solutions requires significant effort in designing parts of the code that are outside the scope of the task at hand, such as communication and buffering, and most solutions may be sub-optimal with respect to what Noir achieves automatically with efficient default strategies. Moreover, when we found possible optimizations in MPI, we could frequently replicate them in Noir.
While Timely also offers a high-level programming interface, it is more complex than the one in Noir. Even when starting from implementations provided by Timely developers, it was difficult for us to ensure that the final algorithm was equivalent to that of the other platforms: for instance, in pagerank, we could not modify the provided implementation to use floating point numbers as rank; similarly, in nexmark, the results were highly dependent on the strategies used to define timestamps.
In terms of absolute performance and scalability, Noir was always comparable or faster than alternative solutions in all the scenarios we tested, showing that it is suitable for a wide range of problems and diverse hardware configurations.
In summary, our evaluation shows that Noir provides a good balance between ease of use and performance, allowing developers to easily write code that solves the problem at hand in a way that outperforms more complex solutions developed with lower-level programming primitives.
## 6 Related Work
Our work focuses on programming models and platforms for distributed data processing. In this context, the dataflow model we consider in this paper has attracted increasing attention over the last several years, and many platforms have been implemented by researchers and practitioners [30, 31, 4, 32]. The Flink system we use for our evaluation is a mature commercial product, representative of these platforms, and often cited for its good level of performance [24].
To simplify the implementation of complex algorithms, most platforms also offer higher-level libraries for specific domains. Prominent examples are the libraries to process structured data [9], which convert declarative queries from SQL-like languages to dataflow programs, often providing unified abstractions for batch and stream processing of structured data [33, 34]. The conversion enables automated query optimizations, which are common in database systems. Other examples of libraries range from machine learning [35] to graph processing [10,
Fig. 8: Scalability on a single host.
to pattern recognition in streams of events [36]. As all these libraries generate dataflow programs, we could implement similar abstractions on top of Noir. Given the performance advantages of Noir especially in iterative computations, we believe that it has the potential to bring significant improvements in domains like machine learning and graph processing.
Some research works propose alternative programming models or extensions to the dataflow model. Naiad [25] and its timely dataflow model enrich dataflow computations with explicit timestamps that enable implementing efficient coordination mechanisms. We compared Noir with the Rust implementation of the timely dataflow model in Section 5.
Fernandez et al. [37] introduce an imperative programming model with explicit mutable state and annotations for data partitioning and replication. TSpoon [38] extends the dataflow abstraction with additional features and guarantees, such as transactional semantics. These efforts are orthogonal to our work, which mostly targets efficient system design and implementation, rather than investigating new models.
Some systems optimize the use of resources on a single machine. For instance, StreamBox targets multi-core machines [39], while SABER considers heterogeneous hardware platforms consisting of multi-core CPUs and GPUs, which are increasingly available in modern heterogeneous servers [40]. By building on a compiled language, Noir simplifies the access to hardware resources with respect to JVM-based systems. In fact, we already experimented with OpenCL-based implementations of operators that exploit GPUs, and we plan to further explore this line of research in future work.
As a final note, all modern data processing systems provide fault-tolerance mechanisms to recover from software and hardware failures. As the current version of Noir does not offer fault-tolerance mechanisms, we disabled them in all the systems used in our evaluation (see Section 5) for a fair comparison. We plan to implement fault-tolerance in future releases by building on consolidated approaches such as asynchronous snapshots [41], which bring negligible runtime overhead, since they do not block normal processing.
## 7 Conclusions
This paper introduced Noir, a novel data processing framework written in Rust. Noir provides all core features of state-of-the-art data processing platforms - unified batch and stream processing, iterative computations, windowing, time-based data analytics - within the same, high-level processing model. At the same time, its design and implementation choices - a compiled language, efficient memory and communication management, task allocation that maximizes the use of processing resources - yield throughput improvements that reach and even exceed an order of magnitude with respect to existing data processing systems, rivaling and even outperforming custom MPI solutions in some workloads.
Our research shows that the advantages of a high-level programming model are not restricted to the simplicity of defining data analysis tasks. If the model is supported by an efficient execution platform, it can unlock performance improvements that are hard to achieve with custom, manual optimizations.
Building on this observation, we plan to further contribute to the research on data processing platforms, focusing both on enriching the programming model, for instance to support domain-specific operators, and on improving the capabilities of the processing engines, for example by supporting hardware accelerators and dynamic scaling.
|
2308.12937 | Panoptic-Depth Color Map for Combination of Depth and Image Segmentation | Image segmentation and depth estimation are crucial tasks in computer vision,
especially in autonomous driving scenarios. Although these tasks are typically
addressed separately, we propose an innovative approach to combine them in our
novel deep learning network, Panoptic-DepthLab. By incorporating an additional
depth estimation branch into the segmentation network, it can predict the depth
of each instance segment. Evaluating on Cityscape dataset, we demonstrate the
effectiveness of our method in achieving high-quality segmentation results with
depth and visualize it with a color map. Our proposed method demonstrates a new
possibility of combining different tasks and networks to generate a more
comprehensive image recognition result to facilitate the safety of autonomous
driving vehicles. | Jia-Quan Yu, Soo-Chang Pei | 2023-08-24T17:25:09Z | http://arxiv.org/abs/2308.12937v1 | # Panoptic-Depth Color Map for Combination of Depth and Image Segmentation
###### Abstract
Image segmentation and depth estimation are crucial tasks in computer vision, especially in autonomous driving scenarios. Although these tasks are typically addressed separately, we propose an innovative approach to combine them in our novel deep learning network, Panoptic-DepthLab. By incorporating an additional depth estimation branch into the segmentation network, it can predict the depth of each instance segment. Evaluating on Cityscape dataset, we demonstrate the effectiveness of our method in achieving high-quality segmentation results with depth and visualize it with a color map. Our proposed method demonstrates a new possibility of combining different tasks and networks to generate a more comprehensive image recognition result to facilitate the safety of autonomous driving vehicles.
\({}^{1}\)Jia-Quan Yu, \({}^{2}\)Soo-Chang Pei \({}^{1}\)Graduate Institute of Communication Engineering,
National Taiwan University, Taiwan
E-mail: [email protected]
\({}^{2}\)Department of Electrical Engineering,
National Taiwan University, Taiwan
E-mail: [email protected]

Keywords: Autonomous Driving, Depth Estimation, Panoptic Segmentation with Depth
## 1 Introduction
Image segmentation is a crucial task in computer vision that involves partitioning camera images into different segments or instances based on the semantic meaning of each pixel. It has a wide range of applications in fields such as medical image processing and autonomous vehicles. There are three types of segmentation tasks: semantic, instance, and panoptic segmentation. Semantic segmentation amounts to pixel classification, where each pixel is assigned to a semantic category, while instance segmentation focuses on segmenting the foreground objects and differentiating the individual instances within the scene. The third task, panoptic segmentation, is a relatively new task that aims to unify semantic and instance segmentation by performing semantic segmentation on background pixels and instance segmentation on foreground pixels.
Another important task in autonomous driving scenarios is depth estimation. It involves predicting the depth value of each image pixel, where the depth represents the distance from the camera center to the nearest obstacle. Depth estimation is particularly essential in autonomous driving as it provides information for the navigation system to avoid collisions. However, pixel-wise depth information alone is insufficient to distinguish instances in the scene. Therefore, we propose to combine image segmentation and depth estimation to predict depth values at the instance level and to generate a color map that visualizes the image recognition result. We depict the taxonomy of the image tasks introduced in this section in Fig. 1.
To achieve our proposed task, we introduce Panoptic-DepthLab, an extended variant of the Panoptic-DeepLab[1] segmentation network. By incorporating an additional depth estimation branch into the panoptic segmentation network, it is able to generate segmentation and depth estimation results at the same time. Subsequently, we fuse the panoptic segmentation and depth estimation results to generate a color map that represents the depth of each instance. This color map can provide valuable insights for ensuring safety in autonomous driving scenarios. We illustrate our proposed Panoptic-DepthLab architecture in Fig. 2.

Figure 1: Image task taxonomy. Semantic segmentation classifies each image pixel into semantic categories. Instance segmentation identifies different instances within the foreground pixels. Panoptic segmentation combines both instance and semantic segmentation tasks. Depth estimation evaluates the depth value of each pixel. Lastly, our proposed segmentation with depth task aims to integrate depth estimation and panoptic segmentation, producing a color map that represents the instance-level depth value.
The rest of the paper is organized as follows. Section 2 reviews related work in panoptic segmentation and depth estimation. In Section 3, we describe our proposed method, Panoptic-DepthLab, in detail. Section 4 reports our experimental results on the Cityscape dataset. Finally, Section 5 concludes the paper.
## 2 Related Works
### Panoptic Segmentation
Panoptic segmentation[2], also known as scene parsing, combines the tasks of semantic and instance segmentation. It involves classifying background pixels into semantic categories and grouping foreground pixels into individual instances. Many related works extend well-established instance segmentation networks by incorporating an additional semantic segmentation branch to achieve panoptic segmentation. For instance, Panoptic-FPN [3], which is based on the Mask-RCNN[4] network, adds a segmentation branch to handle background pixel categories. Additionally, it utilizes a Feature Pyramid Network (FPN) to extract detailed features for the segmentation branches.
DeeperLab[5], as another example, adopts the encoder-decoder architecture and Atrous Spatial Pyramid Pooling (ASPP) of DeepLab[6] and adds a prediction head for semantic segments together with four keypoint-based detection heads to find instances. The UPSNet[7] architecture is similar to Panoptic-FPN: it adds an FPN module to the backbone and designs separate semantic and instance heads for prediction. UPSNet achieves 61.8% PQ on the Cityscape validation dataset.
Panoptic-DeepLab[1], the network on which our work is based, extends the DeepLabv3+ network architecture to enable panoptic segmentation. It incorporates dual-ASPP and dual-decoder modules to produce three outputs: semantic segmentation, object center prediction, and center offset prediction. For the instance segmentation branch, Panoptic-DeepLab combines the center prediction heatmap with pixel offset regression to generate class-agnostic instances. It then employs a majority-vote algorithm, fusing the instance predictions with the semantic segmentation result to determine the instance categories. Panoptic-DeepLab adopts a bottom-up approach to address the instance segmentation aspect of the task, achieving a PQ (Panoptic Quality) score of 64.1% on the Cityscape validation set.
In conclusion, these works achieve panoptic segmentation by extending instance segmentation networks to generate semantic segmentation results for background pixels. Among these works, Panoptic-DeepLab has demonstrated outstanding performance. Therefore, we have chosen it as the base network for our own approach, leveraging its superior performance.

Figure 2: Panoptic-DepthLab architecture. Based on the Panoptic-DeepLab[1] framework, we extend it by adding a depth estimation decoder branch to predict the depth map. The second decoder branch is responsible for predicting the semantic segmentation result, while the third branch predicts the instance segmentation outcome. To incorporate depth and segmentation results, we average the region of each corresponding segment in the depth map to acquire the instance-level depth value.
### Monocular Depth Estimation
Monocular depth estimation is another fundamental task in computer vision that aims to predict the distance of each image pixel from the camera center to the nearest obstacle using a single camera image as input. Traditional approaches adopt an encoder-decoder architecture similar to semantic segmentation network designs. Another line of work intends to model the ordinal relationship of the depth values. The Deep Ordinal Regression Network (DORN) [8] offers an effective approach by formulating the depth regression problem as an ordinal regression problem. DORN introduces multiple binary classification subtasks to infer the depth, where each subtask determines whether the pixel depth is greater than a specific threshold. The final depth value is obtained by summing the number of subtasks that predict the pixel to be deeper than the corresponding threshold. Additionally, DORN incorporates the ASPP module to capture global contextual information and enhance the decoder's performance. Given the effectiveness of DORN, we adopt this design and integrate it into our depth estimation branch.
## 3 Proposed Method
### Panoptic-DepthLab
In this section, we introduce Panoptic-DepthLab, our proposed network that produces segmentation with instance-level depth. By adding a depth estimation decoder branch to the Panoptic-DeepLab segmentation network, we obtain a unified one-stage network that predicts the segmentation and depth map in parallel branches. Additionally, we let the segmentation and depth branches share the same extracted feature map as input. This design aims to increase efficiency and avoids extracting duplicate feature maps. During training, we optimize the whole model, allowing the network to jointly learn multiple tasks. To combine the predicted depth map with the panoptic segmentation result, we average the pixel depths within each instance region. The shared encoder features facilitate efficient end-to-end training. Specifically, our newly added depth estimation branch follows a similar design to the semantic segmentation branch. The detailed architecture is illustrated in Fig. 3.
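As an illustration of the fusion step just described, the following Python sketch (our own illustrative code, not the authors' implementation; array shapes and the background id of 0 are assumptions) averages a predicted depth map over each predicted instance mask to obtain instance-level depth values.

```python
# Minimal sketch of instance-level depth fusion: average the predicted depth
# map over each predicted instance region. Assumes H x W arrays and that the
# instance id 0 marks background pixels.
import numpy as np

def instance_level_depth(depth: np.ndarray, instance_ids: np.ndarray) -> dict:
    """Return {instance_id: mean depth over that instance's pixels}."""
    inst_depth = {}
    for inst_id in np.unique(instance_ids):
        if inst_id == 0:  # assumed background label
            continue
        mask = instance_ids == inst_id
        inst_depth[int(inst_id)] = float(depth[mask].mean())
    return inst_depth

# Toy usage: two instances, the second one roughly 51 m away.
depth = np.array([[10.0, 10.0, 50.0],
                  [12.0, 11.0, 52.0]])
inst = np.array([[1, 1, 2],
                 [1, 1, 2]])
print(instance_level_depth(depth, inst))  # {1: 10.75, 2: 51.0}
```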
## 4 Experiments
In this section, we present the detailed experimental settings and results of our Panoptic-DepthLab network, which is built upon the Detectron2[9] platform. We conduct our experiments on the Cityscape[10] dataset, which includes 2975 images for training and 500 images for validation; most of the images are captured on city streets in Germany. The dataset consists of eleven background categories and eight foreground categories for the panoptic segmentation challenge.
During training, we initialize our network with the pre-trained weights provided by Panoptic-DeepLab, which were trained on the Cityscape dataset for 90K iterations. Subsequently, we fine-tune the entire network for an additional 10K iterations after incorporating our additional depth estimation branch. The training process is conducted on 2 TITAN RTX GPUs with a batch size of 14. We employ the Adam optimizer with a learning rate of 0.001. Additionally, since Cityscape does not provide ground-truth depth annotations, we generate depth maps by converting the provided disparity maps to depth maps.
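For concreteness, a possible disparity-to-depth conversion is sketched below. This is not the paper's exact procedure: the decoding rule \(d=(p-1)/256\) for stored values \(p>0\) and the calibration constants (baseline of roughly 0.2093 m, focal length of roughly 2262.5 px) follow the publicly documented Cityscape stereo setup and should be treated as assumptions here.

```python
# Sketch: convert a Cityscape-style 16-bit disparity image into a metric
# depth map via depth = baseline * focal / disparity. Invalid pixels stay 0.
import numpy as np

def disparity_png_to_depth(disp_png: np.ndarray,
                           baseline_m: float = 0.2093,   # assumed calibration
                           focal_px: float = 2262.5) -> np.ndarray:
    disp_png = disp_png.astype(np.float32)
    disparity = np.zeros_like(disp_png)
    valid = disp_png > 0
    disparity[valid] = (disp_png[valid] - 1.0) / 256.0
    depth = np.zeros_like(disp_png)
    nonzero = valid & (disparity > 0)
    depth[nonzero] = baseline_m * focal_px / disparity[nonzero]
    return depth
```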
### Evaluation Metric
To assess the quality of panoptic segmentation results, Kirillov et al. introduced a novel metric called Panoptic Quality (PQ) [2]. PQ evaluates the accuracy of both semantic classification and mask prediction. The PQ for a class is defined as follows:
\[PQ=\overbrace{\frac{\sum_{(p,g)\in TP}IoU(p,g)}{|TP|}}^{\text{Segmentation Quality (SQ)}}\times\overbrace{\frac{|TP|}{|TP|+\frac{1}{2}|FP|+\frac{1}{2}|FN|}}^{\text{Recognition Quality (RQ)}} \tag{1}\]
where \(|TP|\), \(|FP|\), and \(|FN|\) represent the number of true positives, false positives, and false negatives, respectively. The first term of Equation 1 is the segmentation quality (SQ), which is the average IoU (Intersection over Union) of the true positive masks. The second term is the recognition quality (RQ), which is the F1 score of the classification result.
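A minimal sketch of Eq. (1) is given below. It is not the official panopticapi implementation: masks are matched greedily at an IoU threshold of 0.5 (above which matches are unique in any case), and the helper names are ours.

```python
# Sketch: compute PQ for one class from lists of boolean prediction and
# ground-truth masks, following Eq. (1).
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union > 0 else 0.0

def panoptic_quality(pred_masks, gt_masks, iou_thr: float = 0.5) -> float:
    matched_gt, tp_ious = set(), []
    for p in pred_masks:
        best_j, best_iou = None, iou_thr
        for j, g in enumerate(gt_masks):
            if j in matched_gt:
                continue
            v = iou(p, g)
            if v > best_iou:
                best_j, best_iou = j, v
        if best_j is not None:          # true positive match
            matched_gt.add(best_j)
            tp_ious.append(best_iou)
    tp = len(tp_ious)
    fp = len(pred_masks) - tp           # unmatched predictions
    fn = len(gt_masks) - tp             # unmatched ground-truth segments
    denom = tp + 0.5 * fp + 0.5 * fn
    return sum(tp_ious) / denom if denom > 0 else 0.0
```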
To evaluate the depth estimation result, we adopt the following metrics:
Relative squared error:
\[\text{sqErr}=\frac{1}{N}\sum_{i=1}^{N}(\frac{d_{i}^{*}-d_{i}}{d_{i}})^{2} \tag{2}\]
where \(d_{i}\) and \(d_{i}^{*}\) represent the ground-truth and predicted depth values, respectively, and \(N\) represents the number of pixels in the image.
Relative absolute error:
\[\text{absErr}=\frac{1}{N}\sum_{i=1}^{N}|\frac{d_{i}^{*}-d_{i}}{d_{i}}| \tag{3}\]
Inverse root mean square error:
\[\text{IRMSE}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\frac{1}{d_{i}^{*}}-\frac{ 1}{d_{i}}\right)^{2}} \tag{4}\]
Scale invariant logarithmic error:
\[\text{SILog}=\frac{1}{N}\sum_{i=1}^{N}x_{i}^{2}-\frac{1}{N^{2}}(\sum_{i=1}^{N}x_ {i})^{2} \tag{5}\]
where \(x_{i}=\log d_{i}-\log d_{i}^{*}\).
Accuracy with threshold:
\[\delta_{i}=\max\left(\frac{d_{i}}{d_{i}^{*}},\frac{d_{i}^{*}}{d_{i}}\right)<t \tag{6}\]
where \(t\in\{1.25,1.25^{2},1.25^{3}\}\). It represents the percentage of image pixels whose depth ratio with respect to the ground truth is smaller than the given threshold \(t\).
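The following sketch (illustrative only, not the evaluation code used in the paper) computes the metrics of Eqs. (2)-(6) on pixels with valid ground truth; it assumes predicted depths are strictly positive on those pixels.

```python
# Sketch: depth-error metrics of Eqs. (2)-(6). d = ground truth, d_hat = prediction.
import numpy as np

def depth_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    valid = gt > 0                      # pixels with a valid ground-truth depth
    d, d_hat = gt[valid], pred[valid]
    rel = (d_hat - d) / d
    sq_err = float(np.mean(rel ** 2))                              # Eq. (2)
    abs_err = float(np.mean(np.abs(rel)))                          # Eq. (3)
    irmse = float(np.sqrt(np.mean((1.0 / d_hat - 1.0 / d) ** 2)))  # Eq. (4)
    x = np.log(d) - np.log(d_hat)
    silog = float(np.mean(x ** 2) - np.mean(x) ** 2)               # Eq. (5)
    ratio = np.maximum(d / d_hat, d_hat / d)
    deltas = {f"delta_{k}": float(np.mean(ratio < 1.25 ** k))      # Eq. (6)
              for k in (1, 2, 3)}
    return {"sqErr": sq_err, "absErr": abs_err,
            "IRMSE": irmse, "SILog": silog, **deltas}
```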
### Quantitative Result
We present the Panoptic Quality (PQ) of the Panoptic-DepthLab model tested on the Cityscape validation dataset in Table 1. Our results show that Panoptic-DepthLab achieves slightly better performance than other panoptic segmentation networks. This improvement can be attributed to the additional depth map data available during fine-tuning, which implicitly provides more useful information to the segmentation branch.
To determine the better loss function for our depth estimation branch, we compared two different loss functions. The first one is the loss function proposed by DORN[8], which discretizes depth values into intervals based on uncertainty and formulates the regression problem as multiple binary classification subtasks. The second loss function is smoothed L1 loss.
The results of our experiments are presented in Table 2 and Table 3. Surprisingly, we found that the smooth L1 loss outperformed DORN's loss after training for 10K iterations. This may be attributed to the DORN method involving a more complex network structure and a larger number of additional parameters. In contrast, the smoothed L1 loss proved to be a more effective and straightforward choice for our specific task.
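For reference, the smooth L1 loss we compare against can be written as below; this is a generic sketch with an assumed transition point \(\beta=1\), not the exact training code.

```python
# Sketch: smooth L1 (Huber-style) depth loss, masked to valid ground truth.
import numpy as np

def smooth_l1_depth_loss(pred: np.ndarray, gt: np.ndarray, beta: float = 1.0) -> float:
    valid = gt > 0
    diff = np.abs(pred[valid] - gt[valid])
    loss = np.where(diff < beta, 0.5 * diff ** 2 / beta, diff - 0.5 * beta)
    return float(loss.mean())
```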
### Qualitative Result
To assess the performance of Panoptic-DepthLab, we visualize inference examples on the Cityscape validation set in Fig. 4 and Fig. 5. In these examples, each predicted instance is visually distinguished by a color assigned based on its depth value. Objects in close proximity appear in vibrant red, objects at medium distance are colored in green, and farther objects are colored in cooler shades of blue. Additionally, the background pixels, such as road, sky, and vegetation, are effectively segmented and labeled in the resulting output. Each instance is also annotated with its respective category and depth information, providing a detailed understanding of the scene.
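A simple way to render such a color map is sketched below (our own illustration, not the paper's visualization code): each instance is filled with a color taken from a red-to-blue colormap according to its average depth, e.g. the output of the earlier instance-level averaging sketch. The normalization range of 80 m and the choice of the "jet" colormap are arbitrary assumptions.

```python
# Sketch: paint each instance with a color determined by its mean depth
# (near = red, medium = green, far = blue).
import numpy as np
import matplotlib.pyplot as plt

def panoptic_depth_colormap(instance_ids: np.ndarray, inst_depth: dict,
                            max_depth: float = 80.0) -> np.ndarray:
    cmap = plt.get_cmap("jet")                 # 0 -> blue, 1 -> red
    out = np.zeros((*instance_ids.shape, 3), dtype=np.float32)
    for inst_id, depth in inst_depth.items():
        t = float(np.clip(depth / max_depth, 0.0, 1.0))
        out[instance_ids == inst_id] = cmap(1.0 - t)[:3]   # nearer -> redder
    return out
```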
## 5 Conclusions
In this paper, we have presented a novel approach to combine depth estimation and image segmentation into a unified framework, resulting in an informative outcome for autonomous driving scenarios. Our proposed network, Panoptic-DepthLab, builds upon a panoptic segmentation network with an encoder-decoder architecture and an ASPP module for capturing global context. We have further extended the network by incorporating an additional depth prediction branch. The fusion of panoptic segmentation and depth estimation results is achieved by averaging the pixel depths within each instance region, enabling comprehensive scene understanding.
\begin{table}
\begin{tabular}{c|c c c} Methods & PQ & SQ & RQ \\ \hline Panoptic-FPN[3] & 55.4 & 77.9 & 69.3 \\ UPSNet[7] & 60.1 & 80.3 & **73.5** \\ Panoptic-DepthLab & **60.3** & **81.5** & 72.9 \\ \end{tabular}
\end{table}
Table 1: Experimental results of Panoptic-DepthLab on the Cityscape validation set. The metric used is Panoptic Quality (PQ). Segmentation Quality (SQ) represents the quality of the predicted masks, while Recognition Quality (RQ) represents the classification accuracy. The PQ is calculated as \(PQ=SQ\times RQ\). Panoptic-DepthLab performs slightly better than the other networks on the PQ metric.
\begin{table}
\begin{tabular}{c|c c c c} Loss & sqErr & absErr & IRMSE & SILog \\ \hline DORN[8] & 0.72 & 0.69 & 44.64 & 22.31 \\ L1 & **0.63** & **0.60** & **34.57** & **18.58** \\ \end{tabular}
\end{table}
Table 2: Evaluation of the depth estimation results produced by Panoptic-DepthLab. Both loss function settings were trained for 10K iterations and tested on the Cityscape validation dataset. Evaluation metrics include the relative squared error (sqErr), relative absolute error (absErr), inverse root mean square error (IRMSE), and scale invariant logarithmic error (SILog), which assess the difference between the predicted depth map and the ground-truth depth map. Smaller values indicate better performance.
\begin{table}
\begin{tabular}{c|c c c} Loss & \(\delta_{1}<1.25\) & \(\delta_{2}<1.25^{2}\) & \(\delta_{3}<1.25^{3}\) \\ \hline DORN & 0.27 & 0.49 & 0.68 \\ L1 & **0.30** & **0.61** & **0.82** \\ \end{tabular}
\end{table}
Table 3: Evaluation of depth estimation results produced by Panoptic-DepthLab. Both loss function settings were trained for 10K iterations and tested on the Cityscape validation dataset. The evaluation metric is accuracy with threshold, representing the percentage of depth map pixels whose ratio with the ground truth pixels is smaller than the assigned threshold value. Threshold values of 1.25, \(1.25^{2}\), and \(1.25^{3}\) are used. Higher values indicate better performance.
Figure 5: Inference example of Panoptic-DepthLab on the Cityscape validation set. Each instance is labeled with a category and a depth value, which determines the color of each foreground segment. The objects closest to the camera are colored in red, while the farthest objects are colored in blue. The background pixels are also segmented according to their semantic meaning. This example shows that our network is able to produce high-quality segmentation with depth.
The effectiveness of our method has been evaluated on the Cityscape dataset, demonstrating high-quality segmentation and depth estimation results.
|
2306.14632 | On the Mobility Analysis of UE-Side Beamforming for Multi-Panel User
Equipment in 5G-Advanced | Frequency range 2 (FR2) has become an integral part of 5G networks to fulfill
the ever-increasing demand for data hungry-applications. However, radio signals
in FR2 experience high path and diffraction loss, which also pronounces the
problem of inter and intra-cell interference. As a result, both the serving and
target links are affected, leading to radio link failures (RLFs) and handover
failures (HOFs), respectively. To address this issue, multi-panel user
equipment (MPUE) is proposed for 5G-Advanced whereby multiple spatially
distinct antenna panels are integrated into the UE to leverage gains from
antenna directivity. It also opens the possibility of using UE-side
Rx-beamforming for each panel. In this paper, three different Rx-beamforming
approaches are proposed to improve the serving link, the target link, and the
handover process for an MPUE equipped with three directional panels.
Thereafter, the mobility performance is analyzed in a system-level simulation
for a multi-beam FR2 network. Results have shown that the proposed schemes can
help reduce RLFs by 53\% and HOFs by 90\%. | Subhyal Bin Iqbal, Salman Nadaf, Umur Karabulut, Philipp Schulz, Anna Prado, Gerhard P. Fettweis, Wolfgang Kellerer | 2023-06-26T12:12:37Z | http://arxiv.org/abs/2306.14632v1 | # On the Mobility Analysis of UE-Side Beamforming for Multi-Panel User Equipment in 5G-Advanced
###### Abstract
Frequency range 2 (FR2) has become an integral part of 5G networks to fulfill the ever-increasing demand for data-hungry applications. However, radio signals in FR2 experience high path and diffraction loss, which also pronounces the problem of inter and intra-cell interference. As a result, both the serving and target links are affected, leading to radio link failures (RLFs) and handover failures (HOFs), respectively. To address this issue, multi-panel user equipment (MPUE) is proposed for 5G-Advanced whereby multiple spatially distinct antenna panels are integrated into the UE to leverage gains from antenna directivity. It also opens the possibility of using UE-side Rx-beamforming for each panel. In this paper, three different Rx-beamforming approaches are proposed to improve the serving link, the target link, and the handover process for an MPUE equipped with three directional panels. Thereafter, the mobility performance is analyzed in a system-level simulation for a multi-beam FR2 network. Results have shown that the proposed schemes can help reduce RLFs by 53% and HOFs by 90%.
frequency range 2 (FR2), radio link failures (RLFs), handover failures (HOFs), multi-panel user equipment (MPUE), 5G-Advanced, UE-side Rx-beamforming, mobility performance, system-level simulation.
## I Introduction
Although the reliance of 5G multi-beam networks on the large spectrum availability offered by frequency range 2 (FR2) offers a practical solution to fulfill the demand of data-hungry applications, it introduces additional challenges to the link budget such as higher free-space path loss and penetration loss [1]. This leads to rapid degradation of the signal power in mobile environments, where there are many static and moving obstacles, and makes handovers (HOs) from the serving to target cells more challenging. Furthermore, it also pronounces the problem of both inter and intra-cell interference [2]. Consequently, this affects the serving link that exists between the UE and the serving cell, leading to radio link failures (RLFs). It also affects the target link that exists between the UE and the target cell, leading to handover failures (HOFs). In both these types of mobility failures, the UE would be prevented from performing a HO to another target cell with a better link, leading to an outage in the network.
On the UE architecture side, multi-panel UEs (MPUEs) [3, 4] offer a solution to this problem by integrating multiple spatially distant antenna panels into the UE, thus offering both directional gain and inter-cell interference suppression from neighboring cells. In this paper, we propose a solution based on the MPUE architecture with UE-side receiver (Rx)-beamforming that individually caters to improving both the serving and target links and on making the handover process more reliable by integrating Rx-beamformed measurements into the handover process. The Rx-beamforming is based on 3GPP's beamforming framework [5, Section 6.1] where Rx beams are swept for a given fixed transmit (Tx) beam and a narrow refined beam is selected to give the beam-pair for communication between the UE and the serving cell.
Both MPUEs and UE-side Rx-beamforming are an essential part of 5G-Advanced [6], and this paper offers a detailed system-level mobility performance outlook into the benefits of Rx-beamforming and the different ways in which it can reduce outage by improving the serving and target links and making the handover process more reliable. The authors in [7] consider Rx-beamforming for MPUEs, but the focus is on studying the impact of hand blockages on beam management, whereas inter-cell mobility performance is not considered. In [8], the authors consider Rx-beamforming in a mobile environment with a single base station (BS) but do not consider MPUEs. The novelty of this paper is three-fold. Firstly, to the best of our knowledge, the system-level mobility performance for MPUEs with Rx-beamforming-centric serving link improvement has not been investigated in the literature before. Secondly, there has not been any mobility study for MPUEs where Rx-beamforming has been integrated into the layer 3 (L3) measurements-based handover decision-making process. Thirdly, a novel enhancement proposed in [9] to acquire the narrow refined beam of the target cell before a handover in order to improve the target link is validated and analyzed.
The rest of the paper is organized as follows. In Section II, we provide insights into the inter-cell and intra-cell mobility procedures and the signal-to-noise-plus-interference ratio (SINR) model. In Section III, we explain the UE-side beamforming in terms of the MPUE architecture and the different approaches whereby it is integrated into the SINR and handover models. The simulation scenario used in the performance evaluation is discussed in Section IV. Then in Section V, the mobility key performance indicators (KPIs) are presented and the mobility performance of UE-side beamforming with different approaches is compared with non-beamformed MPUEs. Finally, in Section VI, we conclude the paper and provide an outlook for future enhancements.
## II Network Model
In this section, the inter-cell and intra-cell mobility that form part of the handover and beam management procedures, respectively, are reviewed along with the SINR model.
### _Inter-cell Mobility_
Inter-cell mobility relates to HOs between cells in the network. A pre-requisite for a successful HO from the serving cell to the target cell is that the physical layer reference signal received power (RSRP) measurements undergo filtering to mitigate the effect of channel impairments. The HO model that is considered in this paper is the baseline HO mechanism of _3GPP Release 15_[10, 11]. In the 5G multi-beam network, each UE has the capability to measure the raw RSRP values \(P_{c,b}^{\text{RSRP}}(n)\) (in dBm) at a discrete time instant \(n\) from each Tx beam \(b\in B\) of cell \(c\in C\), using the synchronization signal block (SSB) bursts that are periodically transmitted by the BS. The separation between the time instants is denoted by \(\Delta t\) ms. At the UE end, L1 and L3 filtering are then applied sequentially to the raw RSRPs in order to mitigate the effects of fast-fading and measurement errors and determine the L3 cell quality of the serving and neighboring cells. L1 filtering implementation has not been specified in 3GPP standardization and is UE-specific. Herein, we use a moving average filter for L1 filtering, where the L1 filter output is expressed as
\[P_{c,b}^{\text{L1}}(m)=\frac{1}{N_{\text{L1}}}\sum_{i=0}^{N_{\text{L1}}-1}P_{c,b}^{\text{RSRP}}(m-\omega i),\ m=n\omega \tag{1}\]
where \(\omega\in\mathbb{N}\) is the L1 measurement period (aligned with the SSB periodicity) that is normalized by the time step duration \(\Delta t\), and \(N_{\text{L1}}\) is the number of samples that are averaged in each L1 measurement period. The L1 beam measurements are then used for cell quality derivation, where we first consider the set \(B_{\text{str},c}\) of strongest beams with signal measurements that are above a certain threshold \(P_{\text{thr}}\). \(B_{\text{str},c}\) is, thus, defined as
\[B_{\text{str},c}(m)=\big{\{}b\ |\ P_{c,b}^{\text{L1}}(m)>P_{\text{thr}}\big{\}}. \tag{2}\]
After this, up to \(N_{\text{str}}\) beams representing the subset \(B_{\text{str},c}^{\prime}\) of \(B_{\text{str},c}\) with the strongest \(P_{c,b}^{\text{L1}}(m)\) are taken and averaged to derive the L1 cell quality of cell \(c\) as
\[P_{c}^{\text{L1}}(m)=\frac{1}{|B_{\text{str},c}^{\prime}|}\sum_{b\in B_{\text{ str},c}^{\prime}}P_{c,b}^{\text{L1}}(m). \tag{3}\]
The cardinality of the set is given by \(|\cdot|\) and the set \(B_{\text{str},c}(m)\) is taken as \(B_{\text{str},c}^{\prime}\) if \(|B_{\text{str},c}(m)|<N_{\text{str}}\). If \(B_{\text{str},c}(m)\) is empty, the highest \(P_{c,b}^{\text{L1}}(m)\) is taken as the L1 cell quality \(P_{c}^{\text{L1}}(m)\).
The L1 cell quality is further smoothed by L3 filtering to yield the L3 cell quality. An infinite impulse response (IIR) filter is used for L3 filtering, where the L3 filter output is expressed as
\[P_{c}^{\text{L3}}(m)=\alpha_{c}P_{c}^{\text{L1}}(m)+(1-\alpha_{c})P_{c}^{ \text{L3}}(m-\omega), \tag{4}\]
where \(\alpha_{c}=(\frac{1}{2})^{\frac{k}{2}}\) is the forgetting factor that controls the impact of older L3 cell quality measurements \(P_{c}^{\text{L3}}(m-\omega)\) and \(k\) is the filter coefficient of the IIR filter [10].
Similarly, the L1 RSRP beam measurement \(P_{c,b}^{\text{L1}}\) of each beam \(b\) of cell \(c\) also undergoes L3 filtering, where the output is now the L3 beam measurement \(P_{c,b}^{\text{L3}}\)
\[P_{c,b}^{\text{L3}}(m)=\alpha_{b}P_{c,b}^{\text{L1}}(m)+(1-\alpha_{b})P_{c,b} ^{\text{L3}}(m-\omega), \tag{5}\]
where \(\alpha_{b}\) can be configured independently of \(\alpha_{c}\).
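The measurement chain of Eqs. (1)-(5) can be sketched as follows. This is illustrative Python rather than the simulator's code; averaging is done directly on the dBm values, as written in the equations above, and the function names are ours.

```python
# Sketch of the RSRP filtering chain: L1 moving average (Eq. 1), cell-quality
# derivation from the strongest beams (Eqs. 2-3), and L3 IIR smoothing (Eqs. 4-5).
import numpy as np

def l1_filter(raw_rsrp_history: np.ndarray, n_l1: int) -> float:
    """Moving average of the last n_l1 raw RSRP samples of one beam."""
    return float(np.mean(raw_rsrp_history[-n_l1:]))

def l1_cell_quality(l1_beams: np.ndarray, p_thr: float, n_str: int) -> float:
    """Average of the up-to-n_str strongest beams above the threshold."""
    strong = l1_beams[l1_beams > p_thr]
    if strong.size == 0:
        return float(l1_beams.max())          # fall back to the best beam
    return float(np.sort(strong)[::-1][:n_str].mean())

def l3_filter(l1_value: float, prev_l3: float, k: int) -> float:
    """IIR smoothing with forgetting factor alpha = (1/2)**(k/2), as in the text."""
    alpha = 0.5 ** (k / 2)
    return alpha * l1_value + (1 - alpha) * prev_l3
```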
L3 cell quality \(P_{c}^{\text{L3}}(m)\) is an indicator of the average downlink signal strength for a link that exists between a UE and cell \(c\). It is used by the network to trigger the HO from the serving cell \(c_{0}\) to one of its neighboring cells, termed as the target cell \(c^{\prime}\). For intra-frequency HO decisions, typically the A3 trigger condition is configured for measurement reporting [10]. The UE is triggered to report the L3 cell quality measurement of the target cell \(P_{c^{\prime}}^{\text{L3}}(m)\) and L3 beam measurements \(P_{c^{\prime},b}^{\text{L3}}(m)\) to its serving cell \(c_{0}\) when the A3 trigger condition, i.e.,
\[P_{c_{0}}^{\text{L3}}(m)+o_{c_{0},c^{\prime}}^{\text{A3}}<P_{c^{\prime}}^{ \text{L3}}(m)\text{ for }m_{0}-T_{\text{TTT,A3}}<m<m_{0}, \tag{6}\]
expires at the time instant \(m=m_{0}\) for \(c^{\prime}\neq c_{0}\), where \(o_{c_{0},c^{\prime}}^{\text{A3}}\) is termed as the HO offset between cell \(c_{0}\) and \(c^{\prime}\) and the observation period in (6) is termed as the time-to-trigger \(T_{\text{TTT,A3}}\) (in ms).
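As a sketch (not the simulator's implementation), the A3 entry condition with its time-to-trigger window can be evaluated as follows; the sample-based handling of the TTT window is an assumption.

```python
# Sketch: the A3 event of Eq. (6) fires only if the target cell's L3 quality has
# exceeded the serving cell's (plus offset) for every sample in the TTT window.
def a3_triggered(serving_l3: list, target_l3: list,
                 offset_db: float, ttt_samples: int) -> bool:
    if len(serving_l3) < ttt_samples or len(target_l3) < ttt_samples:
        return False
    window = zip(serving_l3[-ttt_samples:], target_l3[-ttt_samples:])
    return all(s + offset_db < t for s, t in window)
```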
Once the serving cell \(c_{0}\) has received the L3 cell quality measurements, it sends out a HO request to the target cell \(c^{\prime}\), which is typically the strongest cell, along with the L3 beam measurements \(P_{c^{\prime},b}^{\text{L3}}(m)\) of the target cell \(c^{\prime}\). Thereafter, the target cell prepares contention-free random access (CFRA) resources for beams \(b\in B_{\text{prep},c^{\prime}}\), e.g., with the highest signal power based on the reported L3 beam measurements. The target cell replies by acknowledging the HO request and provides a HO command to the serving cell, which includes the information required by the UE to access the target cell. The serving cell forwards the HO command to the UE and once the UE receives this message, it detaches from its serving cell \(c_{0}\) and initiates random access towards the target cell \(c^{\prime}\) through a target beam \(b^{\prime}\) using the CFRA resources.
### _Intra-cell Mobility_
Intra-cell mobility relates to a set of L1 and L2 Tx beam management procedures for the determination and update of serving Tx beam(s) for each UE within a serving cell \(c_{0}\), as defined in _3GPP Release 15_[5]. One of the key components is Tx beam selection, where the UE uses network assistance to select the serving beam \(b_{0}\) that it uses to communicate with \(c_{0}\)[4, Section II-A]. This is based on the periodic reporting of selected L1 beam measurements by the UE to the network that were initially received as raw beam measurements through SSB bursts. The other key component is beam failure detection, where the aim is to detect a failure of the serving beam \(b_{0}\), which is determined by its radio link quality SINR [13]. If a beam failure is detected, the UE is prompted to initiate a beam failure recovery (BFR) procedure where it aims to recover another beam of the serving cell \(c_{0}\). To that effect, the UE attempts random access on the target beam \(b^{\prime}\) that has the highest L1 RSRP beam measurement \(P_{c_{0},b}^{\text{L1}}(m)\) and then waits for the BS to send a random access response indicating that the access was successful. If the first attempt is unsuccessful, the
UE attempts another random access using \(b^{\prime}\). In total, \(N_{\rm BAtt}\) such attempts are made at time intervals of \(T_{\rm BAtt}\). If all such attempts are unsuccessful, an RLF is declared.
### _SINR Model_
The average downlink SINR at the discrete time instant \(m\) for Tx beam \(b\in B\) of cell \(c\in C\) is denoted as \(\gamma_{c,b}(m)\). It is evaluated using the Monte-Carlo approximation given in [2] for the strict fair resource scheduler, where all UEs in the network get precisely the same amount of resources. As will be seen later in Section IV, the SINRs of the serving beam-cell pair \(\gamma_{c_{0},b_{0}}(m)\) and target beam-cell pair \(\gamma_{c^{\prime},b^{\prime}}(m)\) have a key role in the RLF and HOF models, respectively.
## III UE-Side Rx-Beamforming Model
In this section, the UE-side Rx-beamforming for the MPUE architecture is explained along with the three different UE-side beamforming approaches.
### _UE-Side Beamforming with MPUE_
An MPUE in the \(edge\) design with three integrated directional panels is considered [3, 4]. Each directional panel \(d\in D\) is assumed to have four antenna elements in a 1\(\times\)4 configuration with a spacing of 0.5\(\lambda\), where the wavelength is \(\lambda=c/f_{\rm FR2}\) and \(f_{\rm FR2}\) is the FR2 carrier frequency. Each panel is further assumed to have Rx-beamforming refinement capabilities, generating directional Rx beams \(r\in R\), where \(r\in\{1,\ldots,7\}\).
The concept of serving and best panel for the MPUE architecture that is used in the measurement reporting for Tx beam management and HO procedure, respectively, were introduced in one of our earlier works [4]. This paper builds upon that and introduces the allied concept of serving and best Rx beam. In line with 3GPP [14], the signal measurement scheme that we consider is MPUE-A3, where it is assumed that the UE can measure the RSRP values from the serving cell \(c_{0}\) and neighboring cells by activating all three panels simultaneously. 3GPP defines UE-side Rx-beamforming as a follow-up procedure to the Tx serving beam selection procedure at the BS-side discussed in Section II-B. After the selection of a serving beam \(b_{0}\) based on SSB, the UE can sweep through its beams and thereby select a narrow refined beam [5, 12, Chapter 4.2]. Herein, the serving cell now repeats the channel state information reference signal (CSI-RS) associated with the serving beam \(b_{0}\) for some time while the UE is sweeping its Rx beams on its panels. In our implementation, it is assumed that the serving beam on CSI-RS has the same beamwidth as the serving beam based on SSB. Furthermore, it is assumed that the Rx beam sweep for \(b_{0}\) can be completed within the designated SSB period. Once the Rx beam sweep is complete, the UE adjusts and selects the beam with the highest L1 RSRP. The serving panel \(d_{0}\) and serving Rx beam \(r_{0}\) are defined as
\[[d_{0},r_{0}]=\arg\max_{d,r}P_{c_{0},b_{0},d,r}^{\rm L1}(m). \tag{7}\]
The serving panel and Rx beam selection decisions are both fully UE-centric and made independently of the network. As seen in [4], the serving panel \(d_{0}\) serves two key purposes. Firstly, it is used for beam reporting for intra-cell Tx beam management, as discussed in Section II-B. Secondly, the raw beam panel RSRPs measured on \(d_{0}\) are used for calculating the average downlink SINR \(\gamma_{c,b}\) of a link between the UE and beam \(b\) of cell \(c\). As will be discussed later in Section IV, the SINR \(\gamma_{c,b}\) is used for HOF and RLF determination. An illustration of the serving panel and serving Rx beam in the MPUE context can be seen in Fig. 1.
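Eq. (7) is a simple joint argmax over panels and Rx beams, as sketched below (illustrative code; the array layout is an assumption).

```python
# Sketch: select the serving panel d0 and serving Rx beam r0 as the pair with
# the highest L1 RSRP measured on the serving Tx beam (Eq. 7).
import numpy as np

def select_serving_panel_and_rx_beam(rsrp: np.ndarray) -> tuple:
    """rsrp: (num_panels, num_rx_beams) array of L1 beam-panel RSRPs."""
    d0, r0 = np.unravel_index(np.argmax(rsrp), rsrp.shape)
    return int(d0), int(r0)
```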
The best beam \(r_{c}\) is chosen as the beam with the strongest L1 beam panel RSRP \(P_{c,b,d,r}^{\rm L1}(m)\) on the best panel \(d_{c}\) for any beam \(b\) of cell \(c\) in the network and is defined as
\[[d_{c},r_{c}]=\arg\max_{b,d,r}P_{c,b,d,r}^{\rm L1}(m). \tag{8}\]
Herein, it is assumed that the UE can determine the best Rx beam with respect to the strongest L1 beam panel measurement of cell \(c\). The L1 beam panel RSRPs \(P_{c,b,d,r}^{\rm L1}(m)\) of the best panel \(d_{c}\) are denoted as L1 beam RSRPs \(P_{c,b}^{\rm L1}(m)\) and are used for deriving the L3 cell quality measurement \(P_{c}^{\rm L3}(m)\) and L3 beam measurements \(P_{c,b}^{\rm L3}(m)\), as explained in Section II-A. These L3 cell quality measurements are then used for HO decisions. It is pertinent to mention here that the standard does not mandate that Rx-beamformed measurements are used in HO decision-making [11]. Therefore, the HO decision could also be made dependent on the wide Rx beam on the best panel \(d_{c}\) and would not involve beam sweeping. As such, (8) reduces to (8) in [4].
In the current standard [11], UE-side beam refinement is only performed after a HO is complete. In [9], an enhancement is proposed whereby the UE can acquire the narrow refined Rx beam of a target cell \(c^{\prime}\) before a HO is initiated and use it during handover execution. This helps achieve a higher Rx-beamforming gain over the target link and less interference while performing the random access, thus enhancing the mobility performance in terms of HOFs. However, the implementation is far from trivial and the concept has not been validated and analyzed in a system-level simulation. In this paper, we implement this concept in our simulation framework and then validate the findings. The signaling diagram for the proposed scheme is illustrated in Fig. 2. Herein, it is seen that the UE sends an early measurement report to the serving cell \(c_{0}\) before a HO is initiated, which then decides on a potential target cell \(c^{\prime}\). The serving cell then sends a _CSI-RS Repetition Configuration Request_ to the target cell, requesting a repetition of the CSI-RS associated with the Tx beam that has the strongest L1 beam RSRP in the measurement report. This is acknowledged by \(c^{\prime}\). The serving cell then forwards the _Repetition Configuration_ message to the UE, after which the target cell transmits the CSI-RS associated with its strongest Tx beam. Thereafter, the UE sweeps its beams and determines the narrow refined Rx beam on a panel as per the same concept as seen in (7). The UE then sends a measurement report based on the A3 trigger condition in (6). Having received the measurement report, the serving cell prepares the handover. Thereafter, the serving cell \(c_{0}\) sends an _RRC Reconfiguration_ message to the UE, which is followed by a HO execution using the narrow refined Rx beam that has been acquired.

Fig. 1: An illustration of the serving panel and serving Rx beam in the MPUE context, where the MPUE has three integrated panels in the _edge_ design configuration and is assumed parallel to the ground.
### _UE-Side Beamforming Approaches_
The employment of UE-side Rx-beamforming in the MPUE architecture means that different approaches can be defined whereby Rx-beamforming is used in the system model. These approaches are summarized in Table I. The reference approach considers a single antenna element per panel [4] and therefore there is no Rx-beamforming capability. In Rx-beamforming Approach 1, the effect of Rx-beamforming is considered and impacts both the Tx beam management and serving link SINR. Rx-beamforming Approach 2 extends the concept of Rx-beamforming Approach 1 and incorporates the proposed enhancement of refined Rx target beam acquisition into the system model. Lastly, Rx-beamforming Approach 3 extends Rx-beamforming Approach 2 and now also incorporates the Rx-beamformed measurements based on the refined Rx beam into the L3-measurements-based HO decision.
## IV Simulation Scenario and Parameters
In this section, the simulation setup for the 5G network model is discussed along with the simulation parameters that are listed in Table II. The simulations have been performed in our proprietary MATLAB-based system-level simulator.
We consider a 5G network model with an urban-micro (UMi) cellular deployment consisting of a standard hexagonal grid with seven BS sites, each divided into three sectors or cells. The inter-cell distance is 200 meters and the FR2 carrier frequency is 28 GHz. 420 UEs are dropped randomly following a 2D uniform distribution over the network at the beginning of the simulation, moving at constant velocities along straight lines where the direction is selected randomly at the start of the simulation [15, Table 7.8-5]. A wrap-around [16, pp. 140] is considered, i.e., the hexagonal grid with seven BS sites is repeated around the original hexagonal grid shown in Fig. 3 in the form of six replicas. This implies that the cells on network borders are subject to interference from the network's other edge that is comparable to those not on the network borders. All the UEs travel at 60 km/h, which is the usual speed in the non-residential urban areas of cities [17].
In accordance with 3GPP's study outlined in _Release 15_[15], the channel model considered in this article takes into account shadow fading due to large obstacles and assumes a soft line-of-sight (LoS) for all radio links between the UEs and the cells. Soft LoS is defined as a weighted average of the LoS and non-LoS channel components [15, pp. 59-60] and is used for both shadow fading and distance-dependent path loss calculation. We take fast fading into account through the low complexity channel model for multi-beam systems proposed in [18], which integrates the spatial and temporal characteristics of 3GPP's geometry-based stochastic channel model [15] into Jake's channel model. The Tx-side beamforming gain model is based on [18], where a 12-beam grid configuration is considered. Beams \(b\in\{1,\ldots,8\}\) have smaller beamwidth and higher beamforming gain and cover regions further apart from the BS. Tx beams \(b\in\{9,\ldots,12\}\) have larger beamwidth and relatively smaller beamforming gain and cover regions closer to the BS. This can also be seen in Fig. 3, where the eight outer beams are shown in light colors and the four inner beams are shown in dark colors. The number of simultaneously scheduled beams per cell is taken as \(K_{b}=4\).
The antenna element radiation pattern for each of the three MPUE panels is based on [5]. The four antenna elements per panel produce seven Rx beams, \(r\in\{1,\ldots,7\}\), where the elevation angle is \(\theta_{r}=90^{\circ}\) and the azimuth angle is \(\phi_{r}=-45^{\circ}+15(r-1)^{\circ}\). The UE screen, held by the user, is assumed to be parallel to the ground [4].

Fig. 2: Signaling diagram showing the proposed enhancement [9] of acquiring the narrow refined Rx beam before a HO.
As mentioned in Section II-C, the SINR-dependent HOF and RLF models are now discussed below.
_HOF Model:_ A HOF is a failure over the target link that models the failure of a UE to handover from its serving cell \(c_{0}\) to its target cell \(c^{\prime}\). The UE initiates a handover by using the CFRA resources to access the selected beam \(b^{\prime}\) of target cell \(c^{\prime}\). For successful random access, it is a prerequisite that the SINR \(\gamma_{c^{\prime},b^{\prime}}(m)\) of the target cell remains above the threshold \(\gamma_{\mathrm{out}}\) during the RACH attempt, which is made after every 10 ms. A HOF timer \(T_{\mathrm{HOF}}\) = 200 ms is started when the UE initiates the random access towards the target cell \(c^{\prime}\) and sends the RACH preamble. The RACH procedure is repeated until either a successful RACH attempt is achieved or \(T_{\mathrm{HOF}}\) expires. A UE only succeeds in accessing the target cell if the SINR \(\gamma_{c^{\prime},b^{\prime}}(m)\) remains above the threshold \(\gamma_{\mathrm{out}}\) and as such a successful HO is declared. A HOF is declared if the timer \(T_{\mathrm{HOF}}\) expires and the UE fails to access the target cell, i.e., \(\gamma_{c^{\prime},b^{\prime}}(m)<\gamma_{\mathrm{out}}\) for the entire duration that the HOF timer runs. The UE then performs connection re-establishment to a new cell (possibly the previous serving cell) and this procedure contributes to additional signaling overhead and signaling latency [10].
_RLF Model:_ An RLF is a failure over the serving link that models the failure of a UE while it is in the serving cell \(c_{0}\). The UE further averages the average downlink SINR measurements of the serving cell \(\gamma_{c_{0},b_{0}}\) to yield the radio link monitoring (RLM) SINR metric, which it constantly keeps track of. An RLF timer \(T_{\mathrm{RLF}}\) = 1000 ms is started when the RLM SINR \(\bar{\gamma}_{\mathrm{RLM}}\) of the serving cell \(c_{0}\) drops below \(\gamma_{\mathrm{out}}=\)\(-\)8 dB, and if the timer \(T_{\mathrm{RLF}}\) expires, an RLF is declared. The UE then initiates connection re-establishment. While the timer \(T_{\mathrm{RLF}}\) runs, the UE may recover before declaring an RLF if the SINR \(\bar{\gamma}_{\mathrm{RLM}}\) exceeds a second SINR threshold defined as \(\gamma_{\mathrm{in}}\) = \(-\)6 dB, where \(\gamma_{\mathrm{in}}>\gamma_{\mathrm{out}}\)[10]. As discussed in Section II-B, if the BFR process fails the UE also declares an RLF and this is also taken into account in the RLF model.
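A minimal sketch of the RLF logic is given below; the thresholds and timer value are those stated above, while the 10 ms evaluation period of the RLM SINR and the trace-based interface are assumptions, not the simulator code.

```python
# Sketch: count RLFs from a trace of RLM SINR samples. T_RLF starts when the
# SINR drops below Q_out and is cancelled if it recovers above Q_in in time.
def count_rlfs(rlm_sinr_db: list, dt_ms: float = 10.0,
               q_out_db: float = -8.0, q_in_db: float = -6.0,
               t_rlf_ms: float = 1000.0) -> int:
    rlfs, timer_ms = 0, None
    for sinr in rlm_sinr_db:
        if timer_ms is None:
            if sinr < q_out_db:
                timer_ms = 0.0                 # start T_RLF
        else:
            timer_ms += dt_ms
            if sinr > q_in_db:
                timer_ms = None                # radio link recovered
            elif timer_ms >= t_rlf_ms:
                rlfs += 1
                timer_ms = None                # connection re-establishment
    return rlfs
```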
## V Performance Evaluation
In this section, the mobility performance of the reference approach is compared with the three different Rx-beamforming approaches. The mobility KPIs used for evaluation are explained below.
### _KPIs_
Fig. 3: Simulation scenario consisting of seven hexagonal sites, where each site is serving three cells with 120\({}^{\circ}\) coverage.

* _RLFs:_ Sum of the total number of RLFs in the network.
* _HOFs:_ Sum of the total number of HOFs in the network.
* _Successful HOs:_ Sum of the total number of successful HOs from the serving to the target cells in the network.
* _Fast HOs:_ Sum of the total number of ping-pongs and short-stays in the network. A ping-pong is characterized as a successful HO followed by a HO back to the original cell within a very short time \(T_{FH}\)[19], e.g., 1 second. It is assumed that both HOs could have been potentially avoided. A short-stay is characterized as a HO from one cell to another and then to a third one within \(T_{FH}\). Here it is assumed that a direct HO from the first cell to the third one would have served the purpose. Although fast HOs are part of successful HOs, they are accounted for as a detrimental mobility KPI that adds unnecessary signaling overhead to the network.
RLFs, HOFs, successful HOs, and fast HOs are normalized to the number of UEs \(N_{\mathrm{UE}}\) in the network and to the simulated time in minutes, and are thus expressed per UE/min.
* _Total Outage:_ Outage is denoted as a time period when a UE is not able to receive data from the network. This could be due to a number of reasons. When the SINR of the serving cell \(\gamma_{c_{0},b_{0}}\) falls below \(\gamma_{\mathrm{out}}\), it is assumed that the UE is not able to communicate with the network and is, thus, in outage. This is characterized as _outage due to SINR degradation_. This outage type always precedes an RLF, but it could also be that the SINR recovers before the RLF timer \(T_{\mathrm{RLF}}\) expires. Besides, if the HOF timer \(T_{\mathrm{HOF}}\) expires due to a HOF or the RLF timer \(T_{\mathrm{RLF}}\) expires due to an RLF, the UE initiates connection re-establishment and this is also accounted for as outage. A successful HO, although a necessary mobility procedure, also contributes to outage since the UE cannot receive any data while it is performing random access to the target cell \(c^{\prime}\). This outage is modeled as relatively smaller (55 ms) than the outage due to connection re-establishment (180 ms) [19]. The total outage in the network is denoted in terms of a percentage as \[\text{Total Outage }(\%)=\frac{\sum_{u}\text{Outage duration of UE }u}{N_{\mathrm{UE}}\ \cdot\ \text{Simulated time}}\cdot 100.\] (9)
### _Simulation Results_
Fig. 4 shows a mobility performance comparison between the reference non-beamformed approach and the three different Rx-beamforming approaches in terms of RLFs, HOFs, fast HOs, and successful HOs. It is seen in Fig. 4a that for all three Rx-beamforming approaches, there is an approximate 53% relative reduction in RLFs when compared with the reference approach. This significant reduction stems from the fact that the serving link (and hence the serving link SINR \(\gamma_{c_{0},b_{0}}\)) sees a notable improvement due to the Rx-beamformed measurements. This is illustrated in Fig. 5, where the CDF of the serving link SINR is shown. It is seen that at the 50th percentile, the Rx-beamforming approaches (in blue, green, and cyan, and overlapping) have an SINR of 6.8 dB, whereas the reference non-beamformed approach (shown in red) has an SINR of 5.4 dB. Of particular interest is the low SINR regime in the vicinity of \(\gamma_{\mathrm{out}}\) = \(-\)8 dB, since this is where mobility failures take place, as explained by the HOF and RLF models in Section IV. At the 2nd percentile, the Rx-beamforming approaches have an SINR of \(-\)5.4 dB, whereas the reference non-beamformed approach has an SINR of \(-\)10 dB. When the HOFs are analyzed in Fig. 4b, it is seen that for Rx-beamforming Approach 1 the HOFs increase by 32.6% (from 0.043 HOFs/UE/min to 0.057 HOFs/UE/min) when compared with the reference approach. This performance degradation occurs because some RLFs turn into HOFs, as the UE makes more HO attempts as a result of the serving link SINR gain. With Rx-beamforming Approaches 2 and 3, the refined beam of the target link is acquired before the handover and the associated target link SINR gain significantly reduces HOFs by approximately 90%.
An analysis of the fast HOs in Fig. 4c reveals that as a consequence of the reduction in RLFs and HOFs, the UE attempts more HOs and a large number of them are fast HOs. For Approaches 1 and 2, the fast HOs increase relatively by 20% when compared to the reference approach. With Approach 3, it is now seen that compared to the other two beamforming approaches, fast HOs can be curtailed by about 8%. This is because the incorporation of the Rx-beamformed measurements into the L3 HO decision process makes it more reliable and therefore unnecessary HOs can be avoided. The same trend is also seen for successful HOs in Fig. 4d, where it is now seen that a reduction in RLFs and HOFs results in more successful HOs in general.

Fig. 4: A comparison of the mobility performance between the reference and the three different Rx-beamforming approaches in terms of the total number of (a) RLFs, (b) HOFs, (c) fast HOs, and (d) successful HOs.

Fig. 5: A comparison between the reference and the three different Rx-beamforming approaches in terms of the serving link SINR.
Next, the outage performance is analyzed in Fig. 6. It can be seen that for all the Rx-beamforming approaches, the outage due to SINR degradation (shown in blue) reduces by about 42% in relative terms when compared with the reference approach. This is because the serving link SINR \(\gamma_{c_{0},b_{0}}\) improves due to the beamforming gain, as seen in Fig. 5, and therefore falls below the SINR threshold \(\gamma_{\mathrm{out}}\) less often. From our simulative investigations, it is known that this is the second most common type of outage after outage due to successful handovers, and therefore this is a significant improvement. It is also observed that the total outage (shown in red) reduces by 26% in relative terms (from 4.60% to 3.42%) when Rx-beamforming Approach 1 is compared with the reference approach. This outage reduction stems from the reduction in outage due to SINR degradation and the outage reduction due to re-establishment as a result of fewer RLFs, as seen in Fig. 4a. It can also be observed that even though the outage due to successful HOs increases because of the larger number of successful HOs seen in Fig. 4d, it is offset by the outage reduction due to re-establishment. This is because, as mentioned in Section V-A, this outage is modeled as relatively smaller. With Rx-beamforming Approach 2, the total outage is comparable because the outage reduction due to the decrease in HOFs seen in Fig. 4b is offset by the increased number of fast HOs (and therefore successful HOs) seen in Fig. 4c and Fig. 4d. There is a tradeoff between the HOFs and fast HOs in terms of their respective outage contributions. In mobility studies, it is known that minimizing the number of HOFs has a higher priority over fast HOs [19]. Lastly, it can be seen that Rx-beamforming Approach 3 has the lowest total outage value of 3.36%, due to the reduction in fast HOs seen in Fig. 4c, which results in less outage due to successful handovers.
## VI Conclusion
In this paper, the performance of Rx-beamforming for an MPUE architecture is analyzed in a multi-beam FR2 network. Both Rx-beamforming and MPUE are an integral part of 5G-Advanced, and this paper is a novel attempt to understand how Rx-beamforming can help improve the serving link, the target link, and the L3 HO process. For this purpose, three different approaches are proposed. These approaches individually target the most common types of mobility problems, i.e., RLFs, HOFs, and fast HOs. For a mobility setting based in non-residential areas of cities, where the UE speed is assumed to be 60 km/h, it is seen that compared to a non-beamformed reference setting, RLFs reduce by 53% and HOFs reduce by 90%. It is also seen that fast HOs increase due to a reduction in mobility failures, and the use of Rx-beamformed measurements in the HO process can curtail them by 8%. As a result, with Approach 3, where all three mobility problems are targeted, it is seen that the outage reduces by 26% when compared with the reference approach. Based on these findings, future studies may be carried out to investigate the mobility performance of Rx-beamforming with the UE hand blockage effect [7, 20].
|
2303.11194 | Polynomial stability of the homology of Hurwitz spaces | For a finite group $G$ and a conjugation-invariant subset $Q\subseteq G$, we
consider the Hurwitz space $\mathrm{Hur}_n(Q)$ parametrising branched covers of
the plane with $n$ branch points, monodromies in $G$ and local monodromies in
$Q$. For $i\ge0$ we prove that $\bigoplus_n H_i(\mathrm{Hur}_n(Q))$ is a
finitely generated module over the ring $\bigoplus_n H_0(\mathrm{Hur}_n(Q))$.
As a consequence, we obtain polynomial stability of homology of Hurwitz spaces:
taking homology coefficients in a field, the dimension of
$H_i(\mathrm{Hur}_n(Q))$ agrees for $n$ large enough with a quasi-polynomial in
$n$, whose degree is easily bounded in terms of $G$ and $Q$. Under suitable
hypotheses on $G$ and $Q$, we prove classical homological stability for certain
sequences of components of Hurwitz spaces. Our results generalise previous work
of Ellenberg-Venkatesh-Westerland, and rely on techniques introduced by them
and by Hatcher-Wahl. | Andrea Bianchi, Jeremy Miller | 2023-03-20T15:23:37Z | http://arxiv.org/abs/2303.11194v2 | # Polynomial stability of the homology of Hurwitz spaces
###### Abstract.
For a finite group \(G\) and a conjugation-invariant subset \(Q\subseteq G\), we consider the Hurwitz space \(\operatorname{Hur}_{n}(Q)\) parametrising branched covers of the plane with \(n\) branch points, monodromies in \(G\) and local monodromies in \(Q\). For \(i\geq 0\) we prove that \(\bigoplus_{n}H_{i}(\operatorname{Hur}_{n}(Q))\) is a finitely generated module over the ring \(\bigoplus_{n}H_{0}(\operatorname{Hur}_{n}(Q))\). As a consequence, we obtain polynomial stability of homology of Hurwitz spaces: taking homology coefficients in a field, the dimension of \(H_{i}(\operatorname{Hur}_{n}(Q))\) agrees for \(n\) large enough with a quasi-polynomial in \(n\), whose degree is easily bounded in terms of \(G\) and \(Q\). Under suitable hypotheses on \(G\) and \(Q\), we prove classical homological stability for certain sequences of components of Hurwitz spaces. Our results generalise previous work of Ellenberg-Venkatesh-Westerland, and rely on techniques introduced by them and by Hatcher-Wahl.
2020 Mathematics Subject Classification: 20F36, 55R80, 55T05, 55U10, 55U15
## 1. Introduction
Let \(G\) be a finite group and let \(Q\subseteq G\) be a conjugation-invariant subset. For \(n\geq 0\), the Hurwitz space \(\operatorname{Hur}_{n}(Q)\) considered in this article is a certain homotopy quotient of the set \(Q^{n}\) of \(n\)-tuples of elements in \(Q\) by an action of the braid group \(\mathfrak{Br}_{n}\): see Definition 2.1. The homotopy type of \(\operatorname{Hur}_{n}(Q)\) coincides with that of the moduli space of certain "decorated" branched covers of the complex plane \(\mathbb{C}\); see Subsection 1.2 for more details, and for the link between the Hurwitz spaces of this article and the Hurwitz spaces usually considered in algebraic geometry.
We are broadly interested in stability properties of homology of components of Hurwitz spaces. Suitable stability results for \(H_{i}(\operatorname{Hur}_{n}(Q))\) can find applications in enumerative number theory: the main instances of this are the work of Ellenberg-Venkatesh-Westerland on the Cohen-Lenstra heuristics [1] and the work of Ellenberg-Tran-Westerland on the Malle conjecture [1]. The results of this article are an attempt to generalise the topological part of [1], as we do not require \(Q\subseteq G\) to be a single conjugacy class with the "non-splitting property", and we consider homology in any Noetherian ring \(R\); yet we have not been able to find a counterpart to the nice linear stability ranges from [1], making our results of a more qualitative than quantitative nature. In Subsection 1.2 we briefly explain why this prevents applications of our results in enumerative number theory.
### Statement of results
The disjoint union \(\operatorname{Hur}(Q)=\coprod_{n\geq 0}\operatorname{Hur}_{n}(Q)\) has a natural structure of topological monoid,1 which is recalled in Subsection 2.3. As a consequence, \(H_{*}(\operatorname{Hur}(Q))=\bigoplus_{i\geq 0,n\geq 0}H_{i}(\operatorname{Hur}_{n}(Q))\) admits a natural structure of bigraded ring; in particular \(A:=H_{0}(\operatorname{Hur}(Q))\) is a ring, and for every \(i\geq 0\) the group \(H_{i}(\operatorname{Hur}(Q)):=\bigoplus_{n\geq 0}H_{i}(\operatorname{Hur}_{n}(Q))\) is a left \(A\)-module. Our first main result is the following.

**Theorem A**.: _Let \(R\) be a Noetherian commutative ring and let \(i\geq 0\). Then \(H_{i}(\operatorname{Hur}(Q);R)\) is finitely generated as a left module over \(A\otimes R=H_{0}(\operatorname{Hur}(Q);R)\)._

A quantitative consequence of Theorem A is the following (see Definition 2.5 for the constant \(k(G,Q)\geq 1\) and Subsection 2.6 for the notion of quasi-polynomial).

**Corollary A'**.: _Let \(\mathbb{F}\) be a field and let \(i\geq 0\). Let \(\ell\geq 1\) be such that \(a^{\ell}=\mathbb{1}\in G\) for all \(a\in Q\). Then there is a quasi-polynomial \(p_{i}^{\mathbb{F}}\) of degree at most \(k(G,Q)-1\) and period dividing \(\ell\) such that, for \(n\) large enough, \(\dim_{\mathbb{F}}H_{i}(\operatorname{Hur}_{n}(Q);\mathbb{F})=(p_{i}^{\mathbb{F}})_{[n]_{\ell}}(n)\)._

For \(\omega\in G\) we denote by \(\operatorname{Hur}(Q)_{\omega}\subseteq\operatorname{Hur}(Q)\) the union of the connected components with total monodromy \(\omega\), and similarly for \(\operatorname{Hur}_{n}(Q)_{\omega}\) (see Definition 2.8).
\(H_{0}(\operatorname{Hur}(Q))\) contains a subring \(A_{\mathbb{1}}=H_{0}(\operatorname{Hur}(Q)_{\mathbb{1}})\), and for \(\omega\in G\), multiplication in \(H_{*}(\operatorname{Hur}(Q))\) makes the submodule \(H_{i}(\operatorname{Hur}(Q)_{\omega})\subset H_{i}(\operatorname{Hur}(Q))\) into a left \(A_{\mathbb{1}}\)-module, for all \(i\geq 0\). Our second main result is the following.

**Theorem B**.: _Let \(R\) be a Noetherian commutative ring. Let \(i\geq 0\) and let \(\omega\in G\); then \(H_{i}(\operatorname{Hur}(Q)_{\omega};R)\) is finitely generated over \(A_{\mathbb{1}}\otimes R=H_{0}(\operatorname{Hur}(Q)_{\mathbb{1}};R)\)._
A quantitative consequence of the previous theorem is the following (see Definition 2.6 for the constant \(k(G,Q,\omega)\geq 1\)).
**Corollary B'**.: _Let \(\mathbb{F}\) be a field and let \(i\geq 0\). Let \(\ell\geq 1\) be such that \(a^{\ell}=\mathbb{1}\in G\) for all \(a\in Q\). Then there is a quasi-polynomial \(p_{i,\omega}^{\mathbb{F}}\) of degree at most \(k(G,Q,\omega)-1\) and period dividing \(\ell\) such that, for \(n\) large enough, \(\dim_{\mathbb{F}}H_{i}(\operatorname{Hur}_{n}(Q)_{\omega};\mathbb{F})=(p_{i,\omega}^{\mathbb{F}})_{[n]_{\ell}}(n)\)._
We compare Theorems A and B through the following example: let \(G=\mathfrak{S}_{d}\) be the symmetric group on \(d\) elements, and let \(Q\subset G\) denote the conjugacy class of transpositions. Then \(k(G,Q)=\lfloor d/2\rfloor\), and for \(d\geq 4\) Theorem A only predicts quasi-polynomial growth in \(n\) of the Betti numbers \(\dim_{\mathbb{F}}H_{i}(\operatorname{Hur}_{n}(Q);\mathbb{F})\); in particular these Betti numbers can attain arbitrarily high values for \(n\to\infty\). However, if \(\omega=(1,\dots,d)\) is the standard _long cycle_, then \(k(G,Q,\omega)=1\) and Theorem B predicts that the Betti number \(\dim_{\mathbb{F}}H_{i}(\operatorname{Hur}_{n}(Q)_{\omega};\mathbb{F})\) eventually coincides with a periodic function of \(n\).
Our third main result is a classical homological stability result for certain sequences of components of Hurwitz spaces.
**Theorem C**.: _Let \(R\) be a commutative ring. Assume that \(Q\subset G\) is a single conjugacy class and that there is an element \(\omega\in G\) which is large with respect to \(Q\) (see Definition 2.7). Let \(i\geq 0\), and let \(\ell\geq 1\) be such that \(a^{\ell}=\mathbb{1}\in G\) for all \(a\in Q\). For \(a\in Q\) denote by \(\operatorname{lst}(a)\) the map \(\operatorname{Hur}(Q)\to\operatorname{Hur}(Q)\) induced by left multiplication by a point in \(\operatorname{Hur}_{1}(Q)_{a}\); see also Definition 4.6.2 Then for \(n\) sufficiently large compared to \(i\) the stabilisation map_
Footnote 2: In our model for Hurwitz spaces, \(\operatorname{Hur}_{1}(Q)_{a}\) consists of precisely one point.
\[\operatorname{lst}(a)_{*}^{\ell}\colon H_{i}(\operatorname{Hur}_{n}(Q)_{ \omega};R)\to H_{i}(\operatorname{Hur}_{n+\ell}(Q)_{\omega};R)\]
_is independent of \(a\in Q\) and is an isomorphism._
Theorem C applies for instance to the following settings:
* \(G=\mathfrak{S}_{p}\) for a prime number \(p\), \(Q\) is the conjugacy class of transpositions, and \(\omega=(1,\dots,p)\) is the long cycle;
* \(G=\mathbb{Z}/d\rtimes\mathbb{Z}/2\) for \(d\) odd, \(Q\) is the conjugacy class of involutions, and \(\omega\) is a generator of \(\mathbb{Z}/d\).
We remark that Tietz [16, Theorem 2] also obtains a homology stability result (with an explicit linear stable range) for the integral homology of certain components of Hurwitz spaces, also generalising [10]; even though our notion of "large element of a group with respect to a conjugacy class" seems comparable with his notion of "collection of conjugacy classes that invariably generate a group", we consider our result rather disjoint from his result.
Theorem C can be applied together with the group-completion theorem in order to compute some stable homology groups, giving a partial, positive answer to [10, Conjecture 1.5].
**Notation 1.1**.: For \(\alpha\in\pi_{0}(\operatorname{Hur}(Q))\), we denote by \(\operatorname{Hur}_{\alpha}(Q)\subseteq\operatorname{Hur}(Q)\) the connected component \(\alpha\).
**Corollary C'**.: _Let \(R\) be a commutative ring and let \(i\geq 0\). Let \(G,Q,\omega\) be as in Theorem C, and let \(a\in Q\). Fix \(\alpha\in\pi_{0}(\operatorname{Hur}(Q)_{\omega})\subseteq\pi_{0}(\operatorname{Hur}(Q))\). Then for \(n\) sufficiently large compared with \(i\) the group-completion map induces a homology isomorphism_
\[H_{i}(\operatorname{Hur}_{\hat{a}^{\ell n}\alpha};R)\cong H_{i}(\Omega_{0}B \operatorname{Hur}(Q);R),\]
_where \(\Omega_{0}B\operatorname{Hur}(Q)\) denotes the zero component of the group completion of \(\operatorname{Hur}(Q)\)._
The rational homology of a component \(\Omega_{0}B\operatorname{Hur}(Q)\) of \(\Omega B\operatorname{Hur}(Q)\) is \(\mathbb{Q}\) in degrees \(0\) and \(1\), and \(0\) in all other degrees. This statement is first implicitly claimed in [11, Conjecture 1.5], and a strategy of proof is contained in [11, Subsection 5.6], where it is shown that the statement follows from [11, Theorem 2.8.1]. However, we believe that the proof of [11, Theorem 2.8.1] contains a gap, which probably can be fixed. We refer to [12, Corollary 5.4] and [1, SS6.5] for alternative proofs of the statement.
### Connections to algebraic geometry and enumerative number theory
The space \(\operatorname{Hur}_{n}(Q)\) considered in this article is homotopy equivalent to a certain moduli space of regular branched covers of the complex plane \(p\colon\mathcal{F}\to\mathbb{C}\) endowed with the following data:
1. an identification of the deck transformation group \(\operatorname{Aut}(\mathcal{F},p)\) with \(G\);
2. a trivialisation over a suitable lower half-plane \(\mathbb{H}\subset\mathbb{C}\) of the restricted \(G\)-principal bundle \(p^{-1}(\mathbb{H})\cong G\times\mathbb{H}\), up to replacing \(\mathbb{H}\) by smaller and smaller half-planes,
such that there are precisely \(n\) branch points, and such that local monodromies around the branch points have values in \(Q\). A branched cover is "regular" if its group of deck transformations acts transitively on each fibre; we use the term "\(G\)-cover" for a regular branched cover endowed with a decoration as (1) above.
Romagny-Wewers [12] describe how to construct, for \(n\geq 0\) and a finite group \(G\), a scheme \(\mathcal{H}_{n,G}\) of finite type over \(\operatorname{Spec}(\mathbb{Z})\) whose associated analytic space \(\mathcal{H}_{n,G}(\mathbb{C})\) can be identified with the moduli space of branched \(G\)-covers as above, but with \(\mathbb{P}^{1}_{\mathbb{C}}\) as target, without the decoration (2), and without the restriction that local monodromies be in \(Q\). The construction already appears in Wewers' PhD thesis [12], and it is preceded by work of Fulton [13] and of Fried-Völklein [10]: Fulton constructs for \(d\geq 1\) and \(n\geq 0\) a scheme \(\mathcal{H}_{d,n}\) of finite type over \(\operatorname{Spec}(\mathbb{Z})\) whose complex points are isomorphism classes of degree-\(d\) simple branched covers of \(\mathbb{P}^{1}_{\mathbb{C}}\) with \(n\) branch points, and Fried-Völklein construct the basechange \(\mathcal{H}_{n,G}\times_{\operatorname{Spec}(\mathbb{Z})}\operatorname{Spec}(\mathbb{Q})\), which is of finite type over \(\operatorname{Spec}(\mathbb{Q})\).
Ellenberg-Venkatesh-Westerland [11] show that at least when \(Q\subset G\) is a _rational_ conjugation-invariant subset (that is, for all \(a\in Q\) and all \(n\geq 1\) coprime with the order of \(a\) in \(G\), one has \(a^{n}\in Q\)), then Wewers' Hurwitz scheme \(\mathcal{H}_{n,G}\) contains a subscheme \(\mathcal{H}_{n,G}^{Q}\) whose complex points are isomorphism classes of branched \(G\)-covers of \(\mathbb{P}^{1}_{\mathbb{C}}\) with local monodromies lying in \(Q\). Similarly, assuming again that \(Q\) is rational, they show the existence of a subscheme \(\operatorname{Hin}_{G,n}^{Q}\) of \(\mathcal{H}_{n+1,G}\) whose complex points are isomorphism classes of branched \(G\)-covers of \(\mathbb{C}\) with local monodromies in \(Q\). The only missing "decoration" is (2) in the list above, namely a trivialisation of the \(G\)-cover over some lower half-plane; it follows that \(\operatorname{Hin}_{G,n}^{Q}(\mathbb{C})\)
should be thought of as corresponding to the quotient \(\operatorname{Hur}_{n}(Q)/G\), where \(G\) acts by global conjugation on the set \(Q^{n}\), the action is compatible with that of \(\mathfrak{Br}_{n}\), and hence we obtain an induced action of \(G\) on \(\operatorname{Hur}_{n}(Q)\). It would be interesting to know, without any restriction on \(Q\subseteq G\), whether there exists a scheme \(\tilde{\operatorname{Hin}}_{G,n}^{Q}\), possibly of finite type over \(\operatorname{Spec}(\mathbb{Z})\) (and ideally, an affine scheme) whose associated analytic space \(\tilde{\operatorname{Hin}}_{G,n}^{Q}(\mathbb{C})\) is homotopy equivalent to \(\operatorname{Hur}_{n}(Q)\).
In [10], rational homological stability for \(\operatorname{Hur}_{n}(Q)\) is proved under the non-splitting hypothesis, and an _explicit linear stability range_ is provided. This leads to an exponential upper bound on the size of the rational cohomology of \(\operatorname{Hur}_{n}(Q)\) that does not depend on \(n\): there is a constant \(C>0\) such that in each degree \(i\geq 0\) one has \(\dim_{\mathbb{Q}}H^{i}(\operatorname{Hur}_{n}(Q);\mathbb{Q})<C^{i}\). This in turn leads to a similar upper bound on the size of \(H^{i}(\operatorname{Hur}_{n}(Q)/G;\mathbb{Q})\cong H^{i}(\operatorname{Hin}_{G,n}^{Q}(\mathbb{C});\mathbb{Q})\), when \(Q\) is a rational conjugacy class of \(G\). For a finite field \(\mathbb{F}_{q}\) with algebraic closure \(\bar{\mathbb{F}}_{q}\), the latter upper bound can be used to control the size of the étale cohomology of the basechange \(\operatorname{Hin}_{G,n}^{Q}\times_{\operatorname{Spec}(\mathbb{Z})}\operatorname{Spec}(\bar{\mathbb{F}}_{q})\). One can then use the Grothendieck-Lefschetz trace formula to express \(|\operatorname{Hin}_{G,n}^{Q}(\mathbb{F}_{q})|\) as the alternating sum of the traces of the \(\mathbb{F}_{q}\)-Frobenius acting on the étale cohomology of \(\operatorname{Hin}_{G,n}^{Q}\times_{\operatorname{Spec}(\mathbb{Z})}\operatorname{Spec}(\bar{\mathbb{F}}_{q})\). The above bounds on the dimension of the étale cohomology groups, together with the Deligne bounds on the size of the eigenvalues of the Frobenius, give an estimate of \(|\operatorname{Hin}_{G,n}^{Q}(\mathbb{F}_{q})|\), and in particular describe, at least for \(q\) large, how \(|\operatorname{Hin}_{G,n}^{Q}(\mathbb{F}_{q})|\) grows for \(n\to\infty\).
The lack of an explicit linear stability range in the results of this article seems to exclude the possibility of employing our results in a framework similar to that of [10]. However, if an explicit linear stable range could be established, these kinds of polynomial stability results would plausibly have implications for point-counting problems.
### Acknowledgments
The first author would like to thank Beranger Seguin for a useful conversation on the algebraic-geometric theory of Hurwitz spaces, and Oscar Randal-Williams for a conversation about stability phenomena in the presence of several stabilisation maps. Both authors would like to thank Jordan Ellenberg and Craig Westerland for useful comments on a first draft of the article.
Andrea Bianchi was supported by the Danish National Research Foundation through the Centre for Geometry and Topology (DNRF151) and the European Research Council under the European Union Horizon 2020 research and innovation programme (grant agreement No. 772960).
Jeremy Miller was supported in part by NSF grant DMS-2202943 and a Simons Foundation Collaboration Grant for Mathematicians.
## 2. Preliminaries
Throughout the article, \(G\) denotes a finite group and \(Q\subseteq G\) a conjugation-invariant subset. We denote by \(q_{1},\dots,q_{m}\) the elements of \(Q\), where \(m=|Q|\), and we let \(\ell\geq 1\) be a positive integer such that \(a^{\ell}=\mathbb{1}\in G\) for each \(a\in Q\): for instance, we could take \(\ell\) to be the least common multiple of the multiplicative orders in the group \(G\) of the elements of \(Q\).
### Hurwitz spaces as homotopy quotients
For \(n\geq 0\), we denote by \(\mathfrak{Br}_{n}\) the Artin braid group on \(n\) strands, with generators \(\sigma_{1},\dots,\sigma_{n-1}\) satisfying the usual
braid and commuting relations. The group \(\mathfrak{Br}_{n}\) acts on the set \(Q^{n}\) of \(n\)-tuples of elements of \(Q\) as follows: for \(1\leq i\leq n-1\), the standard generator \(\sigma_{i}\in\mathfrak{Br}_{n}\) sends
\[\sigma_{i}\colon(a_{1},\dots,a_{n})\mapsto(a_{1},\dots,a_{i-1},a_{i+1},a_{i}^{ a_{i+1}},a_{i+2},\dots,a_{n}),\]
where, for \(a,b\in Q\), we denote \(a^{b}=b^{-1}ab\in Q\). The fact that the action is well-defined is a consequence of the fact that \(Q\), with the operation of conjugation restricted from \(G\), is a quandle. The following is the definition of Hurwitz spaces that we are going to use throughout the article.
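To make this action concrete, here is a minimal illustrative sketch (ours, not part of the paper), taking \(G=\mathfrak{S}_{3}\) with permutations encoded as tuples and \(Q\) the set of transpositions; the helper names (`compose`, `conj`, `sigma`) are our own. It checks on all tuples that the braid and commutation relations are satisfied, i.e. that the Hurwitz action is well defined.

```python
from itertools import product

# G = S_3, permutations as tuples p with p[x] the image of x; Q = transpositions.
def compose(p, q):            # (p*q)(x) = p(q(x))
    return tuple(p[q[x]] for x in range(len(p)))

def inverse(p):
    out = [0] * len(p)
    for x, px in enumerate(p):
        out[px] = x
    return tuple(out)

def conj(a, b):               # a^b = b^{-1} a b
    return compose(inverse(b), compose(a, b))

Q = [(1, 0, 2), (2, 1, 0), (0, 2, 1)]   # the transpositions (12), (13), (23)

def sigma(i, tup):
    """Hurwitz action of the braid generator sigma_i (1-indexed) on Q^n."""
    a = list(tup)
    a[i - 1], a[i] = a[i], conj(a[i - 1], a[i])
    return tuple(a)

# The braid and commutation relations hold, because Q is a quandle:
n = 4
for tup in product(Q, repeat=n):
    for i in range(1, n - 1):
        assert sigma(i, sigma(i + 1, sigma(i, tup))) == \
               sigma(i + 1, sigma(i, sigma(i + 1, tup)))
    assert sigma(1, sigma(3, tup)) == sigma(3, sigma(1, tup))
```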
**Definition 2.1**.: For \(n\geq 0\), we define the Hurwitz space \(\operatorname{Hur}_{n}(Q)\) as the homotopy quotient
\[\operatorname{Hur}_{n}(Q)=Q^{n}\mathbin{/\!\!/}\;\mathfrak{Br}_{n}.\]
A priori, a homotopy quotient is only defined as a homotopy type; in this article we realise homotopy quotients by the standard bar construction; for example \(\operatorname{Hur}_{n}(Q)\), as a concrete topological space, is the geometric realisation of the simplicial set \(B_{\bullet}(*,\mathfrak{Br}_{n},Q^{n})\).
### Components of Hurwitz spaces
By definition, \(\operatorname{Hur}_{n}(Q)\) is the homotopy quotient of the set \(Q^{n}\) by the action of the discrete group \(\mathfrak{Br}_{n}\): it follows that \(\pi_{0}(\operatorname{Hur}_{n}(Q))\) is in natural bijection with the set of orbits of the action of \(\mathfrak{Br}_{n}\) on the set \(Q^{n}\).
**Definition 2.2**.: For \(\underline{a}=(a_{1},\dots,a_{n})\in Q^{n}\) we denote by \(\operatorname{Hur}_{n}(Q,\underline{a})\subset\operatorname{Hur}_{n}(Q)\) the component corresponding to the orbit of \(\underline{a}\) under the action of \(\mathfrak{Br}_{n}\).
The space \(\operatorname{Hur}_{n}(Q,\underline{a})\) is aspherical. Let us denote by \(\mathfrak{Br}_{n}\cdot\underline{a}\subset Q^{n}\) the orbit of \(\underline{a}\) under the braid group action: then \(\operatorname{Hur}_{n}(Q,\underline{a})\) is canonically homeomorphic to \(\left(\mathfrak{Br}_{n}\cdot\underline{a}\right)/\!\!/\,\mathfrak{Br}_{n}=|B_{\bullet}(*,\mathfrak{Br}_{n},\mathfrak{Br}_{n}\cdot\underline{a})|\), and the fundamental group of \(\left(\mathfrak{Br}_{n}\cdot\underline{a}\right)/\!\!/\,\mathfrak{Br}_{n}\) based at the \(0\)-simplex \(\underline{a}\) is canonically isomorphic to the subgroup of \(\mathfrak{Br}_{n}\) stabilising \(\underline{a}\), which we denote \(\mathfrak{Br}_{n}(\underline{a})\subseteq\mathfrak{Br}_{n}\).
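Continuing the illustrative sketch above (same toy example \(G=\mathfrak{S}_{3}\), \(Q\) the transpositions, and the helpers `sigma`, `conj`, `inverse`, `product`), one can enumerate the braid orbits on \(Q^{n}\), i.e. the connected components of \(\operatorname{Hur}_{n}(Q)\).

```python
def braid_orbit(tup):
    """Braid orbit of a tuple in Q^n: close up under the generators sigma_i
    and their inverses."""
    def sigma_inv(i, t):
        a = list(t)
        a[i - 1], a[i] = conj(a[i], inverse(a[i - 1])), a[i - 1]
        return tuple(a)
    seen, stack = {tup}, [tup]
    while stack:
        t = stack.pop()
        for i in range(1, len(t)):
            for s in (sigma(i, t), sigma_inv(i, t)):
                if s not in seen:
                    seen.add(s)
                    stack.append(s)
    return seen

def components(n):
    """One representative per braid orbit, i.e. per element of pi_0(Hur_n(Q))."""
    reps, remaining = [], set(product(Q, repeat=n))
    while remaining:
        rep = min(remaining)
        reps.append(rep)
        remaining -= braid_orbit(rep)
    return reps

print([len(components(n)) for n in range(1, 5)])   # number of components of Hur_n(Q)
```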
### Topological monoid structure
For \(n,m\geq 0\), consider the standard concatenating map of sets \(Q^{n}\times Q^{m}\xrightarrow{\cong}Q^{n+m}\) and the standard concatenating map of braid groups \(\mathfrak{Br}_{n}\times\mathfrak{Br}_{m}\hookrightarrow\mathfrak{Br}_{n+m}\). If we let \(\mathfrak{Br}_{n}\times\mathfrak{Br}_{m}\) act on \(Q^{n+m}\) through its inclusion into \(\mathfrak{Br}_{n+m}\), we have that the map \(Q^{n}\times Q^{m}\xrightarrow{\cong}Q^{n+m}\) is \(\left(\mathfrak{Br}_{n}\times\mathfrak{Br}_{m}\right)\)-equivariant; we also say that the map of sets \(Q^{n}\times Q^{m}\xrightarrow{\cong}Q^{n+m}\) is equivariant with respect to the map of groups \(\mathfrak{Br}_{n}\times\mathfrak{Br}_{m}\hookrightarrow\mathfrak{Br}_{n+m}\). This equivariance gives rise to a map between the homotopy quotients
\[\operatorname{Hur}_{n}(Q)\times\operatorname{Hur}_{m}(Q)\cong(Q^{n}\times Q^{ m})/\!\!/\left(\mathfrak{Br}_{n}\times\mathfrak{Br}_{m}\right)\to Q^{n+m}/\!\!/ \,\mathfrak{Br}_{n+m}=\operatorname{Hur}_{n+m}(Q),\]
and these maps assemble into a topological monoid structure on the disjoint union
\[\operatorname{Hur}(Q)=\coprod_{n\geq 0}\operatorname{Hur}_{n}(Q),\]
with unit given by the unique point of \(\operatorname{Hur}_{0}(Q)\).
Taking connected components, we obtain a discrete monoid \(\pi_{0}(\operatorname{Hur}(Q))\); if we denote by \(\hat{q}_{i}\) the connected component of \(\operatorname{Hur}(Q)\) given by the (contractible) space \(\operatorname{Hur}_{1}(Q,q_{i})\), for all \(q_{i}\in Q\), we have the following presentation by generators and relations of \(\pi_{0}(\operatorname{Hur}(Q))\) as an associative, unital monoid:
\[\pi_{0}(\operatorname{Hur}(Q))\cong\left\langle\hat{q}_{1},\dots,\hat{q}_{m} \ |\ \hat{q}_{i}\cdot\hat{q}_{j}=\hat{q}_{j}\cdot\widehat{q_{i}^{q_{j}}}\right\rangle.\]
**Definition 2.3**.: We denote by \(\pi_{0}^{\ell}(\operatorname{Hur}(Q))\subseteq\pi_{0}(\operatorname{Hur}(Q))\) the unital submonoid generated by the elements \(\hat{q}_{i}^{\ell}\) for \(1\leq i\leq m\).
**Lemma 2.4**.: _The monoid \(\pi_{0}^{\ell}(\operatorname{Hur}(Q))\) is commutative, and for all \(1\leq i,j\leq m\) the following relation holds in \(\pi_{0}^{\ell}(\operatorname{Hur}(Q))\):_
\[\hat{q}_{i}^{\ell}\cdot\hat{q}_{j}^{\ell}=\widehat{q_{i}^{q_{j}}}^{\ell}\cdot \hat{q}_{j}^{\ell}\]
Proof.: We note that the elements \(\hat{q}_{i}^{\ell}\) are central elements of the monoid \(\pi_{0}(\operatorname{Hur}(Q))\): indeed for all \(j\) we have \(\hat{q}_{j}\cdot\hat{q}_{i}^{\ell}=\hat{q}_{i}^{\ell}\cdot\widehat{q_{j}^{q_{ i}^{\ell}}}=\hat{q}_{i}^{\ell}\cdot\hat{q}_{j}\). This implies in particular that \(\pi_{0}^{\ell}(\operatorname{Hur}(Q))\) is a commutative monoid.
Moreover for all \(i,j\) we have in \(\pi_{0}(\operatorname{Hur}(Q))\) the equality \(\hat{q}_{i}^{\ell}\cdot\hat{q}_{j}=\hat{q}_{j}\cdot\widehat{q_{i}^{q_{j}}}^{\ell}\), which by the previous argument is also equal to \(\widehat{q_{i}^{q_{j}}}^{\ell}\cdot\hat{q}_{j}\); multiplying both sides on the right by \(\hat{q}_{j}^{\ell-1}\) we obtain the described relations among the generators of \(\pi_{0}^{\ell}(\operatorname{Hur}(Q))\).
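As a sanity check (again only an illustrative sketch, reusing `braid_orbit`, `conj` and `Q` from the snippets above, with \(\ell=2\) in the toy example), one can verify Lemma 2.4 on the level of braid orbits: two words represent the same element of \(\pi_{0}(\operatorname{Hur}(Q))\) exactly when they lie in the same orbit.

```python
# q^2 is central, and q_i^2 * q_j^2 = (q_i^{q_j})^2 * q_j^2 in pi_0(Hur(Q)):
for q in Q:
    for x in Q:
        assert (q, q, x) in braid_orbit((x, q, q))
        assert (conj(q, x), conj(q, x), x, x) in braid_orbit((q, q, x, x))
```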
### Constants attached to finite groups
The following constants, attached to a group \(G\), a conjugation-invariant subset \(Q\subset G\), and an element \(\omega\in G\), will be used to bound the degree of the quasi-polynomials that govern the growth of the homology of Hurwitz spaces.
**Definition 2.5**.: We denote by \(k(G,Q)\geq 1\) the maximum, for \(H\subseteq G\) ranging among all subgroups of \(G\), of the number of conjugacy classes in \(H\) in which the conjugation-invariant subset \(Q\cap H\subseteq H\) decomposes.
**Definition 2.6**.: We denote by \(k(G,Q,\omega)\geq 1\) the maximum, for \(H\subseteq G\) ranging among all subgroups of \(G\) containing \(\omega\), of the number of conjugacy classes in \(H\) in which the conjugation-invariant subset \(Q\cap H\subseteq H\) decomposes.
**Definition 2.7**.: Let \(\omega\in G\) and assume that \(Q\subset G\) is a single conjugacy class. We say that \(\omega\) is _large_ with respect to \(Q\) if for every \(a\in Q\) the elements \(\omega\) and \(a\) generate \(G\).
Note that if \(\omega\in G\) is large with respect to a conjugacy class \(Q\subset G\), then in particular \(k(G,Q,\omega)=1\). As an example, let \(p\) be a prime number, let \(G=\mathfrak{S}_{p}\) and let \(Q\) be the conjugacy class of transpositions: then the long cycle \((1,\dots,p)\) is large with respect to \(Q\). Another example is the following: let \(d\) be an odd number and let \(G=\mathbb{Z}/d\rtimes\mathbb{Z}/2\) be the \(d^{\text{th}}\) dihedral group; let \(Q\) be the conjugacy class of involutions; then any generator of \(\mathbb{Z}/d\subset G\) is large with respect to \(Q\).
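The largeness condition of Definition 2.7 is easy to test by brute force for small groups. The following self-contained sketch (ours, not from the paper) verifies the first example for \(p=5\): the long cycle together with any transposition generates the whole of \(\mathfrak{S}_{5}\).

```python
from itertools import combinations

def compose(p, q):                     # (p*q)(x) = p(q(x)), permutations as tuples
    return tuple(p[q[x]] for x in range(len(p)))

def generated_subgroup(gens):
    """Subgroup generated by a list of permutations (closure under products)."""
    ident = tuple(range(len(gens[0])))
    H, frontier = {ident}, {ident}
    while frontier:
        new = {compose(h, g) for h in frontier for g in gens} - H
        H |= new
        frontier = new
    return H

def transposition(i, j, n):
    p = list(range(n))
    p[i], p[j] = p[j], p[i]
    return tuple(p)

omega = (1, 2, 3, 4, 0)                # the long cycle (1 2 3 4 5), 0-indexed
for i, j in combinations(range(5), 2):
    assert len(generated_subgroup([omega, transposition(i, j, 5)])) == 120
```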
### Invariants of components
The action of \(\mathfrak{B}\mathfrak{r}_{n}\) on \(Q^{n}\) preserves the following invariants defined on the set \(Q^{n}\):
* the _total monodromy_: this invariant associates with an \(n\)-tuple \((a_{1},\dots,a_{n})\) the product \(\omega:=a_{1}\dots a_{n}\in G\);
* the _image subgroup_: this invariant associates with \((a_{1},\dots,a_{n})\) the subgroup \(H:=\langle a_{1},\dots,a_{n}\rangle\subseteq G\);
* the _conjugacy class partition_: let \((a_{1},\dots,a_{n})\in Q^{n}\) and let \(H=\langle a_{1},\dots,a_{n}\rangle\) as above; let \(Q_{1},\dots,Q_{s}\subset Q\cap H\) be the conjugacy classes in \(H\) in which the conjugation-invariant subset \(Q\cap H\) splits; this invariant associates with \((a_{1},\dots,a_{n})\) the splitting \(n=n_{1}+\dots+n_{s}\), where \(n_{i}\) is the number of indices \(j\) such that \(a_{j}\in Q_{i}\). This is called the "multidiscriminant" in [12].
Each of the above invariants gives rise to an invariant of connected components of \(\operatorname{Hur}(Q)\); we will introduce notation only for the first invariant.
**Definition 2.8**.: For \(\omega\in G\) and for \(n\geq 0\) we denote by \(\operatorname{Hur}_{n}(Q)_{\omega}\subset\operatorname{Hur}_{n}(Q)\) the union of connected components corresponding to \(n\)-tuples \((a_{1},\dots,a_{n})\) with total monodromy \(\omega\). Similarly, we denote \(\operatorname{Hur}(Q)_{\omega}=\coprod_{n\geq 0}\operatorname{Hur}_{n}(Q)_{\omega}\).
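The first two invariants are straightforward to compute; the sketch below (ours, reusing `Q`, `sigma`, `compose` and `product` from the earlier snippets and `generated_subgroup` from the previous one, in the toy example \(G=\mathfrak{S}_{3}\)) checks that the total monodromy and the image subgroup are indeed constant along braid orbits.

```python
def total_monodromy(tup):
    prod = tuple(range(3))             # identity of S_3
    for a in tup:
        prod = compose(prod, a)
    return prod

for tup in product(Q, repeat=3):
    for i in range(1, 3):
        moved = sigma(i, tup)
        assert total_monodromy(moved) == total_monodromy(tup)
        assert generated_subgroup(list(moved)) == generated_subgroup(list(tup))
```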
### Quasi-polynomials
Let \(\ell\geq 1\), and for \(n\in\mathbb{Z}\) denote by \([n]_{\ell}\in\mathbb{Z}/\ell\) the class of \(n\) modulo \(\ell\). We will use the following notion of quasi-polynomial.
**Definition 2.9**.: A quasi-polynomial of period dividing \(\ell\), denoted \(p_{[-]_{\ell}}(t)\), is the datum of \(\ell\) polynomials \(p_{[1]_{\ell}}(t),\dots,p_{[\ell]_{\ell}}(t)\in\mathbb{Q}[t]\).
The degree of a non-zero quasi-polynomial is the maximum among the degrees of the polynomials \(p_{[i]_{\ell}}(t)\); the degree of the zero quasi-polynomial is set to be \(-\infty\).
A quasi-polynomial induces a function \(\mathbb{Z}\to\mathbb{Q}\), given by sending \(n\mapsto p_{[n]_{\ell}}(n)\).
As one can see, the argument \(n\in\mathbb{Z}\) must be input twice, once as index of the polynomial, once as value of the variable \(t\), in order to evaluate a quasi-polynomial at \(n\).
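For concreteness, here is a small self-contained sketch (ours) of this evaluation rule: a quasi-polynomial of period \(\ell\) is stored as \(\ell\) coefficient lists, and the argument \(n\) is used both to select the polynomial and as the value of \(t\).

```python
from fractions import Fraction

class QuasiPolynomial:
    """coeffs[r] = [c_0, c_1, ...] encodes p_{[r]}(t) = c_0 + c_1*t + ... ;
    the period is the number of coefficient lists."""
    def __init__(self, coeffs):
        self.coeffs = [list(map(Fraction, c)) for c in coeffs]
        self.ell = len(coeffs)
    def __call__(self, n):
        c = self.coeffs[n % self.ell]                        # select p_{[n]_ell} ...
        return sum(ck * n ** k for k, ck in enumerate(c))    # ... and evaluate at t = n

# Example of period 2: p_[0](t) = 1 + t/2 and p_[1](t) = 1/2 + t/2,
# whose induced function on non-negative integers is n -> floor(n/2) + 1.
p = QuasiPolynomial([[1, Fraction(1, 2)], [Fraction(1, 2), Fraction(1, 2)]])
assert [p(n) for n in range(6)] == [1, 1, 2, 2, 3, 3]
```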
## 3. Noetherian rings
In this short section we prove that \(A=H_{0}(\operatorname{Hur}(Q))\) is a Noetherian ring, and give a proof of Theorem B assuming Theorem A.
### Several subrings of \(A\)
The ring \(A\) admits the following presentation by generators and relations as an associative ring (compare with the presentation of \(\pi_{0}(\operatorname{Hur}(Q))\) from Subsection 2.3):
\[A=\mathbb{Z}\left\langle[q_{1}],\dots,[q_{m}]\ |\ [q_{i}][q_{j}]=[q_{j}][q_{i} ^{q_{j}}]\right\rangle,\]
where \([q_{i}]\in H_{0}(\operatorname{Hur}_{1}(Q))\) is defined as the class of the (contractible) component \(\operatorname{Hur}_{1}(Q,q_{i})\).
We introduce two subrings of \(A\).
**Definition 3.1**.: We denote by \(B\subseteq A\) the subring generated by the elements \([q_{i}]^{\ell}\), for \(1\leq i\leq m\). Equivalently, \(B\) is the monoid ring of the monoid \(\pi_{0}^{\ell}(\operatorname{Hur}(Q))\) from Definition 2.3.
**Definition 3.2**.: Recall Definition 2.8. We denote by \(A_{\mathbb{1}}\subseteq A\) the monoid ring of the submonoid
\[\pi_{0}(\operatorname{Hur}(Q)_{\mathbb{1}})\subseteq\pi_{0}(\operatorname{ Hur}(Q)).\]
We observe that \(B\subseteq A_{\mathbb{1}}\), as each generator \(\hat{q}_{i}^{\ell}\in\pi_{0}^{\ell}(\operatorname{Hur}(Q))\) has total monodromy equal to \(q_{i}^{\ell}=\mathbb{1}\in G\). We also observe that \(A_{\mathbb{1}}\) is a central subring of \(A\), as a consequence of the fact that \(\pi_{0}(\operatorname{Hur}(Q)_{\mathbb{1}})\) is a central submonoid of \(\pi_{0}(\operatorname{Hur}(Q))\): to see this, let \(a\in Q\) and let \((a_{1},\dots,a_{n})\in Q^{n}\) be such that \(a_{1}\dots a_{n}=\mathbb{1}\in G\); then \(\hat{a}\cdot(\hat{a}_{1}\dots\hat{a}_{n})=(\hat{a}_{1}\dots\hat{a}_{n})\cdot\widehat{a^{a_{1}\dots a_{n}}}\) by the relations holding in \(\pi_{0}(\operatorname{Hur}(Q))\), and we have \(\widehat{a^{a_{1}\dots a_{n}}}=\widehat{a^{\mathbb{1}}}=\hat{a}\).
**Lemma 3.3**.: _The associative ring \(A\) is finitely generated as a \(B\)-module. As a consequence we have the following:_
1. \(A\) _is Noetherian: a sub-module of a finitely generated left or right_ \(A\)_-module is finitely generated._
2. \(A_{\mathbb{1}}\) _is also finitely generated as a_ \(B\)_-module, and is also a Noetherian ring._
3. \(A\) _is finitely generated as a_ \(A_{\mathbb{1}}\)_-module._
Proof.: The commutative ring \(B\) is Noetherian, as it is a quotient of a polynomial ring over \(\mathbb{Z}\) with \(m\) variables (in the same way as \(\pi_{0}^{\ell}(\operatorname{Hur}(Q))\) is a quotient of a free abelian monoid on \(m\) generators). Thus if we prove that \(A\) is a finitely generated \(B\)-module, we immediately have that \(A\) is Noetherian: for if \(M\) is a finitely generated (left or right) \(A\)-module and \(M^{\prime}\subset M\) is a submodule, then \(M\) is also finitely generated over \(B\), and by Noetherianity of \(B\) we have that \(M^{\prime}\) is finitely generated over \(B\), and a fortiori over \(A\). This proves that the main statement implies (1); it also implies (2), as \(A_{\mathbb{1}}\) is a sub-\(B\)-module of \(A\), and \(B\) is Noetherian; Noetherianity of \(A_{\mathbb{1}}\) then follows from the same argument used to prove (1). Finally, (3) follows from the inclusion \(B\subseteq A_{\mathbb{1}}\) together with the fact that \(A\) is finitely generated as a \(B\)-module.
We now prove the main statement. Recall that \(\pi_{0}^{\ell}(\operatorname{Hur}(Q))\) is a central sub-monoid in \(\pi_{0}(\operatorname{Hur}(Q))\): this implies that \(B\) is a central subring of \(A\). Our next goal is to show that \(A\) is a finitely generated \(B\)-module: more precisely, the products \([a_{1}]\cdots[a_{k}]\) with \(k\leq m(\ell-1)\) and \(a_{1},\ldots,a_{k}\in Q\) suffice to generate \(A\) over \(B\). For this, let \([a_{1}]\cdots[a_{n}]\) be any product of generators of \(A\), and assume \(n>m(\ell-1)\). By the pigeonhole principle there is \(q_{j}\in Q\) and there are \(\ell\) indices \(i_{1},\ldots,i_{\ell}\) such that \(a_{i_{1}}=\cdots=a_{i_{\ell}}=q_{j}\). We can then use the relations in \(A\) and rewrite \([a_{1}]\cdot\cdots\cdot[a_{n}]\) in the form \([q_{j}]^{\ell}\cdot[a_{1}^{\prime}]\cdot\cdots\cdot[a_{n-\ell}^{\prime}]\), where the sequence \(a_{1}^{\prime},\ldots,a_{n-\ell}^{\prime}\) is obtained by removing from \(a_{1},\ldots,a_{n}\) the elements \(a_{i_{1}},\ldots,a_{i_{\ell}}\), and by suitably conjugating the remaining elements by powers of \(q_{j}\). This concludes the proof that \(A\) is a finitely generated \(B\)-module.
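The pigeonhole rewriting used in this proof is entirely algorithmic; the sketch below (ours, reusing `Q`, `sigma`, `braid_orbit` and `product` from the earlier snippets, where \(m=3\) and \(\ell=2\)) pulls a repeated letter to the front of a word using only the defining relation \([a][q]=[q][a^{q}]\), staying in the same braid orbit.

```python
def pull_to_front(word, target):
    """Move the leftmost occurrence of `target` to position 0 by successive
    applications of the relation [a][q] = [q][a^q] (i.e. of braid generators)."""
    w = word
    for i in range(w.index(target), 0, -1):
        w = sigma(i, w)
    return w

def extract_power(word, q, ell):
    """Rewrite `word` within its braid orbit so that it starts with ell copies of q."""
    w = word
    for j in range(ell):
        w = w[:j] + pull_to_front(w[j:], q)
    return w

# Every word of length > m*(ell-1) = 3 repeats some letter (pigeonhole):
for word in product(Q, repeat=4):
    q = next(x for x in Q if word.count(x) >= 2)
    w = extract_power(word, q, 2)
    assert w[:2] == (q, q) and w in braid_orbit(word)
```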
If \(R\) is a commutative ring, we can tensor \(A,A_{\mathbb{1}},B\) with \(R\) and obtain the rings \(A\otimes R=H_{0}(\operatorname{Hur}(Q);R)\), \(A_{\mathbb{1}}\otimes R=H_{0}(\operatorname{Hur}(Q)_{\mathbb{1}};R)\) and \(B\otimes R=R[\pi_{0}^{\ell}(\operatorname{Hur}(Q))]\). Lemma 3.3 gives the following corollary.
**Corollary 3.4**.: _Let \(R\) be a commutative ring; then \(A\otimes R\) and \(A_{\mathbb{1}}\otimes R\) are finitely generated \(B\otimes R\)-modules, and \(A\otimes R\) is a finitely generated \(A_{\mathbb{1}}\otimes R\)-module. If \(R\) is Noetherian, then all rings \(A\otimes R\), \(A_{\mathbb{1}}\otimes R\) and \(B\otimes R\) are Noetherian._
### Proof of Theorem B assuming Theorem A
Let \(R\) be a Noetherian ring and let \(i\geq 0\); then by Theorem A, \(H_{i}(\operatorname{Hur}(Q);R)\) is finitely generated over \(A\otimes R\); by Lemma 3.3, \(A\otimes R\) is finitely generated as an \(A_{\mathbb{1}}\otimes R\)-module, and hence also \(H_{i}(\operatorname{Hur}(Q);R)\) is finitely generated as an \(A_{\mathbb{1}}\otimes R\)-module. We have a finite direct sum decomposition of \(A_{\mathbb{1}}\otimes R\)-modules
\[H_{i}(\operatorname{Hur}(Q);R)\cong\bigoplus_{g\in G}H_{i}(\operatorname{Hur} (Q)_{g};R),\]
and hence each direct summand is a finitely generated \(A_{\mathbb{1}}\otimes R\)-module.
## 4. Arc complexes
Hatcher and Wahl [10] introduced, for \(n\geq 0\), an augmented semisimplicial set \(\mathcal{A}_{n,\bullet}\), whose initial application was an alternative proof of homology stability of braid groups [10, Proposition 1.7], originally proved by Arnol'd [11] (see also [12, §5.6]). In the following definition we recall this construction and generalise it to our Hurwitz setting; this approach has already been used in [13].
### High connectivity of augmented arc complexes
We denote by \(D=[0,1]^{2}\subset\mathbb{R}^{2}\) the standard unit square in the complex plane, endowed with basepoint \(*=(0,0)\), and denote by \(\mathring{D}=(0,1)^{2}\) the interior of \(D\). We also let \(I=[0,1]\times\{0\}\) be the bottom edge of \(D\).
For \(n\geq 0\) and \(1\leq i\leq n\) we let \(\bar{z}_{n,i}=(i/(n+1),1/2)\in\mathring{D}\), and we denote \(\bar{P}_{n}=\{\bar{z}_{n,i}\}\).
**Definition 4.1**.: Let \(n\geq 0\). For \(p\geq-1\) we denote by \(\mathcal{A}_{n,p}\) the set of isotopy classes of collections of \(p+1\) arcs \(\alpha_{0},\dots,\alpha_{p}\colon[0,1]\to D\) satisfying the following conditions:3
Footnote 3: Two collections of arcs are considered isotopic if they are connected by an isotopy through collections of arcs with the required properties.
* each arc \(\alpha_{i}\) is an embedding of \([0,1]\) in \(D\), sending \(0\) to a point of \(I\), \((0,1)\) inside \(\mathring{D}\setminus\bar{P}_{n}\), and \(1\) to a point of \(\bar{P}_{n}\);
* the arcs have disjoint images, also at their endpoints, and using the natural orientation of \(I\) we have \(\alpha_{0}(0)<\dots<\alpha_{p}(0)\).
In particular, \(\mathcal{A}_{n,-1}\) is a singleton (the empty collection of arcs), and \(\mathcal{A}_{n,p}\) is empty for \(p\geq n\). Forgetting arcs makes the collection \(\mathcal{A}_{n,\bullet}\) into an augmented semisimplicial set.
We denote by \(\mathcal{A}_{n}(Q)_{\bullet}\) the augmented semisimplicial set \(\mathcal{A}_{n,\bullet}\times Q^{n}\), whose set of \(p\)-simplices is \(\mathcal{A}_{n,p}\times Q^{n}\).
The braid group \(\mathfrak{B}\mathfrak{r}_{n}\) acts both on the augmented semisimplicial set \(\mathcal{A}_{n,\bullet}\) and on the set \(Q^{n}\), hence it acts diagonally on \(\mathcal{A}_{n}(Q)_{\bullet}\) by automorphisms of augmented semisimplicial sets; taking levelwise the homotopy quotient we obtain an augmented semisimplicial space \(\mathcal{A}_{n}(Q)_{\bullet}\mathbin{/\!\!/}\mathfrak{B}\mathfrak{r}_{n}\).
The geometric realisation of \(\mathcal{A}_{n,\bullet}\) is contractible, i.e. the augmentation \(|\mathcal{A}_{n,\bullet\geq 0}|\to\mathcal{A}_{n,-1}\) is a homotopy equivalence, the second space being a point [1, Theorem 2.48] (see also [1, Proposition 3.2]).
It also follows that the augmentation \(|\mathcal{A}_{n}(Q)_{\bullet\geq 0}|\to\mathcal{A}_{n}(Q)_{-1}\) is a homotopy equivalence, the second space being the set \(Q^{n}\), and it further follows that the augmentation \(|\mathcal{A}_{n}(Q)_{\bullet\geq 0}\mathbin{/\!\!/}\mathfrak{B}\mathfrak{r}_{n}| \to\mathcal{A}_{n}(Q)_{-1}\mathbin{/\!\!/}\mathfrak{B}\mathfrak{r}_{n}\) is a homotopy equivalence, the second space being the space \(\operatorname{Hur}_{n}(Q)\). In the next subsection we will analyse more closely the augmented semisimplicial space \(\mathcal{A}_{n}(Q)_{\bullet}\mathbin{/\!\!/}\mathfrak{B}\mathfrak{r}_{n}\).
**Notation 4.2**.: We denote by \(\mathcal{S}_{\bullet}^{n}\) the augmented semisimplicial space \(\mathcal{A}_{n}(Q)_{\bullet}\mathbin{/\!\!/}\mathfrak{B}\mathfrak{r}_{n}\), leaving \(Q\) implicit.
We record the previous discussion as a lemma for future reference.
**Lemma 4.3**.: _The augmentation \(|\mathcal{S}_{\bullet}^{n}|\to\mathcal{S}_{-1}^{n}\) is a homotopy equivalence._
### The augmented semisimplicial space \(\mathcal{S}_{\bullet}^{n}\)
For \(-1\leq p\leq n-1\), the space of \(p\)-simplices in \(\mathcal{S}_{\bullet}^{n}\) is \((\mathcal{A}_{n,p}\times Q^{n})\mathbin{/\!\!/}\mathfrak{B}\mathfrak{r}_{n}\), in particular it is the homotopy quotient of a set by an action of a discrete group: it has therefore the homotopy type of a disjoint union of aspherical spaces, one for each orbit of the diagonal action of \(\mathfrak{B}\mathfrak{r}_{n}\) on \(\mathcal{A}_{n,p}\times Q^{n}\). The action of \(\mathfrak{B}\mathfrak{r}_{n}\) on \(\mathcal{A}_{n,p}\) is transitive. Let \(\bar{\underline{\alpha}}_{n,p}=(\bar{\alpha}_{n,p,0},\dots,\bar{\alpha}_{n,p,p})\) be the isotopy class of the collection of \(p+1\) straight vertical segments joining \(I\) with the points \(z_{n,1},\dots,z_{n,p+1}\in\bar{P}_{n}\) (see Figure 1); then the stabiliser of \(\bar{\underline{\alpha}}_{n,p}\) is
the subgroup of \(\mathfrak{Br}_{n}\) generated by the last \(n-p-2\) standard generators, which is isomorphic to \(\mathfrak{Br}_{n-p-1}\).
**Notation 4.4**.: We denote by \(\mathbb{1}_{n}\subset\mathfrak{Br}_{n}\) the trivial subgroup of the \(n^{\text{th}}\) braid group. For \(0\leq p\leq n\) we consider \(\mathfrak{Br}_{p}\times\mathfrak{Br}_{n-p}\) as a subgroup of \(\mathfrak{Br}_{n}\), generated by all standard generators except the \(p^{\text{th}}\). In particular we denote by \(\mathbb{1}_{p}\times\mathfrak{Br}_{n-p}\subset\mathfrak{Br}_{n}\) the subgroup of \(\mathfrak{Br}_{n}\) generated by the last \(n-p-1\) standard generators.
**Lemma 4.5**.: _The spaces \(\mathcal{S}_{p}^{n}\) and \(Q^{p+1}\times\operatorname{Hur}_{n-p-1}(Q)\) are homotopy equivalent. More precisely, the natural map \(Q^{p+1}\times\operatorname{Hur}_{n-p-1}(Q)\to\mathcal{S}_{p}^{n}=\mathcal{A}_{n}(Q)_{p}\mathbin{/\!\!/}\mathfrak{Br}_{n}\), induced by the inclusion of sets \(\{\underline{\bar{\alpha}}_{n,p}\}\times Q^{n}\subset\mathcal{A}_{n,p}\times Q^{n}\) and the inclusion of groups \(\mathfrak{Br}_{n-p-1}\cong\mathbb{1}_{p+1}\times\mathfrak{Br}_{n-p-1}\subset\mathfrak{Br}_{n}\), is a homotopy equivalence._
Proof.: By the previous discussion, each orbit of the action of \(\mathfrak{Br}_{n}\) on \(\mathcal{A}_{n,p}\times Q^{n}\) contains elements of the form \((\bar{\underline{\alpha}}_{n,p};\underline{a})\). Moreover, any two elements \((\bar{\underline{\alpha}}_{n,p};\underline{a})\) and \((\bar{\underline{\alpha}}_{n,p};\underline{a}^{\prime})\) in the same \(\mathfrak{Br}_{n}\)-orbit can be transformed into each other by the action of a suitable element in \(\mathbb{1}_{p+1}\times\mathfrak{Br}_{n-p-1}\): this implies the equality \(a_{i}=a_{i}^{\prime}\) for \(0\leq i\leq p\), and it also implies that the subsequences \((a_{p+1},\ldots,a_{n})\) and \((a_{p+1}^{\prime},\ldots,a_{n}^{\prime})\) belong to the same orbit of the action of \(\mathfrak{Br}_{n-p-1}\) on \(Q^{n-p-1}\). Viceversa, two elements \((\bar{\underline{\alpha}}_{n,p};\underline{a})\) and \((\bar{\underline{\alpha}}_{n,p};\underline{a}^{\prime})\) satisfying the previous requirements can be transformed into each other by the action of \(\mathfrak{Br}_{n-p-1}\subset\mathfrak{Br}_{n}\), and thus belong to the same orbit of the action of \(\mathfrak{Br}_{n}\) on \(\mathcal{A}_{n}(Q)_{p}\).
We conclude by remarking that the stabiliser of a single element \((\bar{\underline{\alpha}}_{n,p};\underline{a})\in\mathcal{A}_{n}(Q)_{p}\) is the subgroup \(\mathbb{1}_{p+1}\times\mathfrak{Br}_{n-p-1}(a_{p+1},\ldots,a_{n})\subseteq \mathbb{1}_{p+1}\times\mathfrak{Br}_{n-p-1}\subseteq\mathfrak{Br}_{n}\).
In particular, as already remarked, the space of \((-1)\)-simplices \(\mathcal{S}_{-1}^{n}\) is homotopy equivalent, and in fact canonically homeomorphic, to \(\operatorname{Hur}_{n}(Q)\).
Let us now look at the face maps \(d_{i}\colon\mathcal{S}_{p}^{n}\to\mathcal{S}_{p-1}^{n}\), for \(p\geq 0\) and \(0\leq i\leq p\) (in the case \(p=0\) we denote by \(d_{0}\) the augmentation).
**Definition 4.6**.: Let \(a\in Q\) and \(n\geq 0\), and consider \(\mathfrak{Br}_{n}\cong\mathbb{1}_{1}\times\mathfrak{Br}_{n}\) as a subgroup of \(\mathfrak{Br}_{n+1}\). The map \(a\times-:Q^{n}\to Q^{n+1}\), sending \((a_{1},\ldots,a_{n})\mapsto(a,a_{1},\ldots,a_{n})\), is equivariant with respect to the action of \(\mathfrak{Br}_{n}\) on \(Q^{n}\) and, by restriction, on \(Q^{n+1}\). It thus induces a map on homotopy quotients
\[\operatorname{lst}(a)\colon\operatorname{Hur}_{n}(Q)\to\operatorname{Hur}_{n+ 1}(Q).\]
We define similarly a map \(\operatorname{rst}(a)\colon\operatorname{Hur}_{n}(Q)\to\operatorname{Hur}_{n+1}(Q)\) by considering the map of sets \(-\times a\colon Q^{n}\to Q^{n+1}\), sending \((a_{1},\dots,a_{n})\mapsto(a_{1},\dots,a_{n},a)\), which is equivariant with respect to the inclusion of groups \(\mathfrak{B}\mathfrak{r}_{n}\times\mathbb{1}_{1}\subset\mathfrak{B}\mathfrak{ r}_{n+1}\).
**Definition 4.7**.: Let \(0\leq p\leq n-1\) and \(0\leq i\leq p\). We define a map of spaces
\[\operatorname{lst}_{i}\colon Q^{p+1}\times\operatorname{Hur}_{n-p-1}(Q)\to Q^{p} \times\operatorname{Hur}_{n-p}(Q)\]
as the map induced on homotopy quotients by the map of sets \(Q^{n}\stackrel{{\cong}}{{\to}}Q^{n}\) given by
\[(a_{0},\dots,a_{n-1})\mapsto(a_{0},\dots,a_{i-1},a_{i+1},\dots,a_{p},a_{i}^{a_{ i+1}\dots a_{p}},a_{p+1},\dots,a_{n-1}),\]
which is equivariant with respect to the inclusion of groups \(\mathbb{1}_{p+1}\times\mathfrak{B}\mathfrak{r}_{n-p-1}\subset\mathbb{1}_{p} \times\mathfrak{B}\mathfrak{r}_{n-p}\).
**Proposition 4.8**.: _The following diagram is commutative up to homotopy, where the vertical maps are the homotopy equivalences given by Lemma 4.5:_
\[\begin{CD}Q^{p+1}\times\operatorname{Hur}_{n-p-1}(Q)@>{\operatorname{lst}_{i}}>{}>Q^{p}\times\operatorname{Hur}_{n-p}(Q)\\ @VVV@VVV\\ \mathcal{S}_{p}^{n}@>{}>{d_{i}}>\mathcal{S}_{p-1}^{n}.\end{CD}\]
Proof.: Let \(\underline{\bar{\alpha}}_{n,p}^{\hat{\mathfrak{i}}}\in\mathcal{A}_{n,p-1}\) be the collection of arcs \((\bar{\alpha}_{n,p,0},\dots,\hat{\bar{\alpha}}_{n,p,i},\dots,\bar{\alpha}_{n,p,p})\) obtained from \(\underline{\bar{\alpha}}_{n,p}\) by forgetting \(\bar{\alpha}_{n,p,i}\) (see Figure 2, left): then in \(\mathcal{A}_{n,\bullet}\) we have \(d_{i}(\underline{\bar{\alpha}}_{n,p})=\underline{\bar{\alpha}}_{n,p}^{\hat{ \mathfrak{i}}}\).
Consider moreover the product of standard generators
\[\mathfrak{b}_{n,p,i}=\sigma_{p}\sigma_{p-1}\sigma_{p-2}\dots\sigma_{i+1}\in \mathfrak{B}\mathfrak{r}_{p+1}\times\mathbb{1}_{n-p-1}\subset\mathfrak{B} \mathfrak{r}_{n},\]
depicted in Figure 2, right: this is the empty product, i.e. the neutral element in \(\mathfrak{B}\mathfrak{r}_{n}\), for \(i=p\). Then the action of \(\mathfrak{b}_{n,p,i}\in\mathfrak{B}\mathfrak{r}_{n}\) on \(\mathcal{A}_{n,p-1}\) sends \(\underline{\bar{\alpha}}_{n,p}^{\hat{\mathfrak{i}}}\mapsto\underline{\bar{ \alpha}}_{n,p-1}\). It follows that the stabiliser of \(\underline{\bar{\alpha}}_{n,p}^{\hat{\mathfrak{i}}}\) in \(\mathfrak{B}\mathfrak{r}_{n}\) is \((\mathbb{1}_{p}\times\mathfrak{B}\mathfrak{r}_{n-p})^{\mathfrak{b}_{n,p,i}}\). Repeating the argument of Lemma 4.5 with \(\underline{\bar{\alpha}}_{n,p}^{\hat{\mathfrak{i}}}\) instead of \(\underline{\bar{\alpha}}_{n,p-1}\), we obtain the following: the inclusion of sets \(\left\{\underline{\bar{\alpha}}_{n,p}^{\hat{\mathfrak{i}}}\right\}\times Q^{ n}\subset\mathcal{A}_{n,p-1}\times Q^{n}\) and the inclusion of groups \((\mathbb{1}_{p}\times\mathfrak{B}\mathfrak{r}_{n-p})^{\mathfrak{b}_{n,p,i}} \subset\mathfrak{B}\mathfrak{r}_{n}\) give rise to a homotopy equivalence
\[\left(\left\{\underline{\bar{\alpha}}_{n,p}^{\hat{\mathfrak{i}}}\right\} \times Q^{n}\right)/\!\!/\left(\mathbb{1}_{p}\times\mathfrak{B}\mathfrak{r}_{n -p}\right)^{\mathfrak{b}_{n,p,i}}\stackrel{{\cong}}{{\to}} \mathcal{A}_{n}(Q)_{p-1}/\!\!/\left.\mathfrak{B}\mathfrak{r}_{n}=\mathcal{S}_{p -1}^{n},\]
and the following square commutes on the nose, where the top map is induced on homotopy quotients by the obvious bijection \(\left\{\underline{\bar{\alpha}}_{n,p}\right\}\times Q^{n}\cong\left\{\underline{ \bar{\alpha}}_{n,p}^{\hat{1}}\right\}\times Q^{n}\), which is equivariant with respect to the injective group homomorphism \(\mathbb{1}_{p+1}\times\mathfrak{B}\mathfrak{r}_{n-p-1}\cong\left(\mathbb{1}_{p +1}\times\mathfrak{B}\mathfrak{r}_{n-p-1}\right)^{\mathfrak{b}_{n,p,i}}\subset \left(\mathbb{1}_{p}\times\mathfrak{B}\mathfrak{r}_{n-p}\right)^{\mathfrak{b}_ {n,p,i}}\):
We can then construct a strictly commutative square, whose vertical maps are homotopy equivalences and whose horizontal maps are homeomorphisms
The top horizontal map is induced by the bijection of sets \(\mathfrak{b}_{n,p,i}\colon\,\left(\left\{\underline{\bar{\alpha}}_{n,p}^{\hat{ 1}}\right\}\times Q^{n}\right)\cong\left(\left\{\underline{\bar{\alpha}}_{n, p-1}\right\}\times Q^{n}\right)\), given by the action of \(\mathfrak{b}_{n,p,i}\), together with the isomorphism of groups \(\left(\mathbb{1}_{p}\times\mathfrak{B}\mathfrak{r}_{n-p}\right)^{\mathfrak{ b}_{n,p,i}}\cong\left(\mathbb{1}_{p}\times\mathfrak{B}\mathfrak{r}_{n-p}\right)\), given by conjugation by \(\mathfrak{b}_{n,p,i}^{-1}\) inside \(\mathfrak{B}\mathfrak{r}_{n}\); note that the aforementioned bijection of sets sends
\[(\underline{\bar{\alpha}}_{n,p}^{\hat{1}};a_{0},\ldots,a_{n-1})\mapsto( \underline{\bar{\alpha}}_{n,p-1};a_{0},\ldots,a_{i-1},a_{i+1},\ldots,a_{p},a _{i}^{a_{i+1}\ldots a_{p}},a_{p+1},\ldots,a_{n-1}).\]
The bottom horizontal map has a similar description, but involving the automorphism of the set \(\mathcal{A}_{n}(Q)_{p-1}\) given by the action of \(\mathfrak{b}_{n,p,i}\), together with the inner automorphism of the group \(\mathfrak{Br}_{n}\) given by conjugation by \(\mathfrak{b}_{n,p,i}^{-1}\).
We now appeal to the following standard fact: if a group \(G\) acts on a set \(X\) and if \(g\in G\), then the homeomorphism \(g\cdot-:X\not\!/\;G\to X\not\!/\;G\), induced by the action of \(g\) on \(X\) and by conjugation by \(g^{-1}\) on \(G\), is homotopic to the identity of \(X\not\!/\;G\). In our case, the bottom horizontal map \(\mathfrak{b}_{n,p,i}\cdot-\) in the last square is homotopic to the identity of \(\mathcal{S}_{p-1}^{n}\).
We conclude by gluing the two squares along their common vertical side, after noticing that the composition of the two top horizontal maps is precisely \(\operatorname{lst}_{i}\).
Note that the map \(\operatorname{lst}_{i}\colon Q^{p+1}\times\operatorname{Hur}_{n-p-1}(Q)\to Q ^{p}\times\operatorname{Hur}_{n-p}(Q)\), as a map with codomain a product, can be described by giving two maps \(Q^{p+1}\times\operatorname{Hur}_{n-p-1}(Q)\to Q^{p}\) and \(Q^{p+1}\times\operatorname{Hur}_{n-p-1}(Q)\to\operatorname{Hur}_{n-p}(Q)\). The first of these maps is the projection \(Q^{p+1}\times\operatorname{Hur}_{n-p-1}(Q)\to Q^{p+1}\) followed by the map \(Q^{p+1}\to Q^{p}\) given by \((a_{0},\ldots,a_{p})\mapsto(a_{0},\ldots,\hat{a}_{i},\ldots,a_{p})\). The second of these maps, restricted to the slice \((a_{0},\ldots,a_{p})\times\operatorname{Hur}_{n-p-1}(Q)\cong\operatorname{Hur }_{n-p-1}(Q)\), is the stabilisation map \(\operatorname{lst}(a_{i}^{a_{i+1}\ldots a_{p}})\).
### Action of \(\operatorname{Hur}(Q)\) on \(\mathcal{S}_{\bullet}\)
**Notation 4.9**.: Recall Notation 4.2. We denote by \(\mathcal{S}_{\bullet}\) the disjoint union of augmented semisimplicial spaces \(\coprod_{n\geq 0}\mathcal{S}_{\bullet}^{n}\), again leaving \(Q\) implicit.
For all \(n,m\geq 0\) there is a map of augmented semisimplicial sets \(\mathcal{A}_{n,\bullet}\to\mathcal{A}_{n+m,\bullet}\) given by "adding \(m\) points on the right": more precisely, we send the isotopy class
of the collection of arcs \(\alpha_{0},\ldots,\alpha_{p}\colon[0,1]\to D\), representing a \(p\)-simplex in \(\mathcal{A}_{n,p}\), to the isotopy class of the collection of arcs \(\chi_{n}^{m}\circ\alpha_{0},\ldots,\chi_{n}^{m}\circ\alpha_{p}\colon[0,1]\to D\), where \(\chi_{n}^{m}\colon D\to D\) sends \((x,y)\mapsto(\frac{n+1}{m+n+1}x,y)\), so that \(\bar{z}_{n,i}\) is sent to \(\bar{z}_{n+m,i}\) for \(1\leq i\leq n\).
Similarly, the identification of sets \(Q^{n}\times Q^{m}\cong Q^{n+m}\) is equivariant with respect to the action of \(\mathfrak{B}\mathfrak{r}_{n}\times\mathfrak{B}\mathfrak{r}_{m}\) and \(\mathfrak{B}\mathfrak{r}_{n+m}\) on the two sets, respectively. Taking cartesian products, we obtain a map of augmented semisimplicial sets
\[\mathcal{A}_{n}(Q)_{\bullet}\times Q^{m}\to\mathcal{A}_{n+m}(Q)_{\bullet},\]
which is equivariant with respect to the action of \(\mathfrak{B}\mathfrak{r}_{n}\times\mathfrak{B}\mathfrak{r}_{m}\) and \(\mathfrak{B}\mathfrak{r}_{n+m}\) on the two augmented semisimplicial sets. Taking homotopy quotients, we finally obtain a map of augmented semisimplicial spaces
\[\mathcal{S}_{\bullet}^{n}\times\operatorname{Hur}_{m}(Q)\to\mathcal{S}_{ \bullet}^{n+m}.\]
The following is immediate.
**Proposition 4.10**.: _The maps \(\mathcal{S}_{\bullet}^{n}\times\operatorname{Hur}_{m}(Q)\to\mathcal{S}_{ \bullet}^{n+m}\) constructed above assemble into a right action of the topological monoid \(\operatorname{Hur}(Q)\) on the augmented semisimplicial space \(\mathcal{S}_{\bullet}\)._
## 5. The spectral sequence argument
Let \(R\) be a commutative ring. Recall that for an augmented semisimplicial space \(X_{\bullet}\) there is a spectral sequence in homology, with first page \(E^{1}_{p,q}=H_{q}(X_{p};R)\) and limit \(H_{p+q+1}(X_{-1},|X|;R)\). For \(X=\mathcal{S}_{\bullet}^{n}\) we get \(E^{1}_{p,q}=H_{q}(\mathcal{S}_{p}^{n};R)\), and the limit is \(H_{p+q+1}(\mathcal{S}_{-1}^{n},|\mathcal{S}_{\bullet}^{n}|;R)\), which is zero for all \(p,q\in\mathbb{Z}\) by Lemma 4.3.
For \(p,q\geq 0\), the \(E^{1}\)-differential \(d^{1}_{p,q}\colon E^{1}_{p,q}\to E^{1}_{p-1,q}\) is given by the map
\[\sum_{j=0}^{p}(-1)^{j}(d_{j})_{*}\colon H_{q}(\mathcal{S}_{p}^{n};R)\to H_{q} (\mathcal{S}_{p-1}^{n};R),\]
i.e. it is the alternating sum of the maps induced in homology by the face maps (and by the augmentation).
By Proposition 4.10, we can put together these spectral sequences for varying \(n\geq 0\), obtaining a spectral sequence of right \(H_{*}(\operatorname{Hur}(Q);R)\)-modules, and in particular of right \(A\otimes R\)-modules. We will use this to prove Theorem A.
### The Koszul-like complex
We aim at proving Theorem A by induction on \(i\). The statement is obvious for \(i=0\), as the ring \(A\otimes R\) is finitely generated over itself. We focus henceforth on the inductive step: we fix \(\nu\geq 0\), assume that \(H_{i}(\operatorname{Hur}(Q);R)\) is finitely generated over \(A\otimes R\) for all \(0\leq i\leq\nu\), and aim at proving that \(H_{\nu+1}(\operatorname{Hur}(Q);R)\) is also finitely generated over \(A\otimes R\).
In the previous paragraph, all \(A\otimes R\)-modules are meant as _left_\(A\otimes R\)-modules, as in the statement of Theorem A. Yet the Pontryagin ring structure of \(H_{*}(\operatorname{Hur}(Q);R)\) makes \(H_{i}(\operatorname{Hur}(Q);R)\) into an \(A\otimes R\)-bimodule, i.e. \(H_{i}(\operatorname{Hur}(Q);R)\) is endowed with compatible structures of left \(A\otimes R\)-module and right \(A\otimes R\)-module.
The following definition is taken from [1, Subsection 4.1] and slightly adapted to our purposes.
**Definition 5.1**.: Let \(M\) be an \(A\otimes R\)-bimodule. We define a chain complex \(K_{*}(M)\) of right \(A\otimes R\)-modules, concentrated in degrees \(*\geq-1\). We set \(K_{p}(M)=RQ^{p+1}\otimes_{R}M\), where for a set \(S\) we denote by \(RS\) the free \(R\)-module generated by \(S\). We put on \(K_{p}(M)\) the right \(A\otimes R\)-module structure coming from \(M\).
The differential \(d_{p}\colon K_{p}(M)\to K_{p-1}(M)\) has the form \(d_{p}=\sum_{j=0}^{p}(-1)^{j}d_{p,j}\), where \(d_{p,j}\) sends

\[(a_{0},\dots,a_{p})\otimes\mu\mapsto(a_{0},\dots,\hat{a}_{j},\dots,a_{p})\otimes[a_{j}^{a_{j+1}\dots a_{p}}]\cdot\mu,\]
for \((a_{0},\dots,a_{p})\in Q^{p+1}\) and \(\mu\in M\).
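The next sketch (ours, reusing `Q`, `conj`, `braid_orbit` and `product` from the earlier snippets) implements this differential in the special case \(M=A\), with elements of \(A\) encoded by canonical orbit representatives of words in \(Q\), and checks \(d_{p-1}\circ d_{p}=0\) on basis elements.

```python
def canon(word):
    """Canonical representative (the minimum) of the braid orbit of a word,
    i.e. of a basis element of A = H_0(Hur(Q))."""
    return min(braid_orbit(word))

def differential(p, elt):
    """d_p on K_*(A); elements are dicts {(simplex in Q^{p+1}, canonical word): coeff}."""
    out = {}
    for (simplex, word), coeff in elt.items():
        for j in range(p + 1):
            moved = simplex[j]
            for b in simplex[j + 1:]:
                moved = conj(moved, b)          # a_j^{a_{j+1}...a_p}
            key = (simplex[:j] + simplex[j + 1:], canon((moved,) + word))
            out[key] = out.get(key, 0) + (-1) ** j * coeff
    return {k: c for k, c in out.items() if c}

# d_{p-1} o d_p = 0, checked on basis elements of K_2(A):
for simplex in product(Q, repeat=3):
    x = {(simplex, (Q[0],)): 1}
    assert differential(1, differential(2, x)) == {}
```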
Our interest in the chain complex \(K_{*}(H_{q}(\operatorname{Hur}(Q)))\) comes from the fact that it coincides with the \(q^{\text{th}}\) row of the \(E^{1}\)-page of the spectral sequence associated with the augmented semisimplicial space \(\mathcal{S}_{\bullet}\): this is a consequence of Proposition 4.8 and the general description of the first page of the spectral sequence associated with the skeletal filtration of an augmented semisimplicial space.
In [1], \(K_{*}(M)\) is referred to as a "Koszul-like complex", and in fact in [11] it is shown that \(K_{*}(M)\) is isomorphic to the Koszul complex of the augmented dg ring \(C_{*}(\operatorname{Hur}(Q);R)\) acting on the \(A\)-module \(M\), where the action is given by the dg ring map \(C_{*}(\operatorname{Hur}(Q);R)\to A\otimes R\) sending each \(0\)-chain to its homology class and vanishing in higher degree, and where the augmentation \(C_{*}(\operatorname{Hur}(Q);R)\to R\) is the composite of the previous dg ring map with the ring map \(A\otimes R\to R\) killing the positive-weight part of \(A\otimes R\). In particular \(H_{i}(K_{*}(M))\cong\operatorname{Tor}_{i}^{C_{*}(\operatorname{Hur}(Q);R)}(M,R)\). See also [1, Remark 7.2]: the Koszul dual of the augmented dga \(C_{*}(\operatorname{Hur}(Q);R)\), i.e. \(\operatorname{Ext}_{C_{*}(\operatorname{Hur}(Q);R)}^{*}(R;R)\), can be identified with the quantum shuffle \(R\)-algebra generated by the braided \(R\)-module \(\operatorname{Hom}_{R}(RQ,R)\), i.e. the \(R\)-linear dual of the free \(R\)-module \(RQ\) generated by the set \(Q\). The dual \(R\)-coalgebra can be canonically identified, as a graded \(R\)-module, with \(\bigoplus_{i\geq 0}(RQ)^{\otimes_{R}i}\).
### \(G\)-twists
In the following we consider each left/right/bimodule over \(A\otimes R\) as a left/right/bimodule over \(A\) and over \(R\) by considering the canonical ring homomorphisms \(A\to A\otimes R\) and \(R\to A\otimes R\).
**Definition 5.2**.: Let \(M\) be an \(A\otimes R\)-bimodule. A \(G\)_-twist_ on \(M\) is a right action of \(G\) on \(M\), denoted \((m,g)\mapsto m^{g}\) for \(m\in M\) and \(g\in G\), satisfying the following:
1. for all \(r\in R\) we have \(r\cdot m=m\cdot r\);
2. for all \(a,b\in Q\) we have \(([a]\cdot m)^{b}=[a^{b}]\cdot m^{b}\) and \((m\cdot[a])^{b}=m^{b}\cdot[a^{b}]\);
3. for all \(a\in Q\) we have \(m\cdot[a]=[a]\cdot m^{a}\).
For example, for \(i\geq 0\) the \(A\otimes R\)-bimodule \(H_{i}(\operatorname{Hur}(Q);R)\) admits a \(G\)-twist as follows. The group \(G\) acts on the right on the quandle \(Q\) by conjugation: the element \(g\) sends \(a\in Q\) to \(a^{g}=g^{-1}ag\in Q\). We can then let \(G\) act diagonally on the right on the set \(Q^{n}\); as a matter of notation, the element \(g\in G\) sends \((a_{1},\dots,a_{n})\mapsto(a_{1},\dots,a_{n})^{g}=(a_{1}^{g},\dots,a_{n}^{g})\). This action commutes with the left action of \(\mathfrak{Br}_{n}\) on \(Q^{n}\), and thus it induces a right action of \(G\) on \(\operatorname{Hur}_{n}(Q)=Q^{n}\mathbin{/\!\!/}\mathfrak{Br}_{n}\). For \(i\geq 0\), we can then take \(i^{\text{th}}\) homology with coefficients in \(R\) and consider all values of \(n\) at the same time: we obtain a right action of \(G\) on the weighted \(A\otimes R\)-bimodule \(H_{i}(\operatorname{Hur}(Q);R)\).
**Lemma 5.3**.: _The right action of \(G\) on the \(A\otimes R\)-bimodule \(H_{i}(\operatorname{Hur}(Q);R)\) is a \(G\)-twist._
Proof.: The action of \(R\) on \(H_{*}(\operatorname{Hur}(Q);R)\) comes from the multiplication of \(R\), which we assume to be commutative: this ensures that condition (1) in Definition 5.2 is satisfied. We observe moreover that \(\operatorname{Hur}(Q)\) is a topological monoid and that the right action of \(G\) on \(\operatorname{Hur}(Q)\) is an action by automorphisms of topological monoids: indeed for all \(n,m\geq 0\), the concatenation map \(Q^{n}\times Q^{m}\stackrel{{\cong}}{{\to}}Q^{n+m}\) is \(G\)-equivariant; taking homotopy quotients with respect to the groups \(\mathfrak{B}\mathfrak{r}_{n}\times\mathfrak{B}\mathfrak{r}_{m}\subset \mathfrak{B}\mathfrak{r}_{n+m}\), we obtain that the multiplication \(\operatorname{Hur}_{n}(Q)\times\operatorname{Hur}_{m}(Q)\to\operatorname{Hur }_{n+m}(Q)\) is \(G\)-equivariant. Taking homology, we obtain that \(G\) acts on the right on \(H_{*}(\operatorname{Hur}(Q);R)\) by automorphisms of rings, and this implies that condition (2) is satisfied.
Let now \(n\geq 0\) and let \(a\in Q\) be fixed. We denote by \(\bar{\mathfrak{b}}_{n+1}\in\mathfrak{B}\mathfrak{r}_{n+1}\) the product of standard generators \(\sigma_{1}\dots\sigma_{n}\); then we have a commutative square of sets, and a commutative square of groups
\[\begin{CD}Q^{n}@>{-\times a}>{}>Q^{n+1}\\ @V{(-)^{a}}V{}V@V{}V{\bar{\mathfrak{b}}_{n+1}\cdot-}V\\ Q^{n}@>{}>{a\times-}>Q^{n+1}\end{CD}\qquad\begin{CD}\mathfrak{B}\mathfrak{r}_{n}@>{}>{}>\mathfrak{B}\mathfrak{r}_{n+1}\\ @V{\mathrm{id}}V{}V@V{}V{(-)^{\bar{\mathfrak{b}}_{n+1}^{-1}}}V\\ \mathfrak{B}\mathfrak{r}_{n}@>{}>{}>\mathfrak{B}\mathfrak{r}_{n+1},\end{CD}\]
such that each map of sets is equivariant with respect to the corresponding map of groups. Taking homotopy quotients, we obtain a square of spaces, that commutes on the nose
\[\begin{CD}\operatorname{Hur}_{n}(Q)@>{\operatorname{rst}(a)}>{}>\operatorname {Hur}_{n+1}(Q)\\ @V{(-)^{a}}V{}V@V{}V{\bar{\mathfrak{b}}_{n+1}\cdot-}V\\ \operatorname{Hur}_{n}(Q)@>{}>{\operatorname{lst}(a)}>{}>\operatorname{Hur}_{n+ 1}(Q).\end{CD}\]
The right vertical map in the last diagram is homotopic to the identity; we conclude that \(\operatorname{rst}(a)\colon\operatorname{Hur}_{n}(Q)\to\operatorname{Hur}_{n+ 1}(Q)\) is homotopic to the composition \(\operatorname{lst}(a)\circ(-)^{a}\); taking \(i^{\text{th}}\) homology, we obtain that also condition (3) in Definition 5.2 is satisfied.
**Lemma 5.4**.: _Let \(M\) be an \(A\otimes R\)-bimodule with a \(G\)-twist, and let \(j\geq 0\). Then \(H_{j}(K_{*}(M))\) is a trivial right \(A\otimes R\)-module, in the sense that right multiplication by any element of \(A\otimes R\) of positive weight is zero._
Proof.: This is similar to [5, Lemma 4.11], but we repeat here the argument. Since the positive-weight ideal \((A\otimes R)_{+}\subset A\otimes R\) is generated by the elements \([a]=[a]\otimes 1\) for \(a\in Q\), it suffices to prove that \(-\cdot[a]\) induces the zero map on \(H_{j}(K_{*}(M))\), and for this one proves that the chain map of chain complexes of abelian groups \(-\cdot[a]\colon K_{*}(M)\to K_{*}(M)\) is chain homotopic to the zero chain map.
One defines a chain homotopy \(\mathcal{H}_{a}\colon K_{*}(M)\to K_{*+1}(M)\) by sending
\[\mathcal{H}_{a}\colon(a_{0},\dots,a_{p})\otimes\mu\mapsto(-1)^{p+1}(a_{0}, \dots,a_{p},a)\otimes\mu^{a},\]
for \(p\geq-1\), \((a_{0},\dots,a_{p})\in Q^{p+1}\) and \(\mu\in M\). Using property (2) from Definition 5.2, one checks that \((d\circ\mathcal{H}_{a}+\mathcal{H}_{a}\circ d)((a_{0},\dots,a_{p})\otimes\mu)\) is equal to \((a_{0},\dots,a_{p})\otimes[a]\cdot\mu^{a}\); using property (3), one identifies the latter with \((a_{0},\dots,a_{p})\otimes\mu\cdot[a]\).
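For concreteness, here is the check in the lowest degree \(p=0\), as a sketch under the assumption that the differential of \(K_{*}(M)\) is the alternating sum \(\sum_{i}(-1)^{i}\) of the face maps recalled above and that \(K_{-1}(M)=M\). For \(x=(a_{0})\otimes\mu\) we have \(d(x)=[a_{0}]\cdot\mu\) and \(\mathcal{H}_{a}(x)=-(a_{0},a)\otimes\mu^{a}\), hence
\[(d\circ\mathcal{H}_{a})(x)=-(a)\otimes[a_{0}^{a}]\cdot\mu^{a}+(a_{0})\otimes[a]\cdot\mu^{a},\qquad(\mathcal{H}_{a}\circ d)(x)=(a)\otimes([a_{0}]\cdot\mu)^{a}=(a)\otimes[a_{0}^{a}]\cdot\mu^{a},\]
where the last equality uses property (2); summing and applying property (3) yields \((d\circ\mathcal{H}_{a}+\mathcal{H}_{a}\circ d)(x)=(a_{0})\otimes[a]\cdot\mu^{a}=(a_{0})\otimes\mu\cdot[a]\), as claimed.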
### Proof of Theorem A
Let \(R\) be a Noetherian ring. We start by observing that if \(M\) is an \(A\otimes R\)-bimodule that is finitely generated as a right \(A\otimes R\)-module, then \(K_{*}(M)\) is a chain complex of finitely generated right \(A\otimes R\)-modules. By Lemma 3.3, \(A\otimes R\) is Noetherian, hence \(H_{j}(K_{*}(M))\) is also a finitely generated right \(A\otimes R\)-module; it then follows from Lemma 5.4 that \(H_{j}(K_{*}(M))\) is finitely generated also as a module over the weight-zero subring \(R\subset A\otimes R\), and in particular \(H_{j}(K_{*}(M))\) vanishes in high enough weight, for all \(j\geq-1\). We also observe that if \(M\) is an \(A\otimes R\)-bimodule with a \(G\)-twist, then \(M\) is finitely generated as a left \(A\otimes R\)-module if and only if it is finitely generated as a right \(A\otimes R\)-module.
Recall now the inductive strategy that we have established at the beginning of Subsection 5.1. The above discussion applies to the modules \(M=H_{0}(\operatorname{Hur}(Q);R),\dots,H_{\nu}(\operatorname{Hur}(Q);R)\), and we obtain that for \(n\) large enough, with in particular \(n\geq\nu+3\), the spectral sequence associated with \(\mathcal{S}_{\bullet}^{n}\) vanishes on a large part of its \(E^{2}\)-page: we have \(E^{2}_{p,q}=0\) for \(q\leq\nu\) and \(p+q\leq\nu+1\). This implies in particular that the differential \(d_{0,\nu+1}^{1}\colon E^{1}_{0,\nu+1}\to E^{1}_{-1,\nu+1}\) must be surjective, otherwise we would have \(E^{2}_{-1,\nu+1}\neq 0\) and, by the vanishing of \(E^{2}_{p,q}\) for all \(p\geq 1\), \(q\geq 0\) with \(p+q=\nu+1\), we would have no other non-trivial differential hitting \(E^{2}_{-1,\nu+1}\) and forcing its vanishing on the \(E^{\infty}\)-page, as must happen by Lemma 4.3.
The differential \(d_{0,\nu+1}^{1}\) can be identified with the map \(RQ\otimes H_{\nu+1}(\operatorname{Hur}(Q);R)\to H_{\nu+1}(\operatorname{Hur}(Q);R)\) sending \(q_{i}\otimes m\mapsto q_{i}\cdot m=(q_{i}\otimes 1)\cdot m\); if this map is surjective for \(n\) large enough, say \(n\geq\bar{n}\), then the left \(A\otimes R\)-module \(H_{\nu+1}(\operatorname{Hur}(Q);R)\) is generated by the sub-abelian group \(\bigoplus_{n=0}^{\bar{n}}H_{\nu+1}(\operatorname{Hur}_{n}(Q);R)\). We conclude by observing that the latter is a finite direct sum of finitely generated abelian groups: indeed, for each \(n\geq 0\), the space \(\operatorname{Hur}_{n}(Q)\) has the homotopy type of a finite cover of \(B\mathfrak{B}\mathfrak{r}_{n}\), and \(B\mathfrak{B}\mathfrak{r}_{n}\) is homotopy equivalent to the \(n^{\text{th}}\) unordered configuration space of points in the plane, in particular it has the homotopy type of a finite CW complex. This concludes the proof of Theorem A.
## 6. Quasi-polynomial growth of Betti numbers
In this section we analyse the growth of the homology groups \(H_{i}(\operatorname{Hur}_{n}(Q);\mathbb{F})\), where \(\mathbb{F}\) is a field, for fixed \(i\) and increasing \(n\), and prove Corollaries A', B'.
### Growth of weighted dimensions of \(B\) and \(A\)
We first analyse the growth of the weighted dimensions (over \(\mathbb{Z}\), i.e. the ranks) of \(B\) and \(A\). Recall that \(\pi_{0}(\operatorname{Hur}(Q))=\pi_{0}(\coprod_{n\geq 0}\operatorname{Hur}_{n}(Q))= \coprod_{n\geq 0}\pi_{0}(\operatorname{Hur}_{n}(Q))\) is a weighted set; also \(\pi_{0}^{\ell}(\operatorname{Hur}(Q))\) is a weighted set, concentrated in weights that are multiples of \(\ell\). Similarly, \(A\) and \(B\) are weighted rings, with \(B\) commutative and concentrated in weights that are multiples of \(\ell\). Notice also that both \(A\) and \(B\) are free as \(\mathbb{Z}\)-modules.
**Proposition 6.1**.: _There is a polynomial \(p_{B}(t)\in\mathbb{Q}[t]\) of degree \(k(G,Q)-1\) such that, for \(n\) large enough, \(\dim_{\mathbb{Z}}B_{\ell n}=p_{B}(n)\)._
_Similarly, there is a quasi-polynomial \(p_{A}(t)\in\mathbb{Q}^{\mathbb{Z}}(t)\) of degree \(k(G,Q)-1\) and period dividing \(\ell\) such that \(\dim_{\mathbb{Z}}A_{n}=p_{A}|_{n}(n)\) for \(n\) large enough._
Proof.: In the proof we set \(k=k(G,Q)\). Fix a field \(\mathbb{F}\) and let \(B_{\mathbb{F}}=B\otimes\mathbb{F}\); then \(\dim_{\mathbb{Z}}B_{\ell n}=\dim_{\mathbb{F}}B_{\mathbb{F},\ell n}\), since \(B\) is a free \(\mathbb{Z}\)-module. The \(\mathbb{F}\)-algebra \(B_{\mathbb{F}}\) is a quotient of the weighted polynomial ring \(\mathbb{F}[x_{1},\dots,x_{m}]\), with variables \(x_{i}\) put in weight \(\ell\): this is witnessed by the surjective map of \(\mathbb{F}\)-algebras \(\mathbb{F}[x_{1},\dots,x_{m}]\twoheadrightarrow B_{\mathbb{F}}\) sending \(x_{i}\mapsto[q_{i}]^{\ell}\). By the classical theory of the Hilbert function of a finitely generated \(\mathbb{F}\)-algebra, there is a polynomial \(p_{B}(t)\in\mathbb{Q}[t]\) of degree at most \(m-1\) such that, for \(n\) large enough, \(\dim_{\mathbb{F}}B_{\mathbb{F},\ell n}=p_{B}(n)\).
We now want to argue that the degree of \(p_{B}(t)\) is precisely \(k-1\). For this, we will show that there exist polynomials \(p_{B}^{-}(t)\in\mathbb{Q}[t]\) and \(p_{B}^{+}(t)\in\mathbb{Q}[t]\) of degree \(k-1\) such that, for \(n\) large enough, \(p_{B}^{-}(n)\leq|\pi_{0}^{\ell}(\operatorname{Hur}(Q))_{n\ell}|\leq p_{B}^{+}(n)\).
* **Lower bound.** Let \(H\subseteq G\) be a subgroup such that \(H\cap Q\) is the union of precisely \(k\) conjugacy classes of \(H\). Let \(H^{\prime}\) be the subgroup of \(H\) generated by \(H\cap Q\); then \(Q\cap H^{\prime}=Q\cap H\) also splits as a union of at least, hence precisely, \(k\) conjugacy classes, so we may assume to have chosen \(H=H^{\prime}\) at the beginning. Let \(Q\cap H=\{a_{1},\ldots,a_{r}\}\) be the list of all elements in \(Q\cap H\), and let \(J=\{b_{1},\ldots,b_{k}\}\subset Q\cap H\) be a set of representatives of the \(k\) conjugacy classes in \(Q\cap H\subseteq H\). For \(n\geq r\) and for every splitting \(n=r+n_{1}+\cdots+n_{k}\), with \(n_{1},\ldots,n_{k}\geq 0\), we can form an element \[\hat{a}_{1}^{\ell}\cdots\hat{a}_{r}^{\ell}\cdot\hat{b}_{1}^{n_{1}\ell}\cdots\hat{b}_{k}^{n_{k}\ell}\in\pi_{0}^{\ell}(\operatorname{Hur}(Q))_{n\ell};\] these elements are all distinct, for different choices of \(n_{1},\ldots,n_{k}\), as witnessed by the _conjugacy class partition_ invariant from Subsection 2.5. Since there are \(\binom{n-r+k-1}{k-1}\) choices for the splitting \(n=r+n_{1}+\cdots+n_{k}\), we can define \(p_{B}^{-}(t)=\binom{t-r+k-1}{k-1}\) and obtain \(|\pi_{0}^{\ell}(\operatorname{Hur}(Q))_{n\ell}|\geq p_{B}^{-}(n)\) for \(n\geq r\).
* **Upper bound.** Let \(\hat{a}_{1}^{\ell}\cdots\hat{a}_{n}^{\ell}\) be an element in \(\pi_{0}^{\ell}(\operatorname{Hur}(Q))_{n\ell}\); we want to construct a "normal form" for this element. First, up to reordering the factors \(\hat{a}_{i}^{\ell}\), we may assume that there is \(1\leq r\leq n\) such that the elements \(a_{1},\ldots,a_{r}\in Q\) are all distinct, and such that for all \(r+1\leq j\leq n\) there is \(1\leq i\leq r\) with \(a_{j}=a_{i}\). It follows that the subgroup \(H=\langle a_{1},\ldots,a_{n}\rangle\subseteq G\), i.e. the _image subgroup_ invariant of the chosen element in \(\pi_{0}^{\ell}(\operatorname{Hur}(Q))_{n\ell}\), can also be described as \(\langle a_{1},\ldots,a_{r}\rangle\subseteq G\). Let now \(Q_{1},\ldots,Q_{s}\) be the conjugacy classes in \(H\) in which \(Q\cap H\) splits, and fix representatives \(b_{1}\in Q_{1},\ldots,b_{s}\in Q_{s}\). Using repeatedly the relations from Lemma 2.4 with \(q_{j}\) chosen among \(a_{1},\ldots,a_{r}\) and \(q_{i}\) chosen among \(a_{r+1},\ldots,a_{n}\), we can achieve the situation in which each of \(a_{r+1},\ldots,a_{n}\) is one of \(b_{1},\ldots,b_{s}\). Thus we have proved that each element in \(\pi_{0}^{\ell}(\operatorname{Hur}(Q))_{n\ell}\) can be written as \(\hat{a}_{1}^{\ell}\cdot\cdots\hat{a}_{r}^{\ell}\cdot\hat{b}_{1}^{\ell n_{1}} \cdot\ldots\cdot\hat{b}_{s}^{\ell n_{s}}\), for some \(r\leq m\), \(a_{1},\ldots,a_{r}\) distinct elements of \(Q\), and \(n_{1}+\cdots+n_{s}=n-r\). Making a very rough estimate, there are \(2^{m}\) subsets in \(Q\), among which one can choose \(\{a_{1},\ldots,a_{r}\}\), and for each choice there are at most \(n^{k-1}\) ways to choose the numbers \(n_{1},\ldots,n_{s}\), since \(s\leq k\) and each of \(n_{1},\ldots,n_{s-1}\) is \(\leq n\), and the last number \(n_{s}\) is forced by the sum condition. Setting \(p_{B}^{+}(t)=2^{m}t^{k-1}\), we have \(|\pi_{0}^{\ell}(\operatorname{Hur}(Q))_{n\ell}|\leq p_{B}^{+}(n)\) for all \(n\geq 0\).
This concludes the proof of the existence of \(p_{B}(t)\) of degree \(k-1\). To prove that there is similarly a quasi-polynomial \(p_{A}(t)\) such that \(p_{A}|_{n}(n)=\dim_{\mathbb{Z}}(A_{n})\), we use that \(A\) is finitely generated as a \(B\)-module and invoke again the classical theory of the Hilbert function of a finitely generated graded module over a graded algebra of finite type: this in particular ensures that the degree of \(p_{A}(t)\) is at most \(k-1\), and that the period divides \(\ell\) (as \(B\) is concentrated in degrees multiple of \(\ell\)). Finally, since \(B\subseteq A\), we have that \(\dim_{\mathbb{Z}}(A_{n\ell})\geq\dim_{\mathbb{Z}}(B_{n\ell})=p_{B}(n)\) grows in \(n\) at least as a polynomial of degree \(k-1\): this forces the degree of \(p_{A}(t)\) to be \(k-1\).
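For orientation, the Hilbert-function input used above can be made explicit in the simplest case: for the weighted polynomial ring itself one has
\[\dim_{\mathbb{F}}\mathbb{F}[x_{1},\dots,x_{m}]_{\ell n}=\#\{(n_{1},\dots,n_{m})\in\mathbb{Z}_{\geq 0}^{m}\ :\ n_{1}+\dots+n_{m}=n\}=\binom{n+m-1}{m-1},\]
which is a polynomial in \(n\) of degree \(m-1\); the quotient \(B_{\mathbb{F}}\) can only have smaller or equal weighted dimensions, and the two bounds \(p_{B}^{-}\) and \(p_{B}^{+}\) then pin the degree of \(p_{B}(t)\) down to exactly \(k-1\).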
As in Proposition 6.1, one can show that the dimension of \((A_{\mathbb{1}})_{n}\) grows like a quasi-polynomial of degree precisely \(k(G,Q)-1\). This result has been proved independently by Seguin [22], who has studied extensively the \(\mathbb{F}\)-algebra \(A_{\mathbb{1}}\otimes\mathbb{F}\), for \(\mathbb{F}\) a field of characteristic coprime with \(|G|\).
We observe that Proposition 6.1 establishes Corollary A' for \(i=0\); the general case will use Theorem A.
Proof of Corollary A'.: Let \(i\geq 0\) and fix a field \(\mathbb{F}\); by Theorem A the left \(A\otimes\mathbb{F}\)-module \(H_{i}(\operatorname{Hur}(Q);\mathbb{F})\) is finitely generated; recall also Definition 3.1 and Lemma 3.3, and observe that, since \(A\otimes\mathbb{F}\) is finitely generated as a \(B\otimes\mathbb{F}\)-module, we have that \(H_{i}(\operatorname{Hur}(Q);\mathbb{F})\) is also finitely generated as a \(B\otimes\mathbb{F}\)-module. We now appeal again to the classical theory of the Hilbert function of a finitely generated graded module over a graded algebra of finite type, together with the fact that \(B\otimes\mathbb{F}\) is concentrated in weights that are multiples of \(\ell\) and has weighted dimension given by a polynomial of degree \(k(G,Q)-1\).
### Proof of Corollary B'
We fix an element \(\omega\in G\) and \(i\geq 0\) throughout the subsection.
**Definition 6.2**.: We denote by \(B_{\omega}\) the quotient of \(B\) by the ideal generated by the elements \([q_{i}]^{\ell}-[q_{i}^{\omega}]^{\ell}\).
We observe that \(B_{\omega}\) is the monoid ring of the quotient of the abelian monoid \(\pi_{0}^{\ell}(\operatorname{Hur}(Q))\) by the relations \(\hat{q}_{i}^{\ell}=\widehat{q_{i}^{\omega}}^{\ell}\); in particular \(B_{\omega}\) is again a weighted ring and it is free as a \(\mathbb{Z}\)-module. Similarly, \(B_{\omega}\otimes R\) is free as an \(R\)-module, for any commutative ring \(R\).
**Lemma 6.3**.: _Let \(g\in G\) and let \(a\in Q\); then the maps \(\operatorname{lst}(a)\colon\operatorname{Hur}(Q)_{g}\to\operatorname{Hur}(Q)_{ ag}\) and \(\operatorname{rst}(a^{g})\colon\operatorname{Hur}(Q)_{g}\to\operatorname{Hur}(Q) _{ag}\) are homotopic._
Proof.: The argument is similar to the one in the proof of Lemma 5.3. We fix \(n\geq 0\) and prove that \(\operatorname{lst}(a)\) and \(\operatorname{rst}(a^{g})\) are homotopic as maps \(\operatorname{Hur}_{n}(Q)_{g}\to\operatorname{Hur}_{n+1}(Q)_{ag}\). Let \(\tilde{\mathfrak{b}}_{n+1}\in\mathfrak{B}\mathfrak{r}_{n+1}\) be the product of standard generators \(\sigma_{n}\dots\sigma_{1}\). Denote by \(Q_{g}^{n}\) the subset of \(Q^{n}\) of sequences \((a_{1},\dots,a_{n})\) with total monodromy \(a_{1}\dots a_{n}=g\), and define similarly \(Q_{ag}^{n+1}\); then we have commutative triangles of sets and groups
such that each map of sets is equivariant with respect to the corresponding map of groups. Taking homotopy quotients, we obtain a commutative triangle of spaces
and we conclude by noticing that the right vertical map in the last diagram is homotopic to the identity.
In the following proposition it is helpful to notice that if \(M\) is an \(A\otimes R\)-bimodule with a \(G\)-twist (see Definition 5.2), then the left and the right actions of \(B\otimes R\) on \(M\) coincide.
**Proposition 6.4**.: _Let \(R\) be a commutative ring; then the action of \(B\otimes R\) on \(H_{i}(\operatorname{Hur}(Q)_{\omega};R)\) factors through \(B_{\omega}\otimes R\)._
Proof.: It suffices to prove that for all \(a\in Q\), the maps
\[-\cdot[a]^{\ell},\ -\cdot[a^{\omega}]^{\ell}\colon H_{i}(\operatorname{Hur}(Q)_{ \omega};R)\to H_{i}(\operatorname{Hur}(Q)_{\omega};R)\]
coincide; the previous maps can be identified with the maps induced in homology by the maps of spaces
\[\operatorname{rst}(a)^{\ell},\ \operatorname{rst}(a^{\omega})^{\ell}\colon \operatorname{Hur}(Q)_{\omega}\to\operatorname{Hur}(Q)_{\omega},\]
and hence it will suffice to prove that these two maps are homotopic. As shown in the proof of Lemma 5.3, the map \(\operatorname{rst}(a)\colon\operatorname{Hur}(Q)\to\operatorname{Hur}(Q)\) is homotopic to the composition \(\operatorname{lst}(a)\circ(-)^{a}\); we also notice that \(\operatorname{lst}(a)\circ(-)^{a}=(-)^{a}\circ\operatorname{lst}(a)\), as a consequence of the fact that \(a^{a}=a\). Taking \(\ell\)-fold compositions, we obtain that \(\operatorname{rst}(a)^{\ell}\) is homotopic to \(((-)^{a})^{\ell}\circ\operatorname{lst}(a)^{\ell}\); we then notice that \(((-)^{a})^{\ell}\) coincides with \((-)^{a^{\ell}}\), which is the identity of \(\operatorname{Hur}(Q)\). Thus we have shown that \(\operatorname{rst}(a)^{\ell}\) is homotopic to \(\operatorname{lst}(a)^{\ell}\) as a map \(\operatorname{Hur}(Q)\to\operatorname{Hur}(Q)\), and in particular the restricted maps \(\operatorname{Hur}(Q)_{\omega}\to\operatorname{Hur}(Q)_{\omega}\) are homotopic.
By Lemma 6.3 we moreover have that the map \(\operatorname{lst}(a)^{\ell}\colon\operatorname{Hur}(Q)_{\omega}\to \operatorname{Hur}(Q)_{\omega}\) is homotopic to the composition \(\operatorname{rst}(a^{a^{\ell-1}\omega})\circ\cdots\circ\operatorname{rst}(a^ {a\omega})\circ\operatorname{rst}(a^{\omega})\); since \(a^{a^{i}}=a\) for all \(i\geq 0\), we also have \(a^{a^{i}\omega}=a^{\omega}\), and eventually we obtain that \(\operatorname{lst}(a)^{\ell}\) is homotopic to \(\operatorname{rst}(a^{\omega})^{\ell}\).
We can now analyse the growth of the weighted dimension of the algebra \(B_{\omega}\), in a similar way as we did for \(B\) in Proposition 6.1.
**Proposition 6.5**.: _There is a polynomial \(p_{B_{\omega}}(t)\in\mathbb{Q}[t]\) of degree \(\leq k(G,Q,\omega)-1\) such that, for \(n\) large enough, \(\dim_{\mathbb{Z}}(B_{\omega})_{\ell n}=p_{B_{\omega}}(n)\)._
Proof.: The existence of a polynomial \(p_{B_{\omega}}(t)\), possibly of degree \(\geq k(G,Q,\omega)\), but satisfying \(\dim_{\mathbb{Z}}(B_{\omega})_{\ell n}=p_{B_{\omega}}(n)\) for \(n\) large enough, is guaranteed by the classical theory of the Hilbert function of a finitely generated \(\mathbb{F}\)-algebra, after tensoring \(B_{\omega}\) by some field \(\mathbb{F}\); in fact, since \(B_{\omega}\) is a quotient of \(B\), we immediately obtain that the degree of \(p_{B_{\omega}}(t)\) is at most the degree of \(p_{B}(t)\), which is \(k(G,Q)-1\).
We now want to improve the upper bound on the degree of \(p_{B_{\omega}}(t)\) to \(k(G,Q,\omega)-1\). Recall from the proof of the upper bound in Proposition 6.1 that for \(n\geq m=|Q|\), the abelian group \(B_{\ell n}\) is generated by the products \([a_{1}]^{\ell}\dots[a_{r}]^{\ell}[b_{1}]^{\ell n_{1}}\dots[b_{s}]^{\ell n_{s}}\), for varying choices of:
* an integer \(0\leq r\leq m=|Q|\);
* elements \(a_{1},\dots,a_{r}\in Q\);
* partitions \(n-r=n_{1}+\dots+n_{s}\),
where we set \(H=\langle a_{1},\dots,a_{r}\rangle\subseteq G\) (depending on the choice of \(r\) and \(a_{1},\dots,a_{r}\)), and we set \(s\leq k(G,Q)\) to be the number of conjugacy classes of \(Q\cap H\) in \(H\), and we let \(b_{1},\dots,b_{s}\) be a system of representatives of these conjugacy classes (depending on \(H\); we can choose a priori such a system of representatives for any subgroup \(H\subseteq G\)).
Fix now \(0\leq r\leq m\) and elements \(a_{1},\dots,a_{r}\in Q\), and let \(H\), \(s\) and \(b_{1},\dots,b_{s}\) be as above. Let also \(L=\langle H,\omega\rangle\subseteq G\) and let \(P_{1},\dots,P_{u}\) be the conjugacy classes in which the conjugation invariant subset \(Q\cap L\) of \(L\) splits, for some \(0\leq u\leq k(G,Q,\omega)\); finally, let \(c_{1},\dots,c_{u}\in Q\) be representatives. Each element \(b_{i}\) belongs to \(L\cap Q\), and inside \(L\) is conjugate to some element \(c_{\iota(i)}\), for a suitable function
\(\iota\colon\{1,\ldots,s\}\to\{1,\ldots,u\}\); using the relations of \(B\) together with the additional relations of \(B_{\omega}\) from Definition 6.2, we have in \(B_{\omega}\) the equality
\[[a_{1}]^{\ell}\ldots[a_{r}]^{\ell}[b_{1}]^{\ell n_{1}}\ldots[b_{s}]^{\ell n_{s} }=[a_{1}]^{\ell}\ldots[a_{r}]^{\ell}[c_{\iota(1)}]^{\ell n_{1}}\ldots[c_{ \iota(s)}]^{\ell n_{s}}.\]
It follows that \((B_{\omega})_{\ell n}\) is generated by all products \([a_{1}]^{\ell}\ldots[a_{r}]^{\ell}[c_{1}]^{\ell\nu_{1}}\ldots[c_{u}]^{\ell\nu_{u}}\), for varying \(r\geq 0\), \(a_{1},\ldots,a_{r}\in Q\), and partitions \(n-r=\nu_{1}+\cdots+\nu_{u}\), where \(0\leq u\leq k(G,Q,\omega)\) and \(c_{1},\ldots,c_{u}\in Q\) depend on \(r\geq 0\) and \(a_{1},\ldots,a_{r}\in Q\). Again by a very rough estimate, there are at most \(2^{m}n^{k(G,Q,\omega)-1}\) such products, and this proves that \(p_{B_{\omega}}(n)\leq 2^{m}n^{k(G,Q,\omega)-1}\) for \(n\) large enough; in particular the degree of \(p_{B_{\omega}}(t)\) is at most \(k(G,Q,\omega)-1\).
Since \(B_{\omega}\) is free as an abelian group, for any field \(\mathbb{F}\) we have \(\dim_{\mathbb{F}}(B_{\omega}\otimes\mathbb{F})_{\ell n}=\dim_{\mathbb{Z}}(B_{ \omega})_{\ell n}\).
Proof of Corollary B'.: This is now analogous to the proof of Corollary A'. For a field \(\mathbb{F}\) and \(i\geq 0\), by Theorem B we have that \(H_{i}(\operatorname{Hur}(Q)_{\omega};\mathbb{F})\) is a finitely generated \(A_{\mathbb{1}}\otimes\mathbb{F}\)-module, and by Lemma 3.3 we have that \(A_{\mathbb{1}}\otimes\mathbb{F}\) is finitely generated over \(B\otimes\mathbb{F}\); it follows that \(H_{i}(\operatorname{Hur}(Q)_{\omega};\mathbb{F})\) is a finitely generated \(B\otimes\mathbb{F}\)-module, and since by Proposition 6.4 the action of \(B\otimes\mathbb{F}\) on \(H_{i}(\operatorname{Hur}(Q)_{\omega};\mathbb{F})\) factors through the projection of rings \(B\otimes\mathbb{F}\twoheadrightarrow B_{\omega}\otimes\mathbb{F}\), we have that \(H_{i}(\operatorname{Hur}(Q)_{\omega};\mathbb{F})\) is a finitely generated \(B_{\omega}\otimes\mathbb{F}\)-module; the statement is now a consequence of Proposition 6.5 and the classical theory of the Hilbert function of a finitely generated graded module over a graded algebra of finite type, together with the observation that \(B_{\omega}\otimes\mathbb{F}\) is concentrated in weights that are multiples of \(\ell\).
## 7. Large elements as total monodromy
In this section we prove Theorem C and Corollary C'. We fix a commutative ring \(R\) throughout the section, as well as a homological degree \(i\geq 0\). We assume that \(Q\subset G\) is a single conjugacy class and fix a large element \(\omega\in G\) as in Definition 2.7.
### The algebra \(C\)
In this subsection we prove Theorem C. We first fix a Noetherian ring \(R\); this assumption will be dropped only at the end of the subsection.
**Definition 7.1**.: We denote by \(C\) the quotient of \(B\) by the relations \([q_{i}]^{\ell}=[q_{j}]^{\ell}\) for all \(1\leq i,j\leq m\).
Notice that \(C\) is also a quotient of \(B_{\omega}\); as a ring, it is isomorphic to a weighted polynomial ring in a variable of weight \(\ell\); for instance, one can take the image of \([a]^{\ell}\) in the quotient to be such a variable, for any \(a\in Q\).
Let \(\bar{n}\geq\ell\) be such that the finitely generated \(B\otimes R\)-module \(H_{i}(\operatorname{Hur}(Q)_{\omega};R)\) is generated by the direct sum \(\bigoplus_{n=0}^{\bar{n}-\ell}H_{i}(\operatorname{Hur}_{n}(Q)_{\omega};R)\): the existence of such \(\bar{n}\) is guaranteed by the assumption that \(R\) is Noetherian and Theorem B. Then we can consider the direct sum
\[H_{i}(\operatorname{Hur}_{\geq\bar{n}}(Q)_{\omega};R):=\bigoplus_{n\geq\bar{n }}H_{i}(\operatorname{Hur}_{n}(Q)_{\omega};R)\]
as a sub-\(B\otimes R\)-module of \(H_{i}(\operatorname{Hur}(Q)_{\omega};R)\).
**Lemma 7.2**.: _The action of \(B\otimes R\) on \(H_{i}(\operatorname{Hur}_{\geq\bar{n}}(Q)_{\omega};R)\) factors through \(C\otimes R\)._
Proof.: Let \(n\geq\bar{n}\) and let \(x\in H_{i}(\operatorname{Hur}_{n}(Q)_{\omega};R)\) be a homology class of the form \([a]^{\ell}y\), for some \(y\in H_{i}(\operatorname{Hur}_{n-\ell}(Q)_{\omega};R)\); then for all \(b\in Q\) we have the following equalities:
* \([b]^{\ell}x=[b^{\omega}]^{\ell}x\), by Proposition 6.4;
* \([b]^{\ell}x=[b]^{\ell}[a]^{\ell}y=[b^{a}]^{\ell}[a]^{\ell}y=[b^{a}]^{\ell}x\), using the relations of \(B\).
It follows that if \(b,c\in Q\) can be obtained from one another by a sequence of conjugations by \(\omega^{\pm 1}\) or \(a^{\pm 1}\), then \([b]^{\ell}x=[c]^{\ell}x\). Since \(\omega\) is large, the group \(\langle\omega,a\rangle\) is the entire \(G\), and by assumption \(Q\) is a single conjugacy class in \(G\). We conclude that for any homology class \(x\) of the form \([a]^{\ell}y\) and for any \(b,c\in Q\) we have \([b]^{\ell}x=[c]^{\ell}x\).
The claim now follows from the definition of \(\bar{n}\): for \(n\geq\bar{n}\), every class in \(H_{i}(\operatorname{Hur}_{n}(Q)_{\omega};R)\) is a linear combination of classes of the form \([a]^{\ell}y\), for possibly different values of \(a\in Q\) and \(y\in H_{i}(\operatorname{Hur}_{n-\ell}(Q)_{\omega};R)\).
Proof of Theorem C for \(R\) Noetherian.: By Theorem B, \(H_{i}(\operatorname{Hur}(Q)_{\omega};R)\) is a finitely generated \(A\otimes R\)-module, and by Lemma 3.3 we have that \(A\otimes R\) is finitely generated over \(B\otimes R\); it follows that \(H_{i}(\operatorname{Hur}(Q)_{\omega};R)\) is finitely generated over \(B\otimes R\); since \(B\otimes R\) is Noetherian, and since \(H_{i}(\operatorname{Hur}_{\geq\bar{n}}(Q)_{\omega};R)\) is a sub-\(B\otimes R\)-module of \(H_{i}(\operatorname{Hur}(Q)_{\omega};R)\), we obtain that also \(H_{i}(\operatorname{Hur}_{\geq\bar{n}}(Q)_{\omega};R)\) is finitely generated over \(B\otimes R\). It is then a consequence of Lemma 7.2 that \(H_{i}(\operatorname{Hur}_{\geq\bar{n}}(Q)_{\omega};R)\) is in fact a finitely generated \(C\otimes R\)-module.
Let now \(a\in Q\); the ring \(C\otimes R\) is isomorphic to the polynomial algebra over \(R\) generated by \([a]^{\ell}\), and in particular also \(C\otimes R\) is Noetherian. We conclude that \(H_{i}(\operatorname{Hur}_{\geq\bar{n}}(Q)_{\omega};R)\) is not only finitely generated, but also finitely presented. Choosing \(\tilde{n}\geq\bar{n}\) such that \(H_{i}(\operatorname{Hur}_{\geq\bar{n}}(Q)_{\omega};R)\) admits a presentation in weights \(\leq\tilde{n}\), we obtain that for \(n>\tilde{n}\) the multiplication \([a]^{\ell}\cdot-\) induces a bijection \(H_{i}(\operatorname{Hur}_{n}(Q)_{\omega};R)\cong H_{i}(\operatorname{Hur}_{n +\ell}(Q)_{\omega};R)\); this multiplication coincides with \(\operatorname{lst}(a)_{*}^{\ell}\).
We conclude by observing that, by Lemma 7.2, for \(n\geq\bar{n}\) and for varying \(a\in Q\), the stabilisation maps \(\operatorname{lst}(a)_{*}^{\ell}\colon H_{i}(\operatorname{Hur}_{n}(Q)_{ \omega};R)\to H_{i}(\operatorname{Hur}_{n+\ell}(Q)_{\omega};R)\) are equal to each other.
We conclude the subsection by proving Theorem C for a general ring \(R\); so in the following proof we drop the hypothesis that \(R\) is Noetherian.
Proof of Theorem C in the general case.: Let \(i\geq 0\), let \(G,Q,\omega\) as in the statement of Theorem C, and let \(a\in Q\). Then the proof of Theorem C for integral homology has the following direct consequence: there is \(\bar{n}\geq 0\) such that for \(n\leq\bar{n}\) the map \(\operatorname{lst}(a)^{\ell}\colon\operatorname{Hur}_{n}(Q)_{\omega}\to \operatorname{Hur}_{n+\ell}(Q)_{\omega}\) is an integral homology isomorphism in all homological degrees \(\leq i\); by the universal coefficient theorem for homology, it follows that for \(n\geq\bar{n}\) the same map induces an isomorphism in \(R\)-homology in the same range of degrees, for any ring \(R\).
In particular, the direct sum \(\bigoplus_{n\leq\bar{n}}H_{i}(\operatorname{Hur}_{n}(Q)_{\omega};R)\) generates \(H_{i}(\operatorname{Hur}(Q)_{\omega};R)\) over \(B\otimes R\); we can now repeat the argument of Lemma 7.2, and show that the sub-\(B\otimes R\)-module \(\bigoplus_{n\geq\bar{n}+\ell}H_{i}(\operatorname{Hur}_{n}(Q)_{\omega};R)\) is in fact a \(C\otimes R\)-module; this implies that for \(n\geq\bar{n}+\ell\) the stabilisation map \(\operatorname{lst}(a)_{*}^{\ell}\colon H_{i}(\operatorname{Hur}_{n}(Q)_{ \omega};R)\to H_{i}(\operatorname{Hur}_{n+\ell}(Q)_{\omega};R)\) is independent of \(a\in Q\).
### Homology of the group completion
We recall the group-completion theorem by McDuff and Segal [13] (see also [12, Theorem Q.4]).
**Theorem 7.3** (Group-completion theorem).: _Let \(R\) be a ring and let \(M\) be a topological monoid; suppose that the localisation \(H_{*}(M;R)[\pi_{0}(M)^{-1}]\) can be constructed by right fractions. Then the canonical map_
\[H_{*}(M;R)[\pi_{0}(M)^{-1}]\to H_{*}(\Omega BM;R)\]
_is an isomorphism of rings._
Lemmas 5.3 and 6.3 imply that the multiplicative set \(\pi_{0}(\operatorname{Hur}(Q))\) of the ring \(H_{*}(\operatorname{Hur}(Q);R)\) satisfies the Ore condition; hence the localisation
\[H_{*}(\operatorname{Hur}(Q);R)[\pi_{0}(\operatorname{Hur}(Q))^{-1}]\]
can be constructed by right fractions, and Theorem 7.3 is thus applicable to compute \(H_{*}(\Omega B\operatorname{Hur}(Q))\).
Proof of Corollary C'.: For a fixed homological degree \(i\geq 0\), the localisation of the \(A\otimes R\)-module \(H_{i}(\operatorname{Hur}(Q);R)\) at the multiplicative subset \(\pi_{0}(\operatorname{Hur}(Q))\subset A\otimes R\) coincides with the homology \(H_{i}(\Omega B\operatorname{Hur}(Q);R)\). This localisation can be constructed by right fractions; moreover the multiplicative set \(\pi_{0}(\operatorname{Hur}(Q))\subset A\otimes R\) is generated multiplicatively by the finite set of elements \([a]=[a]\otimes 1\in A\otimes R\).
We define \(\mathfrak{w}=\hat{q}_{1}^{\ell}\dots\hat{q}_{m}^{\ell}\in\pi_{0}^{\ell}(\operatorname{Hur}(Q))\), and denote by \([\mathfrak{w}]\in B\otimes R\subset A\otimes R\) the corresponding generator; this allows us to identify the module localisation \(H_{i}(\operatorname{Hur}(Q);R)[\pi_{0}(\operatorname{Hur}(Q))^{-1}]\) with \(H_{i}(\operatorname{Hur}(Q);R)[\mathfrak{w}^{-1}]\). Taking the zero component \(\Omega_{0}B\operatorname{Hur}(Q)\), we can identify \(H_{i}(\Omega_{0}B\operatorname{Hur}(Q);R)\) with the colimit of the following sequential diagram, where \(\alpha\) is the chosen element in \(\pi_{0}(\operatorname{Hur}(Q)_{\omega})\) (but for the following diagram any other \(\alpha^{\prime}\in\pi_{0}(\operatorname{Hur}(Q))\) would work):
\[H_{i}(\operatorname{Hur}_{\alpha}(Q);R)\xrightarrow{[\mathfrak{w}]\cdot-}H_{ i}(\operatorname{Hur}_{\mathfrak{w}\alpha}(Q);R)\xrightarrow{[\mathfrak{w}]\cdot-}H_{ i}(\operatorname{Hur}_{\mathfrak{w}^{2}\alpha}(Q);R)\xrightarrow{[\mathfrak{w}]\cdot-}\dots.\]
The map \([\mathfrak{w}]\cdot-:H_{i}(\operatorname{Hur}(Q)_{\omega};R)\to H_{i}(\operatorname{Hur}(Q)_{\omega};R)\) can be written as a composition \(\operatorname{lst}(q_{m})_{*}^{\ell}\circ\dots\circ\operatorname{lst}(q_{1})_{*}^{\ell}\), and by Theorem C, for \(n\) large enough each of the maps \(\operatorname{lst}(q_{i})_{*}^{\ell}\colon H_{i}(\operatorname{Hur}_{n}(Q)_{\omega};R)\to H_{i}(\operatorname{Hur}_{n+\ell}(Q)_{\omega};R)\) is an isomorphism and coincides with \(\operatorname{lst}(a)_{*}^{\ell}\), where \(a\in Q\) is our fixed element. It follows that the sequential diagram above can be regarded as an "index-\(m\)" subdiagram of the following diagram
\[H_{i}(\operatorname{Hur}_{\alpha}(Q);R)\xrightarrow{[a]^{\ell}\cdot-}H_{i}(\operatorname{Hur}_{\hat{a}^{\ell}\alpha}(Q);R)\xrightarrow{[a]^{\ell}\cdot-}H_{i}(\operatorname{Hur}_{\hat{a}^{2\ell}\alpha}(Q);R)\xrightarrow{[a]^{\ell}\cdot-}\dots.\]
The last diagram stabilises by Theorem C, and its colimit is \(H_{i}(\Omega_{0}B\operatorname{Hur}(Q);R)\).
|
2306.11992 | Formation of first star clusters under the supersonic gas flow -- I.
Morphology of the massive metal-free gas cloud | We performed $42$ simulations of the first star formation with initial
supersonic gas flows relative to the dark matter at the cosmic recombination
era. Increasing the initial streaming velocities led to delayed halo formation
and increased halo mass, enhancing the mass of the gravitationally shrinking
gas cloud. For more massive gas clouds, the rate of temperature drop during
contraction, in other words, the structure asymmetry, becomes more significant.
When the maximum and minimum gas temperature ratios before and after
contraction exceed about ten, the asymmetric structure of the gas cloud
prevails, inducing fragmentation into multiple dense gas clouds. We continued
our simulations until $10^5$ years after the first dense core formation to
examine the final fate of the massive star-forming gas cloud. Among the $42$
models studied, we find the simultaneous formation of up to four dense gas
clouds, with a total mass of about $2254\,M_\odot$. While the gas mass in the
host halo increases with increasing the initial streaming velocity, the mass of
the dense cores does not change significantly. The star formation efficiency
decreases by more than one order of magnitude from $\epsilon_{\rm III} \sim
10^{-2}$ to $10^{-4}$ when the initial streaming velocity, normalised by the
root mean square value, increases from 0 to 3. | Shingo Hirano, Youcheng Shen, Sho Nishijima, Yusuke Sakai, Hideyuki Umeda | 2023-06-21T03:07:29Z | http://arxiv.org/abs/2306.11992v2 | Formation of first star clusters under the supersonic gas flow - I. Morphology of the massive metal-free gas cloud
###### Abstract
We performed 42 simulations of first star formation with initial supersonic gas flows relative to the dark matter at the cosmic recombination era. Increasing the initial streaming velocities led to delayed halo formation and increased halo mass, enhancing the mass of the gravitationally shrinking gas cloud. For more massive gas clouds, the rate of temperature drop during contraction, in other words, the structure asymmetry, becomes more significant. When the maximum and minimum gas temperature ratios before and after contraction exceed about ten, the asymmetric structure of the gas cloud prevails, inducing fragmentation into multiple dense gas clouds. We continued our simulations until \(10^{5}\) years after the first dense core formation to examine the final fate of the massive star-forming gas cloud. Among the 42 models studied, we find the simultaneous formation of up to four dense gas clouds, with a total mass of about 2254 \(M_{\odot}\). While the gas mass in the host halo increases with increasing the initial streaming velocity, the mass of the dense cores does not change significantly. The star formation efficiency decreases by more than one order of magnitude from \(\epsilon_{\rm III}\sim 10^{-2}\) to \(10^{-4}\) when the initial streaming velocity, normalised by the root mean square value, increases from 0 to 3.
keywords: methods: numerical - dark ages, reionization, first stars - stars: Population III - stars: formation - stars: black holes
## 1 Introduction
The first stars, Population III (Pop III) stars, formed from the metal-free gas cloud and brought the first light and heavy elements to the universe (see Klessen & Glover, 2023, for a recent review). The cosmological simulations show that small dark matter (DM) halos of \(\sim 10^{5}-10^{6}\,M_{\odot}\) forming at redshift \(z\sim 20-30\) become cradles of the first stars (e.g. Tegmark et al., 1997; Yoshida et al., 2003). Typically, a massive star-forming gas cloud of \(\sim 1000\,M_{\odot}\) is formed at the density peak of the host halo (e.g. Abel et al., 2002; Bromm et al., 2002). The cloud gravitationally collapses until a quasi-hydrostatic protostellar core of \(\sim\)0.01 \(M_{\odot}\) is formed (e.g. Omukai & Nishi, 1998; Yoshida et al., 2008). The tiny protostellar core grows via accretion of the surrounding gas until the protostellar radiative feedback halts the gas accretion (e.g. McKee & Tan, 2008; Hosokawa et al., 2011, 2016). Previous simulations follow the accretion phase to determine the final stellar mass and construct the initial mass function of Pop III stars (e.g. Hosokawa et al., 2011, 2016; Hirano et al., 2014, 2015). Some simulations reported the formation of multiple-star systems (e.g. Susa, 2013; Susa et al., 2014; Susa, 2019; Stacy & Bromm, 2013; Stacy et al., 2016; Sugimura et al., 2020).
Environmental effects are also known to affect the first star formation process. Various effects have been investigated using numerical simulations: (dynamical) baryonic supersonic motions relative to DM (e.g. Tseliakhovich & Hirata, 2010), violent halo mergers (e.g. Inayoshi et al., 2015; Wise et al., 2019), (radiative) far-ultraviolet radiation in the Lyman-Werner bands (e.g. Omukai, 2001; Latif et al., 2013), and X-ray radiation (e.g. Hummel et al., 2015; Park et al., 2021). In recent years, research has been conducted in more realistic cosmological settings that deal with these influences simultaneously (e.g. Schauer et al., 2021; Kulkarni et al., 2021). Because these effects support the formation of supermassive clouds by pausing the first star formation, they have been investigated in the context of supermassive first star formation, a candidate for the seed objects of the supermassive black holes (SMBHs) observed in the distant universe (e.g. Inayoshi et al., 2020).
We focus on the final fate of the massive star-forming clouds formed under the baryonic streaming velocity (SV; e.g. Stacy et al., 2011; Greif et al., 2011). We showed that, in regions with a large SV, gas condensation is suppressed until the DM halo builds a deep gravitational potential with a mass of \(10^{7}\,M_{\odot}\), and that the protostar formed in the resulting massive gas cloud grows its mass via episodic burst accretion (Hirano et al., 2017). In regions with low-to-moderate SV, on the other hand, a sizeable filamentary gas cloud of \(10^{4}-10^{5}\,M_{\odot}\) forms and fragments to yield multiple star-forming gas clouds of \(100-1000\,M_{\odot}\) (Hirano et al., 2018, hereafter H18). If each cloud in such a cluster forms a multiple first-star system, the result would be a first-star association larger than the stellar clusters that have previously been discussed within a single gas cloud.
We aim to identify the formation conditions under which the first stars are born as a single star, a binary, or a cluster. H18, however, simulated first star formation in only a single halo under different initial SVs, despite the known diversity of the star formation process (e.g. Hirano et al., 2014, 2015). To determine the critical initial SV for multiple cloud formation and the typical number of clouds, we study the statistical properties of massive star-forming clouds in the early universe using 42 clouds produced by cosmological simulations. We determine the mass function of the metal-free star-forming clouds at \(10^{5}\,\)yr after the first cloud formation by adopting the opaque core methodology (Hirano & Bromm, 2017). We find that the rate of temperature drop during contraction determines the morphology of the gas cloud. The more asymmetric the shape, the more likely the cloud fragments and forms a massive cloud association.
Before moving on to the next section, another first star formation process due to a rapid SV is worth mentioning. A rapid SV causes a spatial offset between the DM and baryon density perturbations, forming baryonic clumps that collapse outside their counterpart DM halos (Naoz & Narayan, 2014). Such supersonically induced gas objects (SIGOs) could survive as DM-free objects and might become globular clusters (e.g. Popa et al., 2016; Chiou et al., 2019; Lake et al., 2023). This study restricts the computational domain to the DM halo in order to resolve the supermassive clouds with low-mass particles, so we cannot adequately trace the formation of SIGOs outside the DM halo. Simultaneously investigating the effects of SV inside and outside the halo is a challenge that will require future, larger-scale calculations.
We describe the calculation methods in Section 2. Section 3 shows the results of the metal-free star-forming gas cloud formation inside 7 different DM halos under 6 different initial SVs, 42 models in total. Section 4 discusses the dependence of first star formation efficiency on the formation environment. Section 5 summarises the parameterised study and provides an outlook for future research.
## 2 Numerical Methodology
We perform a set of cosmological simulations under different initial baryonic streaming velocities to study the effect of the early streaming motions on the first star formation. We calculate the first \(10^{5}\,\)yr evolution of the star-forming region after the first cloud formation (\(\sim\) protostar formation epoch) to examine the cloud formation until the protostellar radiative feedback from massive first stars evaporates the accreting gas material. We discuss the effect of the initial streaming velocities on the physical properties of star-forming clouds.
### Cosmological Initial Conditions
We use the publicly available code MUSIC (Hahn & Abel, 2011) to generate the cosmological initial conditions (ICs) with a volume of \(L_{\rm box}=10\,h^{-1}\) comoving megaparsec (cMpc) on a side at redshift \(z_{\rm ini}=499\). We adopt the standard \(\Lambda\)-cold DM (\(\Lambda\)CDM) cosmology with total matter density \(\Omega_{\rm m}=0.31\), baryon density \(\Omega_{\rm b}=0.048\), dark energy density \(\Omega_{\Lambda}=0.69\) in units of the critical density, Hubble constant \(h=0.68\), density fluctuation amplitude \(\sigma_{8}=0.83\), and primordial spectral index \(n_{\rm s}=0.96\).
We perform a set of cosmological simulations with two parameters: (1) the DM halo which hosts the metal-free star-forming cloud and (2) the initial velocity of the baryonic streaming motion. First, we perform seven cosmological simulations with zero streaming velocity (SV), initiated from seven cosmological ICs. We identify the virial DM minihalo first formed in each simulation volume. We label these target halos as Halos A-G in order of their formation redshift (\(z=21.08-36.80\)). This is the first parameter in this study. For comparison, Figure 1 overplots the redshift-halo mass diagram of our target halos on the scatter plots of the samples examined in previous studies (Stacy et al., 2011; Greif et al., 2011; Hirano et al., 2015, H18). Halos A-G are more massive and form earlier than the previous samples since we select the first formed halo in a larger cosmological box. Because SV decreases with time as \(v_{\rm SV}\propto(1+z)\), the influence of SV increases at higher redshift.
Next, we generate the cosmological ICs (labelled A-G) by adding a uniform initial relative velocity between the DM and baryonic components along the \(x\)-axis. We adopt a uniform initial relative velocity since the distribution of the baryonic streaming motion is coherent over a length of a few megaparsecs, which is sufficiently larger than the typical scale that contains the target DM haloes. We select six different initial SVs to study the dependence: \(v_{\rm SV}/\sigma_{\rm SV}^{\rm rec}=0\), \(1.0\), \(1.5\), \(2.0\), \(2.5\), and \(3.0\), normalised by the root-mean-square value, \(\sigma_{\rm SV}^{\rm rec}=30\,{\rm km\,s^{-1}}\), at the epoch of cosmological recombination, \(z_{\rm rec}=1089\).1 The probability fractions of the cosmological volume where the streaming velocity exceeds \(v_{\rm SV}/\sigma_{\rm SV}^{\rm rec}=1\), \(2\), and \(3\) are \(0.39\), \(7.4\times 10^{-3}\), and \(5.9\times 10^{-6}\), respectively (Tseliakhovich et al., 2011). The range of initial SVs in this study therefore covers the possible formation environments. This study investigates \(7\times 6=42\) models in total (Table 1). Hereafter, we refer to each model by a combination of one capital letter and two digits; for example, C15 represents Halo C with \(v_{\rm SV}/\sigma_{\rm SV}^{\rm rec}=1.5\).
Footnote 1: Uysal & Hartwig (2023) estimated the local SV value of the region in which the Milky Way formed to be \(v_{\rm SV}/\sigma_{\rm SV}^{\rm rec}=1.75\); this is an extremely high value and, if valid, its impact on structure formation in the early universe is inescapable.
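The volume fractions quoted above can be reproduced by treating the streaming-velocity field as Gaussian, so that the streaming speed follows a Maxwell-Boltzmann distribution (Tseliakhovich et al., 2011); the short Python sketch below is our own illustration, not part of the simulation pipeline:

```python
# Fraction of cosmological volume with streaming speed above x times the rms
# value sigma_SV, assuming Gaussian velocity components, so that
# |v_SV|^2 / (sigma_SV^2 / 3) follows a chi-squared distribution with 3 dof.
from scipy.stats import chi2

for x in (1.0, 2.0, 3.0):
    frac = chi2.sf(3.0 * x**2, df=3)  # P(|v_SV| > x * sigma_SV)
    print(f"v_SV/sigma_SV > {x:.0f}: volume fraction = {frac:.1e}")
# Output: ~3.9e-01, 7.4e-03, 5.9e-06, matching the fractions quoted above.
```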
We regenerate the cosmological ICs with a hierarchical zoom-in region with a volume of \(L_{\rm zoom}=0.3\,h^{-1}\) cMpc on a side. In the high-resolution regions, the particle masses of DM and gas components are \(m_{\rm DM}=16.4\,M_{\odot}\) and \(m_{\rm gas}=3.0\,M_{\odot}\), respectively. This particle mass is enough to resolve the host DM halo with \(>10^{6}\,M_{\odot}\).
### Cosmological Simulations
We perform a set of cosmological simulations using the parallel \(N\)-body/smoothed particle hydrodynamics (SPH) code GADGET-2 (Springel, 2005), suitably adapted for metal-free star formation (H18). We solve the chemical reactions for 14 species in the primordial gas (e\({}^{-}\), H, H\({}^{+}\), H\({}^{-}\), He, He\({}^{+}\), He\({}^{++}\), H\({}_{2}\), H\({}_{2}^{+}\), D, D\({}^{+}\), HD, HD\({}^{+}\), and HD\({}^{-}\)) as in Yoshida et al. (2007, 2008). We use the updated cooling
Figure 1: Redshift and virial halo mass distribution obtained from the cosmological simulations with zero streaming velocity. The red circles indicate the seven target halos in this study (Halos A-G). The grey dots, yellow triangles, diamond, and square indicate models in the previous works, Hirano et al. (2015), Stacy et al. (2011), Greif et al. (2011), and H18, respectively.
rates for H\({}_{2}\) and HD (Galli & Palla, 2013) and the three-body H\({}_{2}\) formation rates (Forrey, 2013b,a).
We employ a hierarchical refinement technique to follow the first star formation process. We adopt the refinement criterion that the local Jeans length at each SPH particle is always resolved by 15 times the local smoothing length, increasing the spatial resolution using the particle-splitting technique (Kitsionas & Whitworth, 2002), which places 13 child particles on a hexagonal close-packed array.2
Footnote 2: Chiaki & Yoshida (2015) pointed out that a particle splitting method which places the child particles spherically symmetrically could wipe out the original non-spherically symmetric density structure. This study adopts a stricter condition, \(R_{\rm cr}=M_{\rm Jeans}/m=15^{3}\), than the \(R_{\rm cr}=10^{3}\) used in our previous simulations, so that the particle splitting is completed before small density structures form, thereby reducing the influence of the spherically symmetric splitting on non-spherically symmetric density structures. In addition, this study focuses on cloud-scale fragmentation, which occurs in a less dense and more spherical region than the disk-scale fragmentation discussed in Chiaki & Yoshida (2015). Therefore, the influence of using the spherically symmetric particle splitting method is considered small.
### Hydrodynamic Simulations
The cosmological simulations end before the protostar formation. To examine the formation and long-term evolution of the star-forming cloud, we rerun all models using an opaque core methodology (Hirano & Bromm, 2017). Our method artificially reduces the radiative cooling for gas particles whose density exceeds a threshold value, \(n_{\rm th}=10^{8}\,{\rm cm^{-3}}\), as
\[\Lambda_{\rm red}=\beta_{\rm esc,att}\cdot\Lambda_{\rm thin}\,, \tag{1}\]
with an artificial escape fraction and an artificial optical depth as
\[\beta_{\rm esc,att}=\frac{1-\exp(-\tau_{\rm art})}{\tau_{\rm art}},\qquad\tau_{\rm art}=\left(\frac{n}{n_{\rm th}}\right)^{2}\,. \tag{2}\]
Dense regions exceeding the threshold density therefore experience compressional heating and form a hydrostatic core. In addition, we skip the calculation of chemical reactions for gas particles whose density exceeds the threshold value. We confirm that the radius of the dense core is less than 0.05 pc in all models, which does not affect the Jeans-scale structure analysed later.
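To illustrate equations (1)-(2), the sketch below (our own; `lambda_thin` stands for the optically thin cooling rate computed elsewhere in the code) shows how strongly the cooling is suppressed above the threshold density:

```python
import numpy as np

N_TH = 1.0e8  # threshold density n_th [cm^-3]

def reduced_cooling(n_H, lambda_thin, n_th=N_TH):
    """Reduced cooling rate of equations (1)-(2):
    Lambda_red = beta * Lambda_thin, beta = (1 - exp(-tau))/tau, tau = (n_H/n_th)^2.
    The reduction is applied only above the threshold density."""
    tau = (np.asarray(n_H, dtype=float) / n_th) ** 2
    beta = -np.expm1(-tau) / tau  # equals (1 - exp(-tau)) / tau
    return np.where(n_H > n_th, beta * lambda_thin, lambda_thin)

# ~0.25 of the optically thin rate at 2 n_th (tau = 4),
# and ~0.01 at 10 n_th (tau = 100).
print(reduced_cooling(2.0e8, 1.0), reduced_cooling(1.0e9, 1.0))
```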
We stop all runs with the opaque core methodology \(10^{5}\) yr after the gas particles reach the threshold density (\(n_{\rm th}=10^{8}\,{\rm cm^{-3}}\) at \(t_{\rm th}=0\,{\rm yr}\)). This calculation time is sufficiently longer than the free-fall time at the threshold density, \(t_{\rm ff}=5.2\times 10^{3}(n_{\rm th}/10^{8}\,{\rm cm^{-3}})^{-1/2}\,{\rm yr}\), so the protostar can form inside the opaque core in this time. The newly born protostar evolves to the zero-age main sequence phase on
Figure 2: Distribution of redshift and virial halo mass. The circles indicate 42 models in this study. The triangles, diamonds, stars, and squares indicate models in the previous works, Stacy et al. (2011), Greif et al. (2011), Hirano et al. (2017), and H18, respectively. The horizontal dotted and curving dashed lines show fitting functions of the minimum halo mass in Schauer et al. (2021, Equations 11–13) and Kulkarni et al. (2021, Equations 2-12) with no Lyman-Werner background. The lines connect models initiated by the same cosmological region by adding different initial streaming velocities. As shown in the legend, the colours of symbols and lines correspond to the magnitude of the initial streaming velocity, except for the red triangle where \(v_{\rm SV}/\sigma_{\rm SV}=3.3\)(Stacy et al., 2011).
average in \(\sim\!10^{5}\) yr in the case of first star formation (see Figure 1 in Hirano and Bromm, 2017) and begins to blow off the surrounding gas due to the UV radiative feedback (McKee and Tan, 2008). We end the calculations in this study at this time because our simulations ignore the UV radiative feedback.
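The quoted free-fall time can be checked directly; the sketch below (ours) assumes \(\rho\simeq m_{\rm H}n_{\rm H}\), which reproduces \(t_{\rm ff}\simeq 5\times 10^{3}\,\)yr at \(n_{\rm th}=10^{8}\,{\rm cm^{-3}}\) and shows that the \(10^{5}\,\)yr run time corresponds to roughly 20 free-fall times:

```python
import numpy as np

G_CGS = 6.674e-8    # gravitational constant [cm^3 g^-1 s^-2]
M_H = 1.6726e-24    # hydrogen-atom mass [g]
YR = 3.156e7        # one year [s]

def free_fall_time_yr(n_H, mu=1.0):
    """Free-fall time t_ff = sqrt(3*pi/(32*G*rho)) in years,
    with rho = mu * m_H * n_H and n_H in cm^-3."""
    rho = mu * M_H * n_H
    return np.sqrt(3.0 * np.pi / (32.0 * G_CGS * rho)) / YR

print(free_fall_time_yr(1.0e8))           # ~5.1e3 yr at n_th = 1e8 cm^-3
print(1.0e5 / free_fall_time_yr(1.0e8))   # the 1e5 yr run spans ~20 t_ff
```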
### Cloud and Core
We analyse the time-series data from the long-term simulations to determine the number of high-density regions inside which the first star can form. We define dense regions at two scales using different critical densities: (1) \(n_{\rm H}=10^{6}\,{\rm cm}^{-3}\) above which the collapsing cloud is already gravitationally unstable and (2) \(n_{\rm H}=10^{8}\,{\rm cm}^{-3}\) above which the opaque core methodology suppresses the gravitational collapse. This study refers to the former as the collapsing "cloud" and the latter as the star-forming "core".
If the mass of a high-density region exceeds the local Jeans mass, we classify it as a cloud/core. We adopt the Bonnor-Ebert mass (Bonnor, 1956; Ebert, 1955), \(M_{\rm BE}\), as the local Jeans mass:
\[M_{\rm BE} = \frac{1.18c_{\rm s}^{4}}{G^{3/2}P_{\rm ext}^{1/2}}\,, \tag{3}\] \[\approx 1050\,M_{\odot}\left(\frac{T}{200\,{\rm K}}\right)^{3/2}\left(\frac{n_{\rm H}}{10^{4}\,{\rm cm}^{-3}}\right)^{-1/2}\,, \tag{4}\]
where \(c_{\rm s}\) is the speed of sound, \(G\) is the gravitational constant, \(P_{\rm ext}\) is the external pressure, \(T\) is the gas temperature, and \(n_{\rm H}\) is the hydrogen number density. We calculate the local Jeans radius where \(M_{\rm BE}(r)/M_{\rm enc}(r)\) is maximal, where \(M_{\rm enc}(r)\) is the enclosed mass within a radius \(r\) from the density centre of cloud/core.
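For reference, the scaling form (4) can be evaluated directly; the helper below is our own illustration, and the temperature adopted at the core threshold density is only an indicative assumption:

```python
def bonnor_ebert_mass_msun(T, n_H):
    """Bonnor-Ebert mass of equation (4) in solar masses,
    for gas temperature T [K] and hydrogen number density n_H [cm^-3]."""
    return 1050.0 * (T / 200.0) ** 1.5 * (n_H / 1.0e4) ** -0.5

# At the fiducial values of equation (4) the threshold is ~1e3 M_sun; at the
# core threshold density, assuming T ~ 1000 K for illustration, it is ~1e2 M_sun.
print(bonnor_ebert_mass_msun(200.0, 1.0e4))   # 1050.0
print(bonnor_ebert_mass_msun(1000.0, 1.0e8))  # ~117
```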
We exclude gravitationally trapped clouds/cores that move to within the Jeans radius of the primary cloud/core, because they eventually merge with the primary one. We adopt \(r_{\rm J}=0.25\,{\rm pc}\) for this classification, obtained by averaging the Jeans radii of the gas clouds in these calculations.
## 3 Numerical Simulations
We study the number of first star-forming regions (clouds and cores) in 42 cosmological simulations. First, we summarise the effect of SV on physical properties at the virial scale (Section 3.1) and the Jeans scale (Section 3.2), respectively. Then we classify gas clouds into three types according to their morphology (Section 3.3) and show the number of clouds and cores (Section 3.4). Table 1 summarises the analysis results of 42 models discussed in this section.
### Virial halo
Figure 2 shows the dependence of the formation redshift and halo mass on the initial SV values. As SV increases, star formation is delayed and the halo
Figure 4: Baryon fraction of the viral halo as a function of the streaming velocity at the time of the collapse, \((v_{\rm SV}/\sigma_{\rm SV})(1+z)/(1+z_{\rm rec})\). The horizontal dashed line indicates the cosmological mean baryon fraction, \(\Omega_{\rm b}/\Omega_{\rm m}=0.155\).
Figure 3: Physical properties of the virial halos as a function of the initial streaming velocity. Panels: (a) formation redshift and corresponding cosmic age, (b) virial halo mass, and (c) baryon fraction, respectively. The horizontal dashed line in panel (c) indicates the cosmological mean baryon fraction, \(\Omega_{\rm b}/\Omega_{\rm m}=0.155\).
mass increases (upper-left direction).3 This change is more significant for models that form at higher redshifts (i.e. in the order of Halo A to G). Figures 3(a) and 3(b) show the dependence of the redshift and halo mass on the model halos. For example, the formation redshift is smaller by \(dz=13.37\) and the halo mass is larger by a factor of \(\Delta M_{\rm v}=175\) in Model A30 compared with Model A00, whereas \(dz=5.08\) and \(\Delta M_{\rm v}=8.5\) in Model G30 compared with Model G00. As a result, the ratios of \(\Delta M_{\rm v}\) to \(dz\) (the slopes of the lines in Figure 2) are almost the same for the A-G models, and on average \(\Delta M_{\rm v}\sim 100\) for \(dz\sim 10\) between models with \(v_{\rm SV}/\sigma_{\rm SV}^{\rm rec}=0\) and 3.0.
Footnote 3: Exceptionally, for models B25 and B30, this relationship is reversed with earlier formation epochs and smaller halo masses.
Besides delaying the star formation epoch, another effect of SV on halo-scale properties is to change the gas mass fraction, i.e. the baryon fraction \(f_{\rm b}=M_{\rm v,b}/M_{\rm v}\). Figure 3(c) shows the dependence of the baryon fraction on the initial SV. The baryon fraction becomes smaller for halos born at higher redshift. The higher-redshift models, e.g. Halos A and B, have lower baryon fractions regardless of SV, while the lower-redshift models, e.g. Halo G, have higher baryon fractions. In the intermediate models, Halos C to F, the baryon fractions tend to decrease with SV (negative correlation). Figure 4 shows the dependence on the SV at the collapse time (the redshift in Table 1). The baryon fraction decreases with SV where the SV at the collapse time is
Figure 5: Distributions of the projected gas number density around the density peak for all models when the density first reaches \(10^{6}\,{\rm cm^{-3}}\). The box sizes are \(50\,{\rm pc}\) on a side. Each model is placed on the parameter space of Halo A-G (vertical axis) and normalised initial SV \(v_{\rm SV}/\sigma_{\rm SV}=0-3.0\) (horizontal axis). The direction of the initial streaming velocity is aligned with the panel’s horizontal axis (from left to right). The letter at the bottom right of each panel indicates the classification of the gas cloud structure (Types S, F, and C; see Section 3.3).
above about 0.03, except in some models. We attribute the increase in the baryon fraction with SV in some models to the deepening of the gravitational potential of the halo, which allows the inflowing gas to remain in the halo without leaking out.
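The horizontal axis of Figure 4 is the initial SV redshifted to the collapse epoch, \((v_{\rm SV}/\sigma_{\rm SV})(1+z)/(1+z_{\rm rec})\); the following minimal helper (our own illustration) makes the numbers concrete:

```python
Z_REC = 1089.0        # recombination redshift
SIGMA_SV_REC = 30.0   # rms streaming velocity at recombination [km/s]

def sv_at_collapse(v_norm, z_collapse):
    """Return the Figure 4 abscissa (v_SV/sigma_SV)*(1+z)/(1+z_rec) and the
    physical streaming velocity [km/s] at the collapse redshift."""
    factor = (1.0 + z_collapse) / (1.0 + Z_REC)
    return v_norm * factor, v_norm * SIGMA_SV_REC * factor

# Example: a halo with initial v_SV/sigma_SV = 2 collapsing at z = 25
print(sv_at_collapse(2.0, 25.0))  # ~(0.048, 1.4 km/s), above the ~0.03 value noted above
```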
Figure 5 shows the gas density distribution inside each halo. In the absence of SV (left-most column), the gas in the halo contracts while maintaining a spherically symmetric structure. As SV increases, the gas structure deviates from spherical symmetry, and an elongated filamentary or sheet-like structure appears. There are two possible mechanisms by which SV changes the density structure of the gas inside the DM halo.
I. Crushing orthogonal to the initial SV direction by the gravitationally bound gas flowing into the halo.
II. Spreading parallel to the initial SV direction by gas that is not gravitationally bound and flows out of the halo.
Two model parameters change the degree of the above two effects: the larger \(M_{\rm v}\), the more substantial effect I appears, while the faster
Figure 6: Phase diagrams of the gas temperature (\(T\)) as a function of the gas number density (\(n_{\rm H}\)) for all models at the end of the simulations, \(10^{5}\) yr after the first core formation. Each model is placed on the parameter space of Halo A-G (vertical axis) and normalised initial SV \(v_{\rm SV}/\sigma_{SV}=0-3.0\) (horizontal axis). The colour map shows the distribution of the gas mass \(\Delta M\) contained in the region of the logarithmic width \(\Delta(\log n_{\rm H})\) and height \(\Delta(\log T)\), where the redder the region, the larger the gas mass contained in it. The black lines show the logarithmic mean temperature weighted by the gas mass. The stars indicate the \(n_{\rm H}\)-\(T\) points where the collapsing cloud becomes gravitationally unstable. The three dashed lines in each panel show the \(n_{\rm H}\)-\(T\) relation for the Jeans masses with \(M_{\rm J}=10^{6}\), \(10^{4}\), and \(10^{2}\,M_{\odot}\) (left to right), respectively. The letter at the bottom right of each panel indicates the classification of the gas cloud structure (Types S, F, and C; see Section 3.3).
SV, the more substantial effect II appears. As a result, various gas density structures appear.
### Jeans cloud
We set \(t_{\rm th}=0\) when the gas number density at the centre of the collapsing cloud first reaches \(n_{\rm H}=n_{\rm th}=10^{8}\,{\rm cm}^{-3}\) and continue the simulation until \(t_{\rm th}=10^{5}\,{\rm yr}\) using the opaque core methodology (Section 2.3).
Figure 6 displays the phase diagrams on the density-temperature (\(n_{\rm H}-T\)) plane of all models at the end of the simulation (\(t_{\rm th}=10^{5}\,{\rm yr}\)). In the lower-density region (\(n_{\rm H}\lesssim 10\,{\rm cm}^{-3}\)), the gas is contracted by the gravity of the DM halo and is adiabatically compressed. Since the DM halo mass increases with SV, both the mass and temperature of the gas material inside the DM halo increase with SV (from left to right panels in Figure 6).
After the molecular hydrogen (H\({}_{2}\)) forms, H\({}_{2}\) radiative cooling exceeds the compression heating, and the gas temperature decreases (\(10\lesssim n_{\rm H}/{\rm cm}^{-3}\lesssim 10^{5}\)). As a result of the temperature drop, the pressure also decreases, which leads to a self-gravitational contraction of the gas cloud.
The thermal evolution of the collapsing gas cloud turns from temperature decrease to increase around \(n_{\rm H}\sim 10^{3}\,{\rm cm}^{-3}\). At the point at which the gas temperature reaches a minimum (the "loitering point"), the gas cloud becomes gravitationally unstable (see star symbols in Figure 6). After that, the gas cloud can gravitationally contract while increasing its gas temperature.
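The Jeans-mass contours shown in Figure 6 can be evaluated with a short script such as the one below. It uses one standard form of the Jeans mass and approximates the mass density as \(\rho\approx\mu m_{\rm H}n_{\rm H}\) with \(\mu=1.22\); the exact numerical coefficient and the treatment of the mean molecular weight vary between studies, so the numbers are indicative only.

```python
import numpy as np

# CGS constants
K_B = 1.380649e-16   # erg / K
G = 6.674e-8         # cm^3 g^-1 s^-2
M_H = 1.6726e-24     # g
M_SUN = 1.989e33     # g
MU = 1.22            # assumed mean molecular weight of neutral primordial gas

def jeans_mass(n_H, T):
    """Jeans mass in solar masses for number density n_H [cm^-3] and temperature T [K].

    M_J = (5 k_B T / (G mu m_H))^(3/2) * (3 / (4 pi rho))^(1/2),
    with rho ~ mu * m_H * n_H as a rough approximation.
    """
    rho = MU * M_H * n_H
    m_j = (5.0 * K_B * T / (G * MU * M_H)) ** 1.5 * (3.0 / (4.0 * np.pi * rho)) ** 0.5
    return m_j / M_SUN

# Near the H2-cooling "loitering point" (n_H ~ 1e4 cm^-3, T ~ 200 K)
print(f"M_J ~ {jeans_mass(1e4, 200.0):.0f} Msun")
# HD-cooled gas reaching ~50 K at the same density becomes unstable on a smaller mass scale
print(f"M_J ~ {jeans_mass(1e4, 50.0):.0f} Msun")
```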
Besides H\({}_{2}\)-cooling, hydrogen deuteride (HD)-cooling is also vital for metal-free star formation. Previous studies suggested that HD-cooling becomes influential on the thermal evolution of the collapsing
Figure 7: Same as Figure 6 but the vertical axis shows the abundance ratio \(f_{\rm HD}/f_{\rm H_{2}}\).
cloud if the abundance ratio \(f_{\rm HD}/f_{\rm H_{2}}\) exceeds \(10^{-3}\) (Ripamonti, 2007). Figure 7 shows the phase diagrams of the ratio for all models. The abundance ratio exceeds the threshold value in some models (e.g. A30, B15, D30). The temperature in those models sharply decreases at \(n_{\rm H}=10^{4}-10^{6}\,{\rm cm}^{-3}\) (Figure 6), which can affect the fragmentation scale during the long-term evolution.
The above phenomena occur in a density lower than the threshold value \(n_{\rm th}=10^{8}\,{\rm cm}^{-3}\). When the gas density exceeds the threshold density, the temperature artificially increases due to the opaque core methodology.
### Structure classification
Figure 5 displays various 3D morphologies of the collapsing gas clouds depending on the initial SV values. The shape of the gas cloud, particularly the degree of elongation, relates to the (filament) fragmentation process. We classify the shape of gas clouds into the following three types, Types S, F, and C. We distinguish them using
Figure 8: 3D Distributions of the gas (SPH) particles with \(n_{\rm H}\simeq 10^{3}\,{\rm cm}^{-3}\), which are used to classify the cloud’s morphology. The left panels show Type S (spherical), Models C00 and E00. The middle panels show Type F (filamentary), Models A15 and F25. The right panels show Type C (complex), Models D25 and G20. The unit of the axis in all panels is parsec.
Figure 10: The ratio of the maximum and minimum temperatures of the collapsing gas cloud, \(T_{\rm max}/T_{\rm min}\), as a function of the redshift. The temperatures are calculated from the average track (black solid lines in Figure 6) with \(n_{\rm H}=1-10^{6}\,{\rm cm}^{-3}\). The circles, triangles, and squares correspond to Type S, F, and C, respectively. The filled symbols are for the models in this study. The open symbols are for the models in H18, for comparison. Models in which multiple clouds (\(n_{\rm H}\geq 10^{6}\,{\rm cm}^{-3}\)) are detected have the number of clumps beside the symbol.
Figure 9: Fraction of the cloud’s structure classes (Types S, F, and C) as a function of SV. The solid, dotted, and dashed lines correspond to Type S, F, and C, respectively.
the ratio of the major and minor axes (\(a/b\)) of the iso-density surface at \(n_{\rm H}=10^{3}\,{\rm cm}^{-3}\). Figure 8 shows examples of the three distinct shapes.
S (spherical): iso-density surfaces with \(a/b<3\), which includes structures from spheres to ellipsoids. We henceforth classify them as spherical structures to distinguish them from the filamentary structure discussed below.
F (filamentary): iso-density surfaces are approximated as an elongated cylinder with \(a/b>3\).
C (complex): iso-density surfaces consist of multiple filaments and/or sheets and are approximated as neither sphere nor cylinder.
Figure 9 summarises the dependence of the cloud structure on SV. The cloud structure is strongly influenced by the initial SV of the forming region: Types S, F, and C are dominant in the low (\(v_{\rm SV}/\sigma_{SV}\leq 1.0\)), intermediate (\(1.0\leq v_{\rm SV}/\sigma_{SV}\leq 2.5\)), and high (\(v_{\rm SV}/\sigma_{SV}\geq 2.5\)) SV models, respectively. The Type S clouds share a similar structure, and their iso-density surfaces lie within a radius of 5 pc. In contrast, for the Type F clouds, which have elongated structures, the thickness of the filaments is about \(3-5\) pc and the length is distributed around \(20-40\) pc. Some of the filaments have large curvatures or are split in the middle. For the Type C clouds, we find multiple filaments intertwined within a structure extending over several tens of pc.
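The classification described above can be approximated numerically. The sketch below estimates the axis ratio \(a/b\) from the covariance (inertia) tensor of the particles tracing the \(n_{\rm H}=10^{3}\,{\rm cm}^{-3}\) iso-density surface and applies the \(a/b<3\) criterion; the identification of Type C, which in this paper relies on recognising multiple intertwined filaments/sheets, is represented only by a placeholder component count.

```python
import numpy as np

def axis_ratio(positions):
    """Approximate major-to-minor axis ratio a/b of a particle distribution
    from the square roots of the covariance-matrix eigenvalues."""
    centred = positions - positions.mean(axis=0)
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(centred.T)))[::-1]  # descending
    return np.sqrt(eigvals[0] / eigvals[-1])

def classify_morphology(positions, n_components=1, ab_threshold=3.0):
    """Assign Type S / F / C following the a/b < 3 criterion of Section 3.3.

    `n_components` stands in for a separate connectivity analysis counting
    distinct filaments/sheets; this sketch does not reproduce that step.
    """
    if n_components > 1:
        return "C"
    return "S" if axis_ratio(positions) < ab_threshold else "F"

# Toy example: an elongated, filament-like particle distribution (~20 pc x 3 pc x 3 pc)
rng = np.random.default_rng(0)
pts = rng.normal(scale=[20.0, 3.0, 3.0], size=(5000, 3))
print(classify_morphology(pts))   # -> "F"
```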
When the cloud contracts while lowering its temperature, the asymmetry of the cloud structure increases due to progressive contraction in a specific direction (dimension). We focus on the magnitude of the temperature drop of the collapsing cloud to explain the three types of cloud morphologies. The maximum temperature at the virial scale depends on the halo mass and increases with SV (\(\sim 10^{3}-10^{4}\) K). The minimum temperature at the Jeans scale (the "loitering point") depends on the coolants; H\({}_{2}\) can cool down to 200 K and HD to 50 K. Therefore, the temperature drops generally increase as SV increases (Figure 6).
Figure 10 shows the ratio of the maximum and minimum temperatures on the average track (solid black lines in Figure 6) during \(n_{\rm H}=1-10^{6}\) cm\({}^{-3}\). The asymmetry/complexity of gas clouds tends to increase (Type S \(\rightarrow\) F \(\rightarrow\) C) as the temperature ratio increases. The temperature ratio \(T_{\rm max}/T_{\rm min}=10-15\) is a threshold, below which the cloud is symmetric (Type S) and above which it is asymmetric (Types F and C). Above the threshold ratio, the formation redshift becomes an additional parameter: at lower redshift, models with relatively small temperature ratios are classified as Type C.4
Footnote 4: Model F10 (\(z=22.70\) and \(T_{\rm max}/T_{\rm min}=15\)) is classified as Type C despite having a small SV and a relatively small temperature ratio. It has a short side chain branching from the centre of the filament, but the actual structure is similar to Type F.
### Numbers of clouds and cores
To identify the formation sites of the first stars, we adopt two physical conditions: (1) "clouds" with \(n_{\rm H}\geq 10^{6}\) cm\({}^{-3}\), where the collapsing cloud becomes gravitationally unstable, and (2) "cores" with \(n_{\rm H}\geq 10^{8}\) cm\({}^{-3}\), which is the threshold density of the opaque core technique. Figure 11 shows distributions of the clouds (blue circles) and cores (red circles) at the end of the simulations. The formation of multiple clouds and cores occurs, especially on elongated filamentary structures, even in Type C models.
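A minimal sketch of this cloud/core identification is given below: particles above each density threshold are grouped with a simple friends-of-friends linking, and core pairs closer than the local Jeans length (\(0.25\) pc) are flagged as close pairs. The linking length is an illustrative choice, since the paper does not specify its exact clump-finding procedure.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def find_clumps(positions, n_H, threshold, linking_length=0.1):
    """Group particles with n_H >= threshold into clumps via friends-of-friends
    linking and return one centre position per clump (positions in pc)."""
    pos = positions[n_H >= threshold]
    n = len(pos)
    if n == 0:
        return np.empty((0, 3))
    pairs = np.array(list(cKDTree(pos).query_pairs(r=linking_length)))
    if len(pairs) == 0:
        adj = coo_matrix((n, n))
    else:
        adj = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(n, n))
    n_clumps, labels = connected_components(adj, directed=False)
    return np.array([pos[labels == i].mean(axis=0) for i in range(n_clumps)])

def close_pairs(centres, r_jeans=0.25):
    """Flag clump pairs separated by less than the local Jeans length (0.25 pc)."""
    return sorted(cKDTree(centres).query_pairs(r=r_jeans))

# clouds = find_clumps(pos, n_H, 1e6); cores = find_clumps(pos, n_H, 1e8)
# close_pairs(cores) would flag pairs like those found in Models C00 and D30.
```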
Figure 12 summarises the number of clouds and cores. The colours of cells distinguish the classes of cloud structure. The more complex the cloud structure is (Type S \(\rightarrow\) F \(\rightarrow\) C), the larger the number of clouds and cores.
S (spherical): Most of the gas clouds in this class form a single core at the centre of the spherically symmetric collapsing gas cloud. This is consistent with the typical scenario of first star formation. The exceptions are Models B15 and C00. Model B15, in which two clouds and one core are detected, is considered an intermediate state between Types S and F. The gas cloud of Model B15 has a higher oblateness than other Type S models and fragments into two clouds. Model C00, in which three clouds and two cores are detected, on the other hand, is exceptional. The multiple clouds and cores form inside the spherically symmetric collapsing gas cloud despite the absence of SV. Because two cores are born near the density centre, they approach a distance less than the local Jeans length (\(r_{\rm J}=0.25\) pc is the average value for the collapsing gas clouds in this study) at \(t_{\rm th}=0.8\times 10^{5}\) yr. We mark them as a close pair of cores with the black circles in Figure 11.
F (filamentary): A maximum of two objects are formed along the filamentary gas cloud. Models A20, B30, and E15 form a pair of cloud and core with distances 0.66, 0.63, and 5.2 pc, respectively. Models A25 and A30 form a pair of two cores with distances 1.9 and 2.6 pc, respectively. The fragmentation scale corresponds to the Jeans length \(r_{\rm J}=2\) pc at \(n_{\rm H}=10^{4}\) cm\({}^{-3}\).
C (complex): A maximum of six objects are formed in the complex gas cloud. In these cases, clouds and cores form in a row on a dense filament, ranging from about 5 pc to more than 20 pc in length. Model D25 has the largest number of clouds (six) and cores (four) examined in this paper. Model D30 has a close pair of cores formed by approaching within the Jeans length (\(r_{\rm J}=0.25\) pc) at \(t_{\rm th}=0.9\times 10^{5}\) yr.
The analytical study of the filament fragmentation (Inutsuka & Miyama 1992, 1997) showed that the filament thickness is approximately equal to the Jeans length and the filament fragments at every Jeans length. The models classified as Types F and C in this study have dense filaments. However, the number of clouds and cores is smaller than that for the case of splitting at each Jeans length. In most models, dense filaments form only one core without fragmentation.
## 4 Discussion
### Star formation efficiency
To calculate the long-term thermodynamic evolution of the massive gas cloud of \(10^{4}-10^{5}\,M_{\odot}\), we limit the maximum computational density to \(n_{\rm th}=10^{8}\) cm\({}^{-3}\). Since this is lower than the formation density of protostars (\(\sim 10^{20}\) cm\({}^{-3}\)), the current simulations cannot determine the masses of the primary stars born inside individual cores.
On the other hand, since the first stars form at the centre of the identified cores, the core mass provides an upper limit on the first star mass. If we consider that the core mass correlates with the masses of the first stars, it makes sense to examine the dependence of the core mass on the model parameters (host halo and initial SV).
Figure 13 shows the enclosed gas mass for each model as a function of the gas number density. On the low-density side (large-scale), which corresponds to the early stages of star formation, the enclosed gas mass increases with increasing initial SV for all models. The delay of the gas contraction (beginning of the star formation) in the halo due to SV causes this correlation and increases gas mass at the virial scale (\(M_{\rm v,b}\) in Figure 14(a)).
However, as the gas contraction proceeds, the SV dependence of
the gas mass is more varied on the high-density side (small scale) in Figure 13. For models related to Halo A, the gas clouds contract while maintaining the positive SV correlation. For Halos B-D, models initiated with high SV also show a positive correlation. On the other hand, the gas masses of models initiated with intermediate SV fall below that of the model without SV on the high-density side. Halos E-G show that the SV dependence of the gas mass reverses from positive to negative during contraction.
At the scale where the contracting gas cloud first becomes gravitationally unstable as a sphere, filament, or sheet-like structure, the gas mass of such objects (\(M_{\rm J,whole}\) in Table 1) increases monotonically with SV, by a factor of about 40 on average for \(v_{\rm sv}/\sigma_{\rm sv}=0\to 3.0\). Gravitational contraction then proceeds, and clouds/cores form inside these objects. We confirm that the local Jeans mass around each core (\(M_{\rm J}\)) does not always correlate with the Jeans mass of the parent cloud (\(M_{\rm J,whole}\)).
Figure 14(b) summarises the total core mass, i.e. the sum of the core masses in each model, at the end of the simulations. On average, the total core mass depends only weakly on SV. This weak dependence means that the ratio of the total core mass to the virial gas mass decreases with SV (Figure 14(c)). Let us consider this ratio to represent the upper limit of the star formation efficiency used in the context of galaxy formation. The formation efficiency decreases with SV by more than one order of magnitude (see the grey-shaded region in Figure 14(c)),
\[\epsilon_{\rm III}\propto\frac{M_{\rm core,tot}}{M_{\rm v,b}}=(0.004\sim 0.01)\cdot\exp\left(-\frac{v_{\rm sv}}{\sigma_{\rm sv}}\right)\,. \tag{5}\]
Figure 11: Distributions of the projected gas number density around the density peak at the end of the simulations (\(t_{\rm th}=10^{5}\,\rm yr\)). We plot each model on the parameter space of Halos A-G (vertical axis) and normalised initial SV \(v_{\rm SV}/\sigma_{\rm SV}=0-3.0\) (horizontal axis). The box sizes are 10 pc on a side. The red circles show cores (with \(n_{\rm H}\geq 10^{8}\,\rm cm^{-3}\)), indicating that a core has formed by contraction inside the cloud. The blue circles show clouds (with \(n_{\rm H}\geq 10^{6}\,\rm cm^{-3}\)), indicating that a cloud has formed but a core has not yet formed inside it. We blacked out the inside of the circle to indicate close pairs of cores (\(r<r_{\rm J}=0.25\,\rm pc\)) found in Models C00 and D30. Model D25 has an elongated cloud, so we show the entire structure in the widened panel (30 pc along the vertical axis) outside the right side of the figure. The letter at the bottom right of each panel indicates the classification of the cloud structure (Types S, F, and C).
Massive gas clouds born under large SVs do not necessarily form many first stars.
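For reference, Equation 5 can be evaluated directly; the snippet below uses a representative prefactor within the quoted \(0.004-0.01\) band.

```python
import numpy as np

def sfe_upper_limit(v_sv_over_sigma, prefactor=0.007):
    """Upper limit on the Population III star formation efficiency from Equation 5.

    The prefactor may lie anywhere in the quoted 0.004-0.01 band; 0.007 is just a
    representative midpoint chosen for illustration.
    """
    return prefactor * np.exp(-np.asarray(v_sv_over_sigma, dtype=float))

for sv in (0.0, 1.0, 2.0, 3.0):
    print(f"v_SV/sigma_SV = {sv:.1f}  ->  M_core,tot / M_v,b <~ {sfe_upper_limit(sv):.2e}")
```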
### Supermassive star
Among the models in this study, the maximum mass of the dense core is \(M_{\rm core}\sim 1400\,M_{\odot}\), and the total value for each model is \(M_{\rm core,tot}\sim 3000\,M_{\odot}\) at most. The studied models have no supermassive stars with \(\sim 10^{5}\,M_{\odot}\) that collapse directly into the intermediate-mass BHs (IMBHs).
There are several models with \(M_{\rm J,whole}=10^{4}-10^{5}\,M_{\odot}\) (Table 1). If a protostar efficiently acquires mass in such a gas cloud, the protostellar radiative feedback, which can set the final stellar mass by terminating the mass accretion, will not work (McKee & Tan, 2008; Hosokawa et al., 2011). In this case, the protostar may grow to a large mass over 1 million years. However, the present calculation only estimates the core mass at \(10^{5}\,\)yr after the formation of the high-density region (which corresponds to the protostar formation epoch), and these simulations do not constrain the upper limit of the final stellar mass in a high-accretion environment.
We analysed the radially averaged accretion rate centred on the dense core to obtain the total gas mass in the range above the critical mass accretion rate required to suppress the protostellar radiative feedback, \(\dot{M}_{\rm cr}=0.047\,M_{\odot}\,\)yr\({}^{-1}\)(Hosokawa et al., 2012). Since there is no mechanism to suppress gas accretion without radiative feedback while this gas is accreting onto the protostar, we can consider this gas mass as the stellar mass, assuming that the protostar continues to grow in mass. Among the masses estimated in this way, Model B30 has the largest value, \(9233\,M_{\odot}\), similar to the intermediately massive first stars found in previous studies (e.g., Hirano et al., 2017; Wise et al., 2019; Regan et al., 2020). Some of the models investigated in this study may form IMBHs, which will be clarified by examining the results of long-term calculations to be performed in the future.
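A sketch of this mass estimate is shown below. The shell-based definition \(\dot{M}(r)=4\pi r^{2}\langle\rho\rangle\langle-v_{r}\rangle\) is our assumption, since the text does not spell out the averaging procedure; only the critical rate \(\dot{M}_{\rm cr}=0.047\,M_{\odot}\,{\rm yr}^{-1}\) is taken from the reference cited above.

```python
import numpy as np

M_SUN = 1.989e33      # g
YR = 3.156e7          # s
MDOT_CR = 0.047       # Msun/yr, critical accretion rate (Hosokawa et al. 2012)

def mass_above_critical_rate(r, m, rho, v_rad, n_bins=64):
    """Sum the shell gas mass where the radially averaged inflow rate exceeds MDOT_CR.

    r [cm], m [g], rho [g/cm^3], v_rad [cm/s] are particle arrays centred on a core;
    returns the mass in solar masses.
    """
    edges = np.logspace(np.log10(r.min() + 1e-10), np.log10(r.max()), n_bins + 1)
    idx = np.digitize(r, edges) - 1
    total = 0.0
    for i in range(n_bins):
        in_bin = idx == i
        if not np.any(in_bin):
            continue
        r_mid = 0.5 * (edges[i] + edges[i + 1])
        mdot = 4.0 * np.pi * r_mid**2 * rho[in_bin].mean() * max(-v_rad[in_bin].mean(), 0.0)
        if mdot / M_SUN * YR >= MDOT_CR:
            total += m[in_bin].sum()
    return total / M_SUN
```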
Hirano et al. (2017), who directly calculated the formation of a supermassive first star, selected a target DM halo whose central velocity dispersion is \(\sim 160\,\)km s\({}^{-1}\) at \(z=7\), which is consistent with the estimated value for the host galaxies of observed high-\(z\) SMBHs. In the target halo initiated with \(v_{\rm SV}/\sigma_{\rm SV}=3.0\), a massive first star with \(3.4\times 10^{4}\,M_{\odot}\) was formed at \(z=30.5\) inside a massive DM halo with \(M_{\rm v}=2.2\times 10^{7}\,M_{\odot}\) (a star symbol in Figure 2). If we find samples of supermassive first stars forming in the models investigated in this paper, we can relax the formation conditions. Further systematic studies calculating longer-term evolution are needed to clarify to what extent the conditions for forming supermassive first stars can be relaxed.
### Multiple and binary
Another issue of this series of studies is whether massive gas clouds formed by delayed star formation due to SV become gravitationally unstable and fragment. Five models in this study show the formation of multiple cores (\(N_{\rm B}>1\); Table 1). Four cores formed in Model D25 (\(N_{\rm B}=4\)), the largest number among the models in this study, and their masses range from 52 to 792 \(M_{\odot}\). The four cores lie along a long filament (Figure 11). The cores (red circles) and clouds (blue circles, overplotted by red ones) are all found on filamentary structures.
Two of the models with multiple core formations identified two cores approaching within the Jeans length (\(r_{\rm J}=0.25\,\)pc) at the end of the calculation (asterisks in Table 1 and black circles in Figure 11).
* Model C00: \(M_{\rm core}=234\) and 107 \(M_{\odot}\)
* Model D30: \(M_{\rm core}=759\) and 524 \(M_{\odot}\)
In Model D30, given the large SV, the core mass, which is the upper limit of stellar mass, is also increased.
Since these simulations do not numerically resolve the individual stellar scales, whether these cores will eventually form a close binary pair remains uncertain. Also, due to the limitations of this study's numerical resolution and output time increments, we may have missed clouds/cores that merged in the process. To uncover the formation and evolution of multiple clouds/cores, high-resolution and long-term simulations will be necessary in the future.
### Numerical resolution
Finally, we discuss the dependence of the results on the numerical resolution. In this study, we limit the maximum density to the threshold density \(n_{\rm th}\), whose corresponding scale is smaller than 0.1 pc, to follow the first \(10^{5}\,\)yr of evolution during the accretion phase. Whether the number of fragments increases when calculating up to higher densities (smaller scales) is a factor that determines the applicability of the results of this study.
To confirm the dependence on numerical resolution, we simulate Model B10 with \(n_{\rm th}=10^{10}\,\)cm\({}^{-3}\). Figure 15 compares the gas density distributions for simulations with different \(n_{\rm th}\), and there is no apparent difference in the overall structure. Since the higher computational resolution allows smaller structures to be resolved, the density contrast is higher for the model with higher \(n_{\rm th}\), owing to stronger shrinkage in the direction of filament crushing (Figure 16). However, this contributes little to fragmentation.
## 5 Conclusions
In this paper, we have performed a suite of 42 simulations of the first star formation in a \(\Lambda\)CDM universe but under different initial streaming velocities (SVs). We continue our simulation over 100,000 years after the formation of the first dense cloud, intending to study the final fate of the massive gravitationally unstable cloud hosted by a massive halo. Our principal results are as follows:
Figure 12: Numbers of clouds (\(N_{6}\) where \(n_{\rm H}\geq 10^{6}\,\)cm\({}^{-3}\)) and cores (\(N_{8}\) where \(n_{\rm H}\geq 10^{8}\,\)cm\({}^{-3}\)) at the end of the simulations (\(t_{\rm th}=10^{5}\,\)yr). We omit the number “1” and leave the cell blank. The cell colours indicate the structure class of the clouds: blue, green, and red correspond to Types S, F, and C, respectively.
* As the initial SV increases (\(v_{\rm SV}/\sigma_{\rm SV}^{\rm rec}=0\to 3\)), the halo forms later (\(dz\sim 10\)) and its mass increases (\(\Delta M_{\rm v}\sim 100\)). The mass of the gravitationally contracting gas inside a halo when it first becomes gravitationally unstable also increases (\(\Delta M_{\rm J,whole}\sim 40\)). On the other hand, the total mass of dense cores does not necessarily increase, resulting in a decrease in star formation efficiency (\(\Delta(M_{\rm core,tot}/M_{\rm v,b})\sim 10^{-2}\)).
* As the initial SV increases, the morphology of the massive gas cloud that forms in the halo becomes more complex: spherical, filamentary, and then complex. The degree of the temperature drop during contraction determines the morphological change of the massive gas cloud. When the ratio of the maximum to minimum temperature during contraction exceeds \(T_{\rm max}/T_{\rm min}\sim 10\), the shape changes from symmetric to asymmetric.
* As the initial SV increases, the asymmetric massive gas cloud fragments more easily, forming an association of dense clouds. Among the 42 models in this paper, we confirmed the simultaneous formation of up to four dense gas clouds. Their total mass is about \(2254\,M_{\odot}\), corresponding to the upper limit of the stellar mass among the examined models.
## Acknowledgements
We want to thank Kei Kanaoka for his contributions to the early stages of this study. Numerical computations were carried out on Cray XC50 at CfCA in National Astronomical Observatory of Japan and Yukawa-21 at YITP in Kyoto University. Numerical analyses were in part carried out on the analysis servers at CfCA in National Astronomical Observatory of Japan. This work was supported by JSPS KAKENHI Grant Numbers JP18H05222, JP21K13960, JP22H01259 (S.H.) and JP21H01123 (H.U. and S.H.), Qdai-jump Research Program 02217 (S.H.), and MEXT as "Program for Promoting Researches on the Supercomputer Fugaku" (Structure and Evolution of the Universe Unraveled by Fusion of Simulation and AI; Grant Number JPMXP1020230406, Project ID hp230204) (S.H.).
Figure 14: Gas mass as a possible mass fuel for the first star formation as a function of the initial SV. Panels: (a) gas mass inside the virial radius, (b) total core masses where \(n_{\rm H}\geq 10^{8}\,{\rm cm}^{-3}\), and (c) ratio of the above two masses. The grey-shaded region in panel (c) corresponds to Equation 5.
Figure 13: Distributions of the enclosed gas mass as a function of the gas number density, \(M_{\rm enc,gas}(>n_{\rm H})\), at the end of the simulation (\(t_{\rm th}=10^{5}\,{\rm yr}\)). Each panel shows Halos A to G results with different initial SV \(v_{\rm SV}/\sigma_{\rm SV}=0-3.0\).
## Data Availability
The data underlying this article will be shared on reasonable request to the corresponding author.
|
2302.03861 | SwinCross: Cross-modal Swin Transformer for Head-and-Neck Tumor
Segmentation in PET/CT Images | Radiotherapy (RT) combined with cetuximab is the standard treatment for
patients with inoperable head and neck cancers. Segmentation of head and neck
(H&N) tumors is a prerequisite for radiotherapy planning but a time-consuming
process. In recent years, deep convolutional neural networks have become the de
facto standard for automated image segmentation. However, due to the expensive
computational cost associated with enlarging the field of view in DCNNs, their
ability to model long-range dependency is still limited, and this can result in
sub-optimal segmentation performance for objects with background context
spanning over long distances. On the other hand, Transformer models have
demonstrated excellent capabilities in capturing such long-range information in
several semantic segmentation tasks performed on medical images. Inspired by
the recent success of Vision Transformers and advances in multi-modal image
analysis, we propose a novel segmentation model, dubbed Cross-Modal Swin
Transformer (SwinCross), with cross-modal attention (CMA) module to incorporate
cross-modal feature extraction at multiple resolutions. To validate the
effectiveness of the proposed method, we performed experiments on the HECKTOR
2021 challenge dataset and compared it with the nnU-Net (the backbone of the
top-5 methods in HECKTOR 2021) and other state-of-the-art transformer-based
methods such as UNETR, and Swin UNETR. The proposed method is experimentally
shown to outperform these comparing methods thanks to the ability of the CMA
module to capture better inter-modality complementary feature representations
between PET and CT, for the task of head-and-neck tumor segmentation. | Gary Y. Li, Junyu Chen, Se-In Jang, Kuang Gong, Quanzheng Li | 2023-02-08T03:36:57Z | http://arxiv.org/abs/2302.03861v1 | # SwinCross: Cross-modal Swin Transformer for Head-and-Neck Tumor Segmentation in PET/CT Images
###### Abstract
Radiotherapy (RT) combined with cetuximab is the standard treatment for patients with inoperable head and neck cancers. Segmentation of head and neck (H&N) tumors is a prerequisite for radiotherapy planning but a time-consuming process. In recent years, deep convolutional neural networks have become the de facto standard for automated image segmentation. However, due to the expensive computational cost associated with enlarging the field of view in DCNNs, their ability to model long-range dependency is still limited, and this can result in sub-optimal segmentation performance for objects with background context spanning over long distances. On the other hand, Transformer models have demonstrated excellent capabilities in capturing such long-range information in several semantic segmentation tasks performed on medical images. Inspired by the recent success of Vision Transformers and advances in multi-modal image analysis, we propose a novel segmentation model, dubbed Cross-Modal Swin Transformer (SwinCross), with a cross-modal attention (CMA) module to incorporate cross-modal feature extraction at multiple resolutions. To validate the effectiveness of the proposed method, we performed experiments on the HECKTOR 2021 challenge dataset and compared it with nnU-Net[1] (the backbone of the top-5 methods in HECKTOR 2021) and other state-of-the-art transformer-based methods such as UNETR[2] and Swin UNETR[3]. The proposed method is experimentally shown to outperform these competing methods thanks to the CMA module's ability to capture better inter-modality complementary feature representations between PET and CT for the task of head-and-neck tumor segmentation.
Transformer, network architecture, tumor segmentation, PET/CT
## I Introduction
Head and Neck (H&N) cancers are among the most common cancers worldwide [4], accounting for about 4% of all cancers in the United States. FDG-PET and CT imaging are the gold standards for the initial staging and follow-up of H&N cancer. Quantitative image biomarkers from medical images such as radiomics have previously shown tremendous potential to optimize patient care, particularly for Head and Neck tumors [5]. However, radiomics analyses rely on an expensive and error-prone manual process of annotating the Volume of Interest (VOI) in 3D. The automatic segmentation of H&N tumors from PET/CT images could therefore enable the validation of radiomics models on very large cohorts and with optimal reproducibility. Besides, automatic segmentation algorithms could enable a faster clinical workflow. By focusing on metabolic and anatomical features respectively, PET and CT include complementary and synergistic information in the context of H&N primary tumor segmentation.
Recently Transformer, a neural network based on self-attention mechanisms to compute feature representations and global dependencies, has flourished in natural language processing and computer vision [6]. In computer vision, Transformer-based architectures have achieved remarkable success and have demonstrated superior performance on a variety of tasks, including visual recognition [7, 8], objection detection [9, 10], semantic segmentation [11, 12], etc. [8, 13-15]. The success of vision transformers in the computer vision field has inspired their use in medical imaging, where they have shown promising potential in various applications, such as classification [16-18] segmentation [2, 19, 20] and registration [21, 22]. Chen et al. first proposed the TransUNet [19] for medical image segmentation, which used a 12-layer ViT for the bottleneck features and followed the 2D UNet design and adopted the Transformer blocks in the middle structure. Later
that year, two improved versions of TransUNet, TransUNet+ [23] and Ds-TransUNet [24], were proposed and achieved better results for CT segmentation tasks. For 3D segmentation, where the computational cost of self-attention becomes very expensive, researchers have attempted to limit the use of transformer blocks, i.e., only using self-attention at the bottleneck between the encoder and decoder networks [25, 26], or have adopted a deformable mechanism which enables attention on a small set of key positions [27]. SegTran [28] proposed to leverage the learning tradeoff between larger context and localization accuracy by performing pairwise feature contextualization with squeeze-and-excitation blocks. More recently, state-of-the-art results have increasingly been achieved by networks with pre-trained transformer backbones. Pre-training techniques have become a new area of research in transformers, as the self-attention blocks commonly require pre-training data at a large scale to learn a more powerful backbone [29]. For example, self-supervised Swin UNETR (Tang et al., 2021) collects a large-scale set of CT images (5,000 subjects) for pre-training the Swin Transformer encoder, which yields significant improvements and state-of-the-art performance on BTCV [30] and the Medical Segmentation Decathlon (MSD) [31]. The self-supervised masked autoencoder (MAE) [32] investigates the MAE-based self-pre-training paradigm designed for Transformers, which forces the network to predict masked targets by collecting information from the context. Besides developing advanced architectures to better learn the data, researchers have also attempted to improve performance by providing additional data that is more specific to the task the network is given.
In representation learning, the advancement of multimodal learning has benefited numerous applications [33, 34]. The utilization of fused features from multiple modalities has largely improved performance in cross-media analysis tasks such as video classification [35], event detection [36, 37], and sentiment analysis [38, 39]. A characteristic that these works have demonstrated in common is that better features for one modality (e.g., audio) can be learned if multiple modalities (e.g., audio and video) are present at feature learning time. In [40], Ngiam et al. proposed a cross-modality (audio + video) feature learning scheme for shared representation learning and demonstrated superior visual speech classification performance compared to classifiers trained with audio-only or video-only data. Wang et al. proposed a DNN-based model combining canonical correlated autoencoder and autoencoder-based terms to fuse multiple views for unsupervised multi-view feature learning [41]. Following this trend, deep learning-based multimodal methods have also gained traction in the medical image analysis community due to their remarkable performance in many medical image analysis tasks, including classification [42, 43], diagnosis [44, 45], image retrieval [46], and segmentation [47-49]. Carneiro et al. [50] proposed the use of shared image features from unregistered views of the same region to improve classification performance. In [44], Xu et al. proposed to jointly learn the nonlinear correlations between image and other non-image modalities for cervical dysplasia diagnosis by leveraging multimodal information, which significantly outperformed methods using any single source of information alone. In [45], Suk et al. proposed to learn a joint feature representation from MRI and PET using a hierarchical DCNN for Alzheimer's Disease diagnosis.
Despite the impressive representation capacity of vision transformer models, current vision transformer-based segmentation models still suffer from inconsistent and incorrect dense predictions when fed with multi-modal input data. We suspect that the power of their self-attention mechanism is limited in extracting the complementary information existing in multi-modal data. To this end, we propose a dual-branch cross-attention Swin Transformer (SwinCross) to combine image patches from two different modalities at different scales to produce more complementary feature representations from the two modalities. Furthermore, to reduce computation, we develop a cross-modal attention (CMA) module based on cross attention and the shifted window self-attention mechanism from Swin Transformer [10]. To validate the effectiveness of the proposed method, we performed experiments on a public dataset and compared the proposed method with state-of-the-art methods such as UNETR, Swin UNETR, and nnU-Net. The proposed method is experimentally shown to be able to capture the inter-modality correlation between PET and CT for the task of head-and-neck tumor segmentation.
## 2 Related Work
### The Current State-of-the-art Methods for H&N Tumor Segmentation
The top-five performing teams in the HECKTOR 2021 challenge all used U-Net or its variants for the primary H&N tumor segmentation task [51]. In [52], Xie et al. used a patch-based 3D nnU-Net with Squeeze and Excitation normalization and a novel training scheme, where the learning rate is adjusted dynamically using polyLR [53]. The approach achieved a 5-fold average Dice score of 0.764 on the validation dataset, which ranked them first on the leaderboard for the tumor segmentation task. They trained five models in a five-fold cross-validation manner with random data augmentation including rotation, scaling, mirroring, Gaussian noise, and Gamma correction. The final test results were generated by ensembling five test predictions via probability averaging. In [54], An et al. proposed a coarse-to-fine framework using a cascade of three U-Nets. The first U-Net is used to coarsely segment the tumor and then select a bounding box. Then, the second U-Net performs a finer segmentation on the smaller region within the bounding box, which has been shown to often lead to more accurate segmentation [55]. Finally, the last U-Net takes as input the concatenation of PET, CT, and the previous segmentation to refine the predictions. The three U-Nets were trained with different objectives - the first one to optimize the recall and the other two to optimize the Dice score. The final results were obtained via majority voting on three different predictions: an ensemble of five nnU-Nets, an ensemble of three U-Nets with squeeze-and-excitation (SE) normalization, and the predictions from the proposed model. In [56], Lu et al. proposed a large ensemble learning model which consists of fourteen 3D U-Nets, including the eight models adopted in [57], winner of the HECKTOR 2020 challenge, five models trained with leave-one-center-out, and one model combining a priori and a
posteriori attention. The final ensembled prediction was generated by averaging all fourteen predictions and thresholding the resulting mask at 0.5. In [58], Yousefrizi et al. used a 3D nnU-Net with SE normalization trained in a leave-one-center-out manner with a combination of a "unified" focal loss and the Mumford-Shah loss, leveraging the advantages of distribution-, region-, and boundary-based loss functions. Lastly, Ren et al. [59] proposed a 3D nnU-Net with various PET normalization techniques, namely PET-clip and PET-sin. The former clips the Standardized Uptake Values (SUV) to the range [0,5] and the latter transforms the monotonic spatial SUV increase into onion rings via a sine transform of the SUV, which ranked them fifth on the leaderboard. Although CNN-based methods have outstanding representation capability, they lack the ability to model long-range dependencies due to the limited receptive fields of the convolution kernels. This inherent limitation of the receptive field size poses an obstacle to learning global semantic information, which is critical for dense prediction tasks like segmentation.
### Transformers and Multi-modal Learning
Transformers have been widely applied in the fields of Natural Language Processing [60-62] and Computer Vision [63-68] primarily due to its excellent capability to model long-range dependency. Besides achieving impressive performance in a variety of language and vision tasks, the Transformer model also provides an effective mechanism for multi-modal reasoning by taking different modality inputs as tokens for self-attention[69-78]. For example, Prakash et al.[74] proposed to use a Transformer to integrate image and LiDAR representations using attention. Going beyond language and vision, we propose to utilize a cross-modal attention Swin Transformer to fuse 3D PET and CT images at multiple resolutions for the segmentation of H&N tumors. We build the SwinCross architecture based on the shifted window block from Swin Transformer, which only computes self-attention within local regions, unlike conventional ViTs, which are more computationally expensive. Although Swin Transformer is unable to explicitly compute correspondences beyond its field of view, similar to how ConvNets operate to some extent, the shifted window mechanism still yields much larger kernels than most ConvNets [79].
## 3 Swincross
### Overall Architecture of SwinCross
In this work, we propose an architecture for 3D multi-modal segmentation with two main components: (1) a Cross-modal Swin Transformer for integrating information from multiple modalities (PET and CT), and (2) a cross-modal shifted window attention block for learning complementary information from the modalities. Our key idea is to exploit the cross-modal attention mechanism to incorporate the global context for PET and CT modalities given their complementary nature for the H&N tumor segmentation task. We illustrate the architecture of SwinCross in Fig. 1. The input image to the SwinCross model is a multi-channel 3D volume \(F^{in}\in R^{H\times W\times D\times M}\), with a dimension of \(H\times W\times D\times M\). The input image is first split channel-wise, forming a set of single-channel 3D images \(F_{mod,1},...,F_{mod,k}\in R^{H\times W\times D}\). Then, we split each single-channel image into small non-overlapped patches with a patch size of \(\frac{H}{H^{\prime}}\times\frac{W}{W^{\prime}}\times\frac{D}{D^{\prime}}\), which corresponds to a patch resolution of \(H^{\prime}\times W^{\prime}\times D^{\prime}\). Each 3D patch is projected into an embedding space with dimension \(C\) to form a tokenized sequence \(S_{mod,k}\in R^{N\times C}\), where \(N=H^{\prime}\times W^{\prime}\times D^{\prime}\) is the number of tokens in the sequence and each token is represented by a feature vector of dimensionality \(C\). The \(S_{mod,k}\) sequences are inputs to the encoder network.
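A minimal PyTorch sketch of this tokenisation step is shown below. The use of a strided 3D convolution as the patch-embedding layer matches the description above, but the module names and the \(96^{3}\) crop size are illustrative assumptions rather than the authors' released code.

```python
import torch
import torch.nn as nn

class PerModalityPatchEmbed3D(nn.Module):
    """Split a two-channel PET/CT volume channel-wise and tokenise each modality
    with its own 3D patch embedding (patch size 2, embedding dimension C = 48)."""

    def __init__(self, patch_size=2, embed_dim=48):
        super().__init__()
        self.proj_pet = nn.Conv3d(1, embed_dim, kernel_size=patch_size, stride=patch_size)
        self.proj_ct = nn.Conv3d(1, embed_dim, kernel_size=patch_size, stride=patch_size)

    def forward(self, x):                        # x: (B, 2, H, W, D)
        pet, ct = x[:, 0:1], x[:, 1:2]           # channel-wise split
        tok_pet = self.proj_pet(pet)             # (B, C, H/2, W/2, D/2)
        tok_ct = self.proj_ct(ct)
        # Flatten the spatial dimensions into token sequences S_{mod,k} of shape (B, N, C)
        return tok_pet.flatten(2).transpose(1, 2), tok_ct.flatten(2).transpose(1, 2)

x = torch.randn(1, 2, 96, 96, 96)                # an illustrative PET/CT crop
s_pet, s_ct = PerModalityPatchEmbed3D()(x)
print(s_pet.shape)                               # torch.Size([1, 110592, 48]); 48**3 tokens
```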
### Network Encoder
The encoder uses linear projections for computing a set of queries, keys and values (\(Q\), \(K\), and \(V\)) for each input sequence \(S_{mod,k}\).
\[Q_{mod,k}=S_{mod,k}M^{q},\quad K_{mod,k}=S_{mod,k}M^{k},\quad V_{mod,k}=S_{mod,k}M^{v} \tag{1}\]
Figure 1: Architecture of SwinCross. 3D PET and CT volumes are used as inputs to our Cross-modal attention Swin Transformer (SwinCross), which adopts multiple cross-modal attention (CMA) modules for the fusion of intermediate feature maps between the two modalities. To effectively combine patch tokens from both modalities at different scales, we develop a fusion method based on the CMA blocks, which exchange information between the two branches at multiple resolutions (\(\frac{1}{4}\), \(\frac{1}{8}\), \(\frac{1}{16}\), and \(\frac{1}{32}\) of the input resolution) throughout the two feature-extracting branches, resulting in 5 feature vectors (\(\frac{1}{2}\), \(\frac{1}{4}\), \(\frac{1}{8}\), \(\frac{1}{16}\), and \(\frac{1}{32}\) of the input resolution) from both modalities, which are combined via element-wise summation. The 5 feature vectors constitute fused representations of the CT and PET image at 5 different resolutions. These feature vectors are then processed with a ConvNet decoder which predicts the final segmentation map. We channel-wise concatenate the decoded feature vectors from a previous resolution to the feature vector at the current resolution and use the resulting feature vectors as input to the deconvolution block to produce the feature vector at the next resolution.
where \(M^{q}\in R^{D_{f}\times D_{q}}\), \(M^{k}\in R^{D_{f}\times D_{k}}\), and \(M^{v}\in R^{D_{f}\times D_{v}}\) are weight matrices. In the case of bimodal cross-attention, the scaled dot products between the \(Q\) and \(K\) of the two modalities are used to compute the attention weights, which then aggregate the values for each query of each modality,
\[A_{mod,1}=softmax\left(\frac{Q_{mod,1}K_{mod,2}^{T}}{\sqrt{D_{k}}}\right)V_{mod,1}, \tag{2}\]
\[A_{mod,2}=softmax\left(\frac{Q_{mod,2}K_{mod,1}^{T}}{\sqrt{D_{k}}}\right)V_{mod,2}, \tag{3}\]
in which \(Q_{mod,1}\), \(K_{mod,1}\), \(V_{mod,1}\), \(Q_{mod,2}\), \(K_{mod,2}\), \(V_{mod,2}\) denote queries, keys, and values from modality 1 and 2, respectively; \(D_{k}\) represents the size of the key and query.
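The sketch below implements Eqs. (1)-(3) for a single head with full (non-windowed) attention, purely to make the data flow concrete; in SwinCross the same computation is carried out inside shifted local windows and with multiple heads. Note that, following the equations as written, the attention weights come from cross-modal query-key products while the values come from the query's own modality, and both token sequences are assumed to have the same length (as for co-registered PET/CT volumes).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BimodalCrossAttention(nn.Module):
    """Single-head rendering of the cross-attention in Eqs. (1)-(3)."""

    def __init__(self, dim):
        super().__init__()
        self.q1, self.k1, self.v1 = (nn.Linear(dim, dim, bias=False) for _ in range(3))
        self.q2, self.k2, self.v2 = (nn.Linear(dim, dim, bias=False) for _ in range(3))
        self.scale = dim ** -0.5

    def forward(self, s1, s2):                                   # (B, N, C) token sequences
        q1, k1, v1 = self.q1(s1), self.k1(s1), self.v1(s1)       # Eq. (1) for modality 1
        q2, k2, v2 = self.q2(s2), self.k2(s2), self.v2(s2)       # Eq. (1) for modality 2
        a1 = F.softmax(q1 @ k2.transpose(-2, -1) * self.scale, dim=-1) @ v1   # Eq. (2)
        a2 = F.softmax(q2 @ k1.transpose(-2, -1) * self.scale, dim=-1) @ v2   # Eq. (3)
        return a1, a2

s_pet, s_ct = torch.randn(1, 216, 48), torch.randn(1, 216, 48)   # e.g. one 6x6x6 window
a_pet, a_ct = BimodalCrossAttention(48)(s_pet, s_ct)
print(a_pet.shape, a_ct.shape)                                   # (1, 216, 48) each
```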
As these are 3D tokens and the attention computation cost increases quadratically with the number of tokens, we adopted the shifted window mechanism for the cross-attention calculation. Specifically, we utilize windows of size \(M\times M\times M\) to evenly partition the patchified volume into \(\frac{H^{\prime}}{M}\times\frac{W^{\prime}}{M}\times\frac{D^{\prime}}{M}\) regions at a given layer \(l\) in the transformer encoder. In the two subsequent layers \(l\) and \(l+1\) of the encoder, the outputs are calculated as
\[\hat{A}_{mod,k}^{l}=\text{W-MSA}\left(\text{LN}\left(A_{mod,k}^{l-1}\right) \right)+A_{mod,k}^{l-1} \tag{4}\]
\[A_{mod,k}^{l}=\text{MLP}\left(\text{LN}\left(\hat{A}_{mod,k}^{l}\right) \right)+\hat{A}_{mod,k}^{l} \tag{5}\]
\[\hat{A}_{mod,k}^{l+1}=\text{SW-MSA}\left(\text{LN}\left(A_{mod,k}^{l}\right) \right)+A_{mod,k}^{l} \tag{6}\]
\[A_{mod,k}^{l+1}=\text{MLP}\left(\text{LN}\left(\hat{A}_{mod,k}^{l+1}\right)\right)+\hat{A}_{mod,k}^{l+1} \tag{7}\]
A 3D version of the cyclic-shifting [10] was implemented for efficient computation of the shifted window mechanism. SwinCross follows a standard four-stage structure [10] but has a cross-modality attention mechanism at each stage for the fusion of intermediate feature maps between both modalities. The fusion is applied at multiple resolutions \((\frac{H}{2}\times\frac{W}{2}\times\frac{D}{2}\times C,\frac{H}{4}\times\frac{W}{4}\times\frac{D}{4}\times 2C,\frac{H}{8}\times\frac{W}{8}\times\frac{D}{8}\times 4C,\frac{H}{16}\times\frac{W}{16}\times\frac{D}{16}\times 8C,\frac{H}{32}\times\frac{W}{32}\times\frac{D}{32}\times 16C)\) from each modality. The filtered feature maps from both modalities are summed element-wise and sent to the decoder, as indicated by the red plus signs in Fig. 1. At each stage, these feature maps are fed back into each of the individual modality branches via an element-wise summation with the down-sampled (via patch merging) input feature maps, as indicated by the green plus signs in Fig. 1.
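Put together, one encoder stage can be summarised by the sketch below, in which `cma_block` stands for the cross-modal shifted-window blocks of that stage (it is assumed to return per-modality filtered features of the same shape as its inputs); the feed-back additions and the decoder skip follow the green and red plus signs of Fig. 1 as described above.

```python
def cma_stage_fusion(f_pet, f_ct, cma_block):
    """One encoder stage of the dual-branch fusion.

    The filtered features are added back to their own branch (green plus signs
    in Fig. 1) and their element-wise sum is forwarded to the decoder skip
    connection (red plus signs in Fig. 1); patch merging would follow before
    the next stage.
    """
    a_pet, a_ct = cma_block(f_pet, f_ct)
    f_pet_next = f_pet + a_pet        # feed the filtered PET features back into the PET branch
    f_ct_next = f_ct + a_ct           # feed the filtered CT features back into the CT branch
    skip_to_decoder = a_pet + a_ct    # element-wise sum sent to the decoder
    return f_pet_next, f_ct_next, skip_to_decoder
```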
The SwinCross encoder has a patch size of \(2\times 2\times 2\) and a feature dimension of \(2\times 2\times 2\times 2=16\), taking into account the multi-modal PET/CT images with 2 channels. The size of the embedding space \(C\) is set to 48 in our encoder. Furthermore, the SwinCross encoder has 4 stages comprising [2, 4, 2, 2] cross-modal shifted window transformer blocks, respectively. Hence, the total number of layers in the encoder is L = 10. Before stage 1, each single-channel image is split into small non-overlapped patches by a 3D convolution layer with stride equal to 2 (the patch size) and output channels equal to C, resulting in \(\frac{H}{2}\times\frac{W}{2}\times\frac{D}{2}\times C\) 3D tokens. To follow the hierarchical structure proposed in [10], a patch merging layer is used on each modality branch to decrease the resolution of the feature representations by a factor of 2 at the beginning of each stage. In order to preserve fine details from the input image to the output segmentation, we send the original input multi-channel 3D volume \(F^{in}\in R^{H\times W\times D\times M}\) and its embedded version, together with the feature map outputs from the 4 stages, to the decoder, resulting in a total of 6 feature maps with dimensions of \(H\times W\times D\times M\), \(\frac{H}{2}\times\frac{W}{2}\times\frac{D}{2}\times C\), \(\frac{H}{4}\times\frac{W}{4}\times\frac{D}{4}\times 2C\), \(\frac{H}{8}\times\frac{W}{8}\times\frac{D}{8}\times 4C\), \(\frac{H}{16}\times\frac{W}{16}\times\frac{D}{16}\times 8C\), and \(\frac{H}{32}\times\frac{W}{32}\times\frac{D}{32}\times 16C\).
### Network Decoder
We adopted a ConvNet decoder as opposed to a Transformer decoder for the ease of cross-modal feature fusion and lower computational cost. SwinCross adopts a U-shaped network design in which the extracted feature representations of the encoder are used in the decoder via skip connections at each resolution. At each stage \(i\) (\(i\in\) [0,1,2,3,4,5]) of the encoder, the output feature representations are reshaped into size \(\frac{H}{2^{i}}\times\frac{W}{2^{i}}\times\frac{D}{2^{i}}\) and fed into a residual block comprising two 3x3x3 convolutional layers that are normalized by instance normalization layers. Subsequently, the resolution of the feature maps is increased by a factor of 2 using a deconvolutional layer, and the outputs are concatenated with the outputs of the previous stage. The concatenated features are then fed into another residual block as previously described. The final segmentation outputs are computed by using a 1x1x1 convolutional layer and a sigmoid activation function.
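A possible PyTorch rendering of one decoder stage is given below. The residual wiring and the choice of activation are assumptions; the text only specifies two 3x3x3 convolutions with instance normalisation, a deconvolution for 2x upsampling, channel-wise concatenation, and a final 1x1x1 convolution with a sigmoid.

```python
import torch
import torch.nn as nn

class ResBlock3D(nn.Module):
    """Two 3x3x3 convolutions with instance normalisation and a residual connection."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm3d(channels),
            nn.LeakyReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm3d(channels),
        )
        self.act = nn.LeakyReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + x)

class DecoderStage(nn.Module):
    """Refine the skip features, upsample the coarser decoder features by 2x,
    concatenate, and refine again."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.res_skip = ResBlock3D(skip_ch)
        self.up = nn.ConvTranspose3d(in_ch, out_ch, kernel_size=2, stride=2)
        self.res_out = ResBlock3D(out_ch + skip_ch)
        self.reduce = nn.Conv3d(out_ch + skip_ch, out_ch, kernel_size=1)

    def forward(self, x, skip):
        x = self.up(x)                                   # double the spatial resolution
        x = torch.cat([x, self.res_skip(skip)], dim=1)   # fuse with the encoder skip features
        return self.reduce(self.res_out(x))

# Final prediction head: a 1x1x1 convolution followed by a sigmoid
segmentation_head = nn.Sequential(nn.Conv3d(48, 1, kernel_size=1), nn.Sigmoid())
```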
## 4 Results and Discussion
### Ablation Studies on HECKTOR 2021 Dataset
In Table 1, we ablate the CMA module block, which only concerns the attention mechanism of the Swin Transformer, and we keep everything else the same as in Swin UNETR (e.g., embedding dimension, feature size, number of blocks in each stage, window size, and number of heads). We start from the channel-wise concatenated input, which consists of the volume images from both modalities. This multi-modal input already gives Swin UNETR a strong 5-fold average Dice score of 0.754\(\pm\)0.032. If we instead feed the images from the two modalities into two separate branches (as shown in Fig. 1) and use the CMA module blocks to fuse the learned features from each modality at each stage, the performance is improved to 0.769\(\pm\)0.026. The output from each CMA module block has the same shape as the input, and each filtered feature is added back to the corresponding modality's branch. At each stage, the sum of the
filtered features from the CMA module block is sent to the decoder.
### Comparison to the State-of-the-art Methods in Medical Image Segmentation
We have compared the performance of SwinCross against the current SOTA methods in medical image segmentation such as Swin UNETR, UNETR, and nnU-Net, using a 5-fold cross-validation split. Evaluation results (dual modality) across all five folds are presented in Table 2. The proposed SwinCross model achieved the highest 5-fold average Dice score of 0.769 among all the comparing methods. Note that SwinCross outperformed Swin UNETR across all 5 folds, which demonstrated its capability of learning multi-modal feature representations at multiple resolutions via the Cross-modal attention modules. These results are consistent with our previous findings [80], in which we showed that nnU-Net outperformed Swin UNETR for H&N tumor segmentation on two public datasets. With the CMA block and dual-branch fusion mechanism, SwinCross demonstrates a slightly better segmentation performance than nnU-Net, as measured by the 5-fold average Dice score. However, competitive performance is seen from nnU-Net, which again indicates that for small object segmentation, the improvement from modeling long-range dependency may be limited as a smaller effective field may be enough to capture all the foreground and background information of the small object such as a H&N tumor [80].
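For completeness, the Dice similarity coefficient used throughout these comparisons can be computed as follows (a generic definition, not tied to any particular framework).

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# e.g. dice_score(predicted_probability_map > 0.5, ground_truth_mask)
```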
Fig. 2 illustrates sample segmentation outputs of all the methods. For large tumors, Fig. 2a shows the benefit of modeling long-range dependency brought by the transformer-based models. SwinCross was the only network that could capture the tip of the tumor marked by the yellow crosshair, demonstrating the benefit of the CMA module blocks, which allow feature exchange between two modalities at multiple resolutions in the encoder. For smaller tumors, Fig. 2c shows that SwinCross was able to capture the fine edge of the tumor by incorporating complementary edge features from CT image, outperforming the other methods that used channel-wise concatenated input. The results confirmed the findings in [34] - performing fusion within the network is better than outside the network.
### Comparison to Single-modality Segmentation
As a comparison to using dual-modality input, we have computed the performance of the reference methods using single-modality input with the same 5-fold cross-validation split. Since the H&N tumor is primarily visible in PET, we conducted these single-modality experiments using the PET image only. Evaluation results (PET only) across all five folds are presented in Table 3. The Swin UNETR achieved the highest 5-fold average Dice score of 0.732 among the reference methods for single-modality input. Fig. 3 shows a box plot of the mean Dice score values of the five splits from all the methods using single-modality as well as dual-modality input. Overall, the plot demonstrates that networks with dual-modality (PET and CT) input significantly outperform the same networks with single-modality (PET) input for the task of H&N tumor segmentation.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
Dice Score & SwinCross (PET+CT) & Swin UNETR (PET+CT) & nnU-Net (PET+CT) & UNETR (PET+CT) \\ \hline
Fold0 & **0.717** & **0.715** & **0.714** & **0.702** \\
Fold1 & **0.788** & 0.781 & 0.781 & 0.716 \\
Fold2 & 0.800 & **0.752** & **0.803** & **0.727** \\
Fold3 & **0.779** & 0.772 & 0.777 & 0.762 \\
Fold4 & **0.761** & **0.748** & **0.761** & **0.708** \\
Average & **0.769** & 0.754 & 0.767 & 0.723 \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Five-fold cross-validation benchmarks in terms of mean Dice score values from all methods using PET and CT images.
Table 3: Five-fold cross-validation benchmarks in terms of mean Dice score values from the reference methods using PET image only.
Figure 2: From left to right: input PET image, CT image, inferred masks from nnU-Net, UNETR, Swin UNETR, SwinCross (proposed), and ground truth.
## V Conclusion
A Cross-modal Swin Transformer was introduced for the automatic delineation of head and neck tumors in PET and CT images. The proposed model has a cross-modality attention module that enables feature exchange between the two modalities at multiple resolutions. A ConvNet-based decoder is connected to the encoder via skip connections at different resolutions. We have validated the effectiveness of our proposed model by comparing it with state-of-the-art methods on the HECKTOR 2021 dataset. The proposed method is experimentally shown to outperform the other methods by capturing better inter-modality correlation between PET and CT for the task of head-and-neck tumor segmentation. The proposed method is generally applicable to other semantic segmentation tasks and other imaging modalities, such as SPECT/CT or MRI.
|
2304.07989 | IMCDCF: An Incremental Malware Detection Approach Using Hidden Markov
Models | The popularity of dynamic malware analysis has grown significantly, as it
enables analysts to observe the behavior of executing samples, thereby
enhancing malware detection and classification decisions. With the continuous
increase in new malware variants, there is an urgent need for an automated
malware analysis engine capable of accurately identifying malware samples. In
this paper, we provide a brief overview of malware detection and classification
methodologies. Moreover, we introduce a novel framework tailored for the
dynamic analysis environment, called the Incremental Malware Detection and
Classification Framework (IMDCF). IMDCF offers a comprehensive solution for
general-purpose malware detection and classification, achieving an accuracy
rate of 96.49% while maintaining a simple architecture. | Ran Liu, Charles Nicholas | 2023-04-17T04:53:40Z | http://arxiv.org/abs/2304.07989v3 | # Imodcf: An Incremental Malware Detection
###### Abstract
Dynamic malware analysis has become popular because it allows analysts to observe the behavior of running samples, facilitating improved decisions for malware detection and classification. With the increasing number of new malware, there is a growing need for an automated malware analysis engine that can accurately detect malware samples. In this paper, we briefly introduce malware detection and classification approaches. Furthermore, we present a novel framework designed specifically for the dynamic analysis setting, named the Incremental Malware Detection and Classification Framework (IMDCF). IMDCF provides an end-to-end solution for general-purpose malware detection and classification with 96.49% accuracy and a simple architecture.
## 1 Introduction
Malware has long been a prevalent security threat globally. Many approaches for malware detection have been proposed, and they can be categorized into the following groups: static malware analysis, dynamic malware analysis, and machine learning malware analysis [1][2]. Static malware analysis involves examining the executable binary or source code. However, this approach has its limitations, as source code may not always be accessible. In contrast, dynamic malware analysis investigates the malware as it runs, often in a sandbox environment, and is also known as behavior-based malware analysis [3][4].

In our work, we employ Hidden Markov Models (HMMs) to classify malware families, including a general "benign" family for benign specimens. In the IMDCF framework, each malware and benign family is used to train a set of corresponding HMMs. Each HMM within the same family is trained on a disjoint subset of the family dataset, covering at most 30% of the total features. A test sample is evaluated independently by the HMMs of the same family, and the mean of their scores is assigned to the sample, indicating the likelihood that the HMMs trained on that family would accept it. An input specimen is considered to belong to a specific malware or benign family if the computed probability is high. However, since high-probability data items may be produced by different malware or benign families, IMDCF forms longer data sequences by combining previously seen input data items. The newly formed longer data sequence is scored against each HMM family, and the sequence is assigned to the malware or benign family with the highest score. If no model accepts the given data sequence, it is considered part of a new malware family, potentially requiring the training of a new HMM for that family. IMDCF has several advantages:
* High accuracy: IMDCF achieves up to 96.49% classification accuracy when presented with mixed malware types.
* Incremental processing: IMDCF works with sequences of data, handling items as short as one opcode or system call while maintaining high accuracy.
* Versatility: IMDCF is designed for use in various settings, including mobile and IoT environments, and can be applied for purposes such as anomaly detection.
* Simplicity: IMDCF features a relatively simple structure, allowing for easy implementation and adoption.
The paper is organized as follows: related work is introduced in Section 2, and background on HMMs is given in Section 3. In Section 4, we describe our framework. In Section 5, we discuss our experiments. Future work is discussed in Section 6.
## 2 Related Work
Konrad Rieck et al. proposed an incremental malware analysis framework that incrementally extracts prototypes from test samples [5]. Their framework starts by running and observing malware activity in a sandbox, generating a report containing the running behavior. This report is then embedded into a higher-dimensional vector, with each dimension representing a similar behavior pattern. Machine learning techniques, such as KMM, are applied to the embedded reports to cluster and classify malware samples incrementally, for instance, on a daily basis. New prototype classes are subsequently added for further analysis. In comparison to Rieck's work, IMDCF operates on dynamic features by incrementally feeding the classification engine with running behaviors, enabling classification while the malware is still active in the sandbox.
Shraddha Suratkar et al. investigated the use of HMM in anomaly detection[6]. Machine learning techniques are initially applied to test samples for anomaly detection, with trained HMMs then utilized to predict the next most probable system calls during an attack. Jing Zhao et al. discussed the efficiency of using a Gaussian Mixture Hidden Markov Model for malware classification[7]. Iyer, Divya et al. employed HMM for credit card anomaly detection, using transaction categories as hidden states and transaction history as the observation sequence[8]. By computing the probability of a new transaction being accepted by a given HMM, they determine whether the transaction is fraudulent. IMDCF employs a similar approach to compute the probability of new features.
## 3 Hidden Markov Model
An HMM (Hidden Markov Model) [9] is a statistical model that is especially useful for modeling time series or sequential data where the underlying process generating the observations is assumed to be a Markov process with unobservable (hidden) states. It can be characterized as follows [10]:
* The hidden states of the Markov process \(S=\{s_{1},s_{2},...,s_{N}\}\), where \(N\) is the number of states of the model and \(q_{t}\) denotes the state at time instant \(t\).
* A set of possible observation symbols \(V=\{V_{1},V_{2},...,V_{M}\}\), where \(M\) is the number of distinct symbols per state.
* The state transition probability matrix \(A=[a_{ij}]\), where \(a_{ij}=P(q_{t+1}=s_{j}|q_{t}=s_{i})\) for \(1\leq i\leq N\), \(1\leq j\leq N\).
* The observation probability matrix \(B=[b_{j}(k)]\), where \(b_{j}(k)=P(V_{k}|s_{j})\) for \(1\leq j\leq N\), \(1\leq k\leq M\).
* The initial state distribution vector \(\pi=[\pi_{i}]\), where \(\pi_{i}=P(q_{1}=s_{i})\) for \(1\leq i\leq N\).
* The observation sequence \(O=\{O_{1},O_{2},...,O_{R}\}\), where \(R\) is the length of the observation sequence.
Assuming we have an HMM \(\lambda=(A,B,\pi)\) for a malware family, the probability that a given observation sequence \(O=O_{1},O_{2},...,O_{R}\) belongs to that family can be computed as follows:
\[P(O|\lambda)=\sum_{Q}P(O|Q,\lambda)P(Q|\lambda) \tag{1}\]
where \(Q=q_{1},q_{2},...,q_{R}\) is a state sequence, \(P(Q|\lambda)\) is the probability of \(Q\) given the HMM \(\lambda=(A,B,\pi)\), and \(P(O|Q,\lambda)\) is the probability that the observations \(O=O_{1},O_{2},...,O_{R}\) are generated by the state sequence \(Q\).
We have:
\[P(O|Q,\lambda)=\prod_{t=1}^{R}P(O_{t}|q_{t},\lambda)=b_{q_{1}}(O_{1})b_{q_{2 }}(O_{2})...b_{q_{R}}(O_{R}). \tag{2}\]
We further know:
\[P(Q|\lambda)=\pi_{q_{1}}a_{q_{1},q_{2}}a_{q_{2},q_{3}}...a_{q_{R-1}q_{R}} \tag{3}\]
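To make the computation concrete, the following minimal sketch (in Python, with a toy two-state HMM whose parameters are invented purely for illustration) evaluates \(P(O|\lambda)\) with the forward algorithm, which is equivalent to the sum over all state sequences above but far cheaper than enumerating them.

```python
import numpy as np

# Toy HMM (illustrative values only): 2 hidden states, 3 observation symbols
A = np.array([[0.7, 0.3],        # state transition matrix a_ij
              [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1],   # observation matrix b_j(k)
              [0.1, 0.3, 0.6]])
pi = np.array([0.6, 0.4])        # initial state distribution

O = [0, 2, 1]                    # an observation sequence (symbol indices)

# Forward algorithm: alpha_t(i) = P(O_1..O_t, q_t = s_i | lambda)
alpha = pi * B[:, O[0]]
for o in O[1:]:
    alpha = (alpha @ A) * B[:, o]

print("P(O|lambda) =", alpha.sum())
```

In practice, frameworks like IMDCF work with log-likelihoods of such probabilities for numerical stability, which is what the thresholding described in Section 4 refers to.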
We say an observation sequence \(O\) is accepted by the HMM \(\lambda\) if \(P(O|\lambda)\) exceeds a given threshold. As introduced in [8]: assume we have an observation sequence \(O=O_{1},O_{2},...,O_{t}\) at time \(t\) that \(\lambda\) has accepted, and a new observation \(O_{t+1}\) arrives at time \(t+1\). We remove \(O_{1}\) from \(O\) and append \(O_{t+1}\) to form a new sequence \(O^{\prime}=O_{2},O_{3},...,O_{t+1}\). The new observation \(O_{t+1}\) has a low likelihood of being accepted by the HMM if \(P(O^{\prime}|\lambda)-P(O|\lambda)\leq 0\); in that case, the sample is treated as belonging to a new malware family. Otherwise, we construct a longer sequence \(O^{\prime}=O_{t+1},O_{t+2},...,O_{t+2000}\) and compute its score against all HMMs. The final decision is made by a majority vote.
## 4 Incremental Malware Classification Framework Description
IMDCF consists of two processes: the behavior extraction process and the detection process. A schematic overview of IMDCF is shown in Figure 1.
IMDCF begins by gathering running behavior in the sandbox, such as opcodes and API calls. The opcodes are then sorted by frequency and the most frequent ones are encoded into 26 alphabetic symbols, with all other opcodes encoded as the special symbol '\(\bigcap\)'. For example, the sequence \(MOV->PUSH->ADD->SUB\) is encoded as \(A->B->C->D\).
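A minimal sketch of this encoding step is shown below. The 26-symbol alphabet follows the description above, while the helper names and the use of '*' as the catch-all symbol (the paper uses a dedicated special symbol) are our own choices for illustration.

```python
from collections import Counter
import string

def build_opcode_alphabet(training_opcodes, n_symbols=26):
    """Map the n_symbols most frequent opcodes to 'A'..'Z'."""
    most_common = [op for op, _ in Counter(training_opcodes).most_common(n_symbols)]
    return {op: string.ascii_uppercase[i] for i, op in enumerate(most_common)}

def encode(opcode_sequence, alphabet, catch_all="*"):
    """Encode opcodes via the alphabet; unseen/rare opcodes map to the catch-all."""
    return [alphabet.get(op, catch_all) for op in opcode_sequence]

# Tiny example: if MOV, PUSH, ADD, SUB are the most frequent opcodes in the
# training corpus, MOV -> PUSH -> ADD -> SUB becomes A -> B -> C -> D.
training = ["MOV", "MOV", "PUSH", "PUSH", "ADD", "SUB"] * 3 + ["ADD", "MOV"]
alphabet = build_opcode_alphabet(training)
print(encode(["MOV", "PUSH", "ADD", "SUB", "XCHG"], alphabet))
```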
A group of one-class HMMs is trained, each with either benign files or one of the malware families. At time \(t\), the sample generates an opcode \(O_{t}\). For each malware family and the benign family, an initial sequence of opcodes is maintained. For instance, family A has a sequence \(O_{a}=O_{1},O_{2},...,O_{t-1}\), while family B has a sequence \(O_{b}=O_{1}^{\prime},O_{2}^{\prime},...,O_{t-1}^{\prime}\). For each family, a new observation sequence is constructed by dropping \(O_{1}\) (or \(O_{1}^{\prime}\)) and appending \(O_{t}\) to the respective sequence. This process updates the observation sequences for each family, reflecting the most recent opcode information. Each HMM then measures the likelihood of accepting the newly generated sequence.
* If the log likelihood of sequence \(O\) is below the threshold for all HMMs, the sample generating \(O_{t}\) is considered to belong to a new malware family.
* If the log likelihood of sequence \(O\) exceeds the threshold of exactly one HMM, the sample generating \(O_{t}\) is considered to belong to that malware family.
* If the log likelihood of sequence \(O\) exceeds the threshold of more than one HMM, a longer sequence \(O=O_{t},O_{t+1},...\) is constructed and the above process is repeated until only one HMM remains (see the sketch after this list).
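A minimal sketch of this per-family scoring and decision loop is given below. It assumes hmmlearn's CategoricalHMM for discrete symbols (older hmmlearn releases expose this as MultinomialHMM), integer-encoded opcode symbols shared across all families, and per-family log-likelihood thresholds chosen on validation data; the function names and the highest-score fallback for ties are our own illustration, not the exact IMDCF implementation.

```python
import numpy as np
from hmmlearn import hmm

def train_family_hmm(symbol_sequences, n_states=2, n_iter=200):
    """Train one HMM on the integer-encoded opcode sequences of a single family."""
    X = np.concatenate(symbol_sequences).reshape(-1, 1)
    lengths = [len(s) for s in symbol_sequences]
    model = hmm.CategoricalHMM(n_components=n_states, n_iter=n_iter)
    model.fit(X, lengths)
    return model

def classify_window(window, family_models, thresholds):
    """Score the current sliding window against every family HMM and apply
    the decision rules described above."""
    X = np.asarray(window).reshape(-1, 1)
    scores = {name: m.score(X) for name, m in family_models.items()}
    accepted = [name for name, s in scores.items() if s > thresholds[name]]
    if not accepted:
        return "new_family"        # no HMM accepts the sequence
    if len(accepted) == 1:
        return accepted[0]         # exactly one HMM accepts it
    # More than one HMM accepts it: the caller should extend the window and
    # retry; as a fallback we return the family with the highest score.
    return max(accepted, key=lambda name: scores[name])
```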
## 5 Experiment
In our experiments, we analyze two malware families, Zeroaccess and Zbot, and a benign family. We collect a set of opcodes from benign software without grouping them into families, as our primary concern is whether IMDCF can detect malware among the samples. Zeroaccess is Windows malware primarily used for downloading other malware onto infected computers; we collect 1,308 Zeroaccess files. Zbot is Windows malware primarily used for stealing banking information; we collect 2,136 Zbot files. Each HMM is trained with a subset of 20 files from the respective malware family, with observation sequences of at least around 100,000 opcodes. To achieve the best training results, we set the iteration count to 200.
### Malware Detection Experiment
IMDCF achieves an accuracy score of 0.9091. As shown in Figure 2, IMDCF successfully detects malware while producing few false positives.
Figures 3 and 4 show the classification results using the Zeroaccess and Zbot models, with an accuracy score of 0.8384.
Figure 1: The Incremental Malware Classification Framework contains two processes. The malware is first executed and monitored in the sandbox, which constitutes the behavior extraction process. The classification process uses HMMs trained offline. Each HMM evaluates the incoming observation sequence and assigns a score to it.
The observation sequence length can affect the classification accuracy. As shown in Figure 5, the accuracy increases as the length increases.
Figure 4: Plot shows the IMDCF classification results using Zbot Model with Log likelihood per opcode (LLPO) as y-axis and sample index as x-axis.
Figure 3: Plot shows the IMDCF classification results using Zeroaccess Model with Log likelihood per opcode (LLPO) as y-axis and sample index as x-axis.
Figure 2: Plot shows the IMDCF detection results with Log likelihood per opcode (LLPO) as y-axis and sample index as x-axis.
## 6 Conclusion
Malware attracts a lot of attention from both industry and academia since it is a major threat to today's network security. In this work, we presented IMDCF, an incremental malware classification model for the dynamic analysis setting. For the malware detection task, IMDCF can operate on inputs as short as a single opcode with comparable accuracy. We are excited about the possibility of detecting malware using short sequences and plan to apply this model to other tasks.
|
2305.08993 | Survey of Malware Analysis through Control Flow Graph using Machine
Learning | Malware is a significant threat to the security of computer systems and
networks which requires sophisticated techniques to analyze the behavior and
functionality for detection. Traditional signature-based malware detection
methods have become ineffective in detecting new and unknown malware due to
their rapid evolution. One of the most promising techniques that can overcome
the limitations of signature-based detection is to use control flow graphs
(CFGs). CFGs leverage the structural information of a program to represent the
possible paths of execution as a graph, where nodes represent instructions and
edges represent control flow dependencies. Machine learning (ML) algorithms are
being used to extract these features from CFGs and classify them as malicious
or benign. In this survey, we aim to review some state-of-the-art methods for
malware detection through CFGs using ML, focusing on the different ways of
extracting, representing, and classifying. Specifically, we present a
comprehensive overview of different types of CFG features that have been used
as well as different ML algorithms that have been applied to CFG-based malware
detection. We provide an in-depth analysis of the challenges and limitations of
these approaches, as well as suggest potential solutions to address some open
problems and promising future directions for research in this field. | Shaswata Mitra, Stephen A. Torri, Sudip Mittal | 2023-05-15T20:18:27Z | http://arxiv.org/abs/2305.08993v2 | # Survey of Malware Analysis through Control Flow Graph using Machine Learning
###### Abstract
Malware is a significant threat to the security of computer systems and networks that requires sophisticated techniques to analyze its behavior and functionality for detection. Due to their rapid evolution, traditional signature-based malware detection methods have become ineffective in detecting new and unknown malware. One of the most promising techniques that can overcome the limitations of signature-based detection is to use control flow graphs (CFGs). CFGs leverage the structural information of a program to represent the possible paths of execution as a graph, where nodes represent instructions and edges represent control flow dependencies. Machine learning (ML) algorithms extract these features from CFGs and classify them as malicious or benign. In this survey, we aim to review some state-of-the-art methods for malware detection through CFGs using ML, focusing on the different ways of extracting, representing, and classifying. Specifically, we present a comprehensive overview of different types of CFG features used and different ML algorithms applied to CFG-based malware detection. We provide an in-depth analysis of the challenges and limitations of these approaches, as well as suggest potential solutions to address persisting open problems and promising future directions for research in this field.
Cybersecurity, Malware Analysis, Control Flow Graph, Machine Learning
## I Introduction
Malware is malicious software designed to damage or gain unauthorized access to computer systems. It is a significant threat, causing billions of dollars in damages every year. To maintain a secure cyberspace, malware detection and analysis have therefore become extremely important given the growing presence of malware and daily cyber-attacks. To detect and analyze malware, researchers use various static and dynamic techniques. Traditionally, malware detection is done using a static approach, where program hash signatures are compared to identify the presence of malware. Due to the recent development of numerous signature-spoofing techniques, the hash-comparison technique has seen reduced effectiveness. One alternative technique used to analyze malware is the control flow graph (CFG). CFG analysis is a powerful approach used in computer science to determine the behavior of programs. A CFG is a graphical representation of the execution flow of a program, which can be used to identify abnormal patterns and malicious behavior in the program. In cybersecurity, CFG analysis has become a critical static analysis technique for malware detection.
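As a toy illustration of this representation (not tied to any specific system surveyed here), a CFG can be held in a directed graph whose nodes are basic blocks and whose edges are control-flow transfers. The sketch below uses networkx with made-up block labels and instructions, and derives a few simple structural features of the kind a downstream classifier could consume.

```python
import networkx as nx

# Basic blocks as nodes, control-flow transfers as directed edges.
cfg = nx.DiGraph()
cfg.add_node("entry", instructions=["push ebp", "mov ebp, esp"])
cfg.add_node("check", instructions=["cmp eax, 0", "je exit"])
cfg.add_node("loop",  instructions=["dec eax", "jmp check"])
cfg.add_node("exit",  instructions=["pop ebp", "ret"])
cfg.add_edges_from([("entry", "check"), ("check", "loop"),
                    ("loop", "check"), ("check", "exit")])

# Simple structural features derived from the graph.
features = {
    "n_blocks": cfg.number_of_nodes(),
    "n_edges": cfg.number_of_edges(),
    "has_cycle": not nx.is_directed_acyclic_graph(cfg),
    "avg_out_degree": sum(d for _, d in cfg.out_degree()) / cfg.number_of_nodes(),
}
print(features)
```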
The importance of CFG analysis in malware detection lies in its ability to provide a meticulous view of program execution, allowing security analysts to understand the program's logic, identify potential vulnerabilities, and detect the presence of malicious behaviors, as it can reveal hidden or obfuscated code, and expose malicious behavior that would otherwise go undetected with other static approaches. This technique has been widely used in cybersecurity, and its effectiveness has been demonstrated in numerous studies. Due to the requirement of thorough analysis by professional security analysts and limited automation scopes, such an approach used to be cost- and time-prohibitive. However, recent advancements in machine learning (ML), deep learning (DL), and data analysis have enabled sophisticated and accurate analysis of CFGs for malware detection in an automated, timely, and cost-effective way.
In this study, we explore in detail the recent advancements of CFG analysis through ML in detail, its use cases in malware detection, persisting drawbacks, and further improvement areas. To fully grasp the topic, readers will require a basic understanding of programming concepts, cybersecurity fundamentals, and ML. By providing a comprehensive overview of CFG analysis in malware detection, this study will contribute to the ongoing efforts using ML to enhance cybersecurity and protect against emerging threats.
In section II, we discuss the research objective and criteria that shape the scope of the survey. In section III, we discuss the primary research findings that adhere to the rules set in section II. Finally, in section IV, we address the research questions set in II-A with drawbacks and future recommendations.
## II Research Method
Our study aims to address recent developments in cybersecurity to analyze and identify malware through CFG analysis using ML. It is not a full-fledged systematic literature review (SLR) set forth by Kitchenham et al. [1] covering all the developmental works. Instead, we aim to address some popular different ML frameworks that shaped the present research landscape. Furthermore, we also focus on providing a preliminary understanding of the topic with relevant literature.
Hence, we followed the procedures outlined in the following sub-sections to conduct our review. This approach allowed us to provide an overview of the current research landscape while acknowledging the limitations and potential further study areas.
### _Research Questions_
In this sub-section, we aim to address the background, objective, and outcome of conducting the survey.
* **Q1:** How can control flow graphs (CFG) be used to identify malware derivatives?
* **Q2:** What are the existing machine learning (ML) approaches to analyze malware using CFG?
* **Q3:** What are the drawbacks of existing ML approaches in processing CFG to classify malware?
### _Inclusion and Criteria_
In this sub-section, we aim to define the survey's scope by defining the development areas and the filtering process we followed.
We included a paper if:
* It contained information relevant to a research question.
* It was written in English.
We excluded a paper if:
* It did not address malware analysis by CFG using any ML approaches.
* It used any approach other than CFG (e.g., network behavior analysis) with ML to analyze malware.
* It used CFG to analyze any behavior (e.g., program characteristics) other than malware.
* It was greater than 10 years old.
### _Data Collection_
In this sub-section, we list our questions towards collecting information from each piece of literature, described in section III.
* What ML framework did the study use to process CFG?
* How does the practice affect malware analysis?
* What experimental evidence has been provided to support its developmental claims?
### _Data Analysis_
This sub-section lists the questions we asked about the data collected through the questions stated in Section II-C, in order to answer our research objectives in Section IV:
* How was the ML framework used and what impact does it have on malware analysis using CFG?
* How was the research conducted and analyzed? Was it conducted and analyzed reliably and validly?
* How does the study relate to other developmental studies? Is it consistent or contradictory?
* What claims did the study make on the development?
## III Results
This section summarises our findings and how the study shapes the current malware analysis landscape through CFG using ML. We present different types of malware analysis approaches over time with a summarized Table I and an appendix Table II. The literature is categorized based on malware platforms in chronological order.
### _Android Malwares_
According to Yahoo Finance news, Android is the most popular smartphone platform, covering 71.54% of all smartphones in 2022 [2]. In 2017, Symantec intercepted an average of 24,000 mobile malware samples per day [3], underscoring the need for accurate yet efficient malware classification techniques.
Therefore, to secure Android devices from malware, Atici et al. [4] proposed a malware analysis approach using CFG code block grammar. The approach generates a CFG from the Android Dalvik bytecode instructions. CFG code blocks are then represented as string literals, and using string encoding, input vectors are generated from the string literals for the ML algorithms to classify multi-class malware variants. Due to this straightforward approach, the classification data dimension is reduced to 30 different code chunks, making the model fast yet efficient. In experiments with the Android Malware Genome Project dataset [5], the model attained an overall classification accuracy of 96.26%. On top of that, it detected the DroidKungfu malware families with a detection rate of 99.15%; these families are difficult to detect with traditional approaches.
To consider Android application run-time behaviors together with data traffic, Xu et al. [6] proposed another approach (CDGDroid) based on the CFG and the data flow graph (DFG). The approach primarily consists of three phases. In the first phase, the CFG and DFG graphs are extracted. Then, the graphs are encoded for the model to learn classification. Lastly, the encoded matrix is fed to a deep convolutional neural network (CNN) model to learn and detect unseen malicious or normal applications. For extraction and encoding, the CFG and DFG are extracted from smali files using Dalvik executions. Both graphs are then combined via matrix addition or extension before encoding. Finally, the encoded matrix is fed to the CNN to learn the malware characteristics. Experiments were conducted using the Marvin [7], Drebin [8], VirusShare [9], and ContagioDump [10] datasets. According to the results, the proposed model achieved an accuracy of 99.8% on Marvin and 72.8% on the ContagioDump dataset in detecting unseen malware derivatives. In addition to the traditional experiments, a 10-fold cross-validation test was conducted to justify the effectiveness of malware detection using CFG and DFG with deep learning models.
To further improve the approach, Ma et al. [3] proposed an ensemble of ML models that considers Android API calls, their frequency, and their sequences. The authors constructed boolean, frequency, and chronological (time-series) datasets to develop three ML detection models. The diverse API calls and the different usage behaviors associated with different attack types are the primary reasons for considering these diversified API usage datasets. First, a CFG is constructed from decompiled Android source code. Then, the three API datasets (boolean, frequency, and chronological) are constructed from the CFG. After that, three ML models, one dedicated to each dataset type, are built for malware analysis and classification. The API
usage detection model utilizing the boolean dataset is built using a decision tree algorithm. A deep neural network model is used to learn and analyze the API frequency patterns. Lastly, the API sequence detection model is developed using long short-term memory networks (LSTM). Experiments on 10,010 benign samples collected from AndroZoo [11] and 10,683 malicious samples collected from the Android Malware Dataset [12, 13] confirmed its detection accuracy of 98.98%.
### _Industrial & IoT Malwares_
Sophisticated malware, such as metamorphic or polymorphic viruses, can effectively evade signature-based tools by using advanced obfuscation techniques, including mutation and dynamically executed content (DEC) methods. Using DEC, malware can dynamically produce new executable code at run-time, making it difficult to recognize [14]. According to AV-Test, the total number of malware applications by the end of March 2023 was estimated to be over 1,200 million and has increased more than tenfold during the last decade [15]. In parallel, IoT devices are especially prone to malware attacks because they are built on lightweight, optimized system architectures that prioritize efficiency. Due to the global adoption of IoT in home and industrial systems, malware developers pay significant attention to disrupting this landscape, resulting in heavy losses. Therefore, polymorphic malware needs to be addressed efficiently for both security and economic reasons.
To capture the dynamically executed content (DEC) behavior of malware, Nguyen et al. [14] proposed a CFG analysis approach using deep learning. DEC behavior refers to a code obfuscation technique that allows the malware to generate new code at run-time. These DEC behaviors are captured in the CFG using lazy binding, so all bindings between a memory location and the corresponding assembly instructions are mapped into the CFG. To extract such a CFG, the authors used BE-PUM [16], which applies on-the-fly push-down model generation from x86 binaries via dynamic symbolic execution in a breadth-first manner. The CFG adjacency matrix is then hashed to make it memory efficient by mapping the memory vector to a fixed string length. Finally, the hashed CFG adjacency matrix is fed to a CNN to learn the pattern and identify malware. Experiments were conducted on real-world samples collected from the VXHeaven [17], VirusShare [9], and MALICIA [18] datasets, containing 63,690 malware and 13,752 benign programs [19]. Evaluations using 10-fold cross-validation showed an average accuracy of over 92% on all data samples using a YOLO-based CNN.
To overcome the drawbacks of existing graph mining techniques, which commonly rely on handcrafted features and ensemble methods and are therefore inefficient and ineffective, Yan et al. [20] proposed a malware classification tool that utilizes the graph mining capabilities of the Deep Graph Convolution Network (DGCNN). Because a CFG is a heterogeneous data structure represented by tensors of variable size, a graph machine learning approach is required. First, the CFG is extracted using a commercial reverse engineering tool called IDA Pro [21]. Then, the CFG of unordered, variable size is converted to a fixed size and order. Finally, the CFG tensor is fed to the DGCNN, which learns to classify using the Adam optimizer. Experiments on the MSKCFG [22] and YANCFG [23, 24] datasets, each containing more than 10,000 samples, were conducted and evaluated with 5-fold cross-validation. According to the evaluation, the model achieved an average F1-score of 0.97 on the MSKCFG dataset and around 0.8 on the YANCFG dataset. Due to its generic approach, the model can be deployed in the cloud for real-time malware classification by a generic user.
### _Adversarial Malwares_
The variety and quantity of malware have increased rapidly, complicating classification based on fixed features. Also, the output of an ML model depends on the pattern of the training input samples. Therefore, any unseen pattern can go undetected and evade the anti-virus systems. Apart from numerous code obfuscation techniques, modern malware developers inject adversarial perturbations in the program to make it difficult to detect using standard malware classifiers.
To consider adversarial examples (AEs), Alasmary et al. [25] proposed a novel approach (Soteria) that can detect AEs for improved malware classification using deep learning. The model works in two phases: AEs detector and IoT malware classifier. First, the model starts by labeling the extracted CFG nodes by density and level-based labeling. Then, it uses a set of random walk algorithms proportional to the number of nodes in CFG for feature extraction. After that, uses the n-gram module to express and represent the behavior of the software process deeply. Finally, using an auto-encoder, the AEs are detected. The classifier works using an ensemble method of density-based and a level-based CNN classifier. Due to the loosely coupled system architecture and classifier reliance on the AE detector, the classifier does not require extracting features, optimizing the cost. Additionally, the classifier being a separate component allows the user to select a different classifier based on the scenario requirement. An experiment containing randomly selected 13798 malicious samples collected from CyberIOCs [26] was conducted, and feature validation was done using principal component analysis (PCA). Based on the evaluation, the AE detector was able to achieve an accuracy of 97.79% for detecting AEs, and 99.91% overall as a multi-class classifier.
Unlike code obfuscation techniques, packing is another approach to bypass malware detection tools. In this approach, the malware is unpacked while executing, leading to a different CFG at run-time; due to the deviating run-time CFG, the malware is able to bypass detection. To address this, Hua et al. [27] proposed an approach that strips the unpacked CFG down to a local CFG for final classification using DGCNN. First, the unpacked CFG is obtained by running the sample in a sandbox and stripped to a local CFG: since the unpacking function calls do not relate to any of the malware's local functions and vice-versa, the local CFG can be stripped out of the call adjacency matrix and used for classification. Finally, using DGCNN, the malicious local CFGs are learned for further classification.
Experiments covering 6 malware families [9], each with 100 samples, were conducted using 10-fold cross-validation and were able to demonstrate overall accuracy of 96.4%.
In order to extend the robustness of the ML models, Wu et al. [28] proposed a malware classifier (MCBG) with Graph Isomorphism Network using Jumping Knowledge (GIN-JK). Utilizing the extensive pattern-learning capabilities of modern ML frameworks, the model is capable of learning semantic information about the function nodes as well as the structural information of the entire program CFG. Therefore, adversarial attacks such as code obfuscation or packing can also be considered with the approach. To capture the semantics, it considers the basic program blocks as string literals. Then bidirectional encoder representations from transformers (BERT) is used to pre-train and convert the raw instructions into tokens using masked language model (MLM) and next block prediction (NBP) tasks to generate node-embeddings. Such a pre-trained embedding converts the CFG to attributed CFG (ACFG). Finally, using GIN-JK, the structural information of the program is learned for malware classification. The reason behind selecting GIN-JK with a pooling function for graph representation over traditional GNN is its proven capabilities to allow learning in a simplistic yet efficient manner. Experiments were conducted using Microsoft Malware Classification Challenge (BIG2015) dataset [22] with 10868 labeled malware samples of 9 malware families. With a 5-fold cross-validation test set, the model was able to achieve an accuracy of 99.53%.
Although diverse ML approaches utilize GNNs to learn CFG patterns for malware classification, none provide insights into the underlying behavior. To address explainability alongside detection, Herath et al. [29] proposed a novel DL approach that identifies the most-contributing CFG sub-graph alongside malware classification. Such a solution helps security analysts identify node importance and analyze the behavior in a white-box manner. The model works using two inter-connected feed-forward DNNs. The first component learns to score the node embeddings produced by a GNN, and the second component weights the original node embeddings with the produced scores to train a surrogate malware classifier. Because both models are jointly trained using a log-likelihood loss function, the first contributes to boosting the important node embeddings used by the second for malware classification. Due to its intended objective of learning CFG node importance, the model is capable of addressing adversarial evasion techniques such as XOR obfuscation, semantic NOP obfuscation, code manipulation, etc. Experiments with the YANCFG dataset [20] against three state-of-the-art models (GNNExplainer [30], SubgraphX [31], and PGExplainer [32]) justify the feasibility of the approach.
### _Windows Malwares_
Windows is the most globally used operating system, making it an important playground for malware developers to target general users. For example, recent Ransomware attacks incurred a heavy cost to general users. On top of that, anti-malware tools are vulnerable to even general code obfuscation techniques, and 90% of the signature-based approaches don't conduct other static analysis [4].
Due to the large customer base of Windows OS, Sun et al. [33] proposed a novel approach that analyzes CFG wave features and heat representations with ML models for malware classification. In this poster work, the authors tested eight ML models (SVM, DT, LR, RF, KNN, ANN, AdaBoost, and XGB) on CFG wave and heat representations to identify the best approach. Wave and heat representations were considered because they are compact and permutation-invariant. First, the CFG was extracted using the r2pipe API, and 250- to 1000-dimensional heat and wave spectral signatures were constructed using NetLSD [44]. NetLSD produces compact graph signatures using the Laplacian heat or wave kernel, inheriting the formal properties of the Laplacian spectrum. PCA was also applied to reduce the dimensionality of the 250 to 1000 features and make the models efficient.
Fig. 1: Malware detection life-cycle, from evasion techniques to CFG construction and CFG encoding to detection using ML approaches in chronological order. On the left, obfuscation techniques used by malware are listed, and on the right various ML detection approaches in chronological order are listed.
To conduct the experiment, the authors used 37,537 Windows malware samples [43] with a 70%/30% train-test split. According to the experiment, the wave features were the most accurate for classifying malware using RF, DT, and XGB, with a maximum accuracy of 95.9%, compared to the heat representations.
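A rough sketch of such a pipeline is shown below. It assumes the open-source netlsd package for the heat/wave signatures and scikit-learn for PCA and the random forest; the random placeholder graphs, labels, and all hyperparameters are our own illustration, not the authors' exact configuration.

```python
import networkx as nx
import numpy as np
import netlsd                                   # pip install netlsd
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

def cfg_signature(cfg, kind="wave"):
    """Compact, permutation-invariant spectral signature of a CFG."""
    return netlsd.wave(cfg) if kind == "wave" else netlsd.heat(cfg)

# Placeholder data: random graphs and labels stand in for real CFGs here.
rng = np.random.default_rng(0)
cfgs = [nx.gnp_random_graph(30, 0.15, seed=int(s)) for s in rng.integers(0, 1000, 60)]
labels = rng.integers(0, 2, 60)

X = np.stack([cfg_signature(g, "wave") for g in cfgs])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.3)

clf = make_pipeline(PCA(n_components=20), RandomForestClassifier(n_estimators=200))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```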
## IV Discussion
In this section, we discuss the answers to our research questions stated in Section II-A, based on the findings presented in Section III. Each subsection addresses one of our research questions along with the limitations. We also illustrate the evolving research landscape in Fig. 1.
### _Evolution of CFG Analysis_
To protect legitimate users from malicious threats, many approaches have been proposed over the last decade. For real-time protection, control flow graph analysis is one of the most prominent. A control flow graph reflects the intended program behavior as a graph, allowing malicious behavioral patterns to be captured.
In the initial studies, CFG code blocks were encoded as string literals for program behavior pattern identification using NLP techniques, as demonstrated by Atici et al. [4]. Such an approach neglects the underlying malicious behavioral logic when identifying patterns. Later, to make better use of the available graph data, the DFG was also considered alongside the CFG for better pattern identification, as demonstrated by Xu et al. [6]. With more data to process, the proportional increase in computation time also became a limiting factor. Furthermore, numerous ensemble models utilizing different CFGs were considered to attain increased accuracy [3]. However, since the program logic is the primary factor determining malicious behavior, such approaches were inefficient at detecting evasion techniques like NOP insertion, code transposition, etc. Therefore, to consider the instruction-node semantics, CFG encoding became necessary for detailed analysis, as demonstrated by Yan et al. [20]. Apart from typical code obfuscation techniques, malware developers adopted different adversarial methods such as packing; therefore, on top of the spectral features, the program's structural flow needed to be analyzed, as shown by Wu et al. [28]. Beyond traditional techniques, to keep up with the regularly evolving cyber threat landscape, researchers have also tested a few unconventional approaches for classification, such as encoding the CFG as an image [14] or analyzing CFG heat representations and wave features [33].
### _Evolving ML Approaches_
Traditionally, CFG analysis was conducted by security analysts for malware classification in security centers. With the evolution of ML, models are now trained to identify such patterns quickly and with impressive accuracy while
reducing analysis costs.
In the initial CFG analysis approaches, typical NLP techniques or basic ML algorithms like KNN, NB, regression trees, etc., were used to keep the models accurate yet efficient, as demonstrated in [4]. Due to limited algorithmic capabilities, these models were unable to address the underlying CFG logic. Hence, to capture the underlying instruction patterns, researchers used various CNN models, as demonstrated by Yan et al. [20]. However, the incompatibility of CNN models with large heterogeneous CFG structures led researchers to adopt GNN models for pattern analysis. As GNNs are a relatively new ML approach, numerous research efforts with various GNN models like GCN, GIN-JK, etc., are being carried out to target improvement areas; the studies by Wu et al. [28] and Herath et al. [29] are a few examples.
### _Limitations and Evolution_
With increasing computational power and memory, program dimensions are also increasing in proportion. Therefore, identifying detailed program behavioral patterns in a limited time with existing deep-learning approaches is becoming difficult. The three key areas that limit the process are:
* CFG Extraction
* Robust CFG Encoding
* Pattern Identification & Explainability
For devices with limited computational capabilities, such as IoT and Android devices, CFG extraction becomes a heavy load. In many cases, malware can execute malicious activities before being detected by process-heavy models. Therefore, for the IoT or Android domains, the development of an efficient CFG extractor would drastically reduce identification time.
Secondly, CFG encoding is the main step that enables a model to learn program patterns. The absence of a robust encoding mechanism is the leading cause of evasion by unseen malware family derivatives. Therefore, automated feature analysis is required to establish a robust encoding standard, allowing models to comprehend unseen patterns from large datasets.
Finally, pattern identification is the main task of an anti-malware model. Since the ML architecture is the main component behind pattern learning, the encoded information from the CFG cannot be utilized without a well-designed ML model. As of today, GNNs are the most suitable ML models, capable of learning CFG patterns better than other ML models, but they consume too much computation time to meet real-time applicability standards. Therefore, further research on GNNs is required to make the models robust yet equally efficient [45]. On top of that, malware detection models can be integrated with cybersecurity knowledge graphs (CKGs), which are also an active area of research, for in-depth pattern learning from diverse, authentic data sources [46]. Such an outcome would allow models to learn from a global data repository and is expected to reduce computation time significantly.
To address all these factors, the root cause of any model behavior must be identified and explained for a production-ready system, which is also an active area of research. In particular, ML models are prone to adversarial attacks, and there have been numerous studies [47] on attacking ML models for evasion. Hence, developing a secure ML model is as important as developing an efficient one; the study by Herath et al. [29] is a notable example among the few. More research is required in the explainability domain to enable analysis in a directed rather than a trial-and-error manner, instead of treating ML models as black boxes.
## Conclusion
We have surveyed some of the recent techniques for malware detection through CFGs using ML that have shown significant potential in addressing the limitations of traditional signature-based malware detection methods, highlighting the different aspects of feature extraction, representation, and classification. We have discussed the different types of CFG features that have been used, as well as the different ML algorithms that have been applied to CFG-based malware detection. We have also discussed several challenges and limitations of these methods, such as scalability, robustness, and interpretability, and proposed possible solutions and directions for future research. Specifically, we identified three critical open areas that need further extensive research:
* Effective and efficient CFG extraction.
* Robust and accurate ML algorithm to handle large data.
* Explainability of ML model behaviors for directed research and secure deployment.
Overall, we believe CFG-based malware detection using ML is a promising new approach that can provide a high level of accuracy and generality to overcome the limitations of signature-based detection approaches in securing computer systems and networks against the evolving threat of malware.
## Future Work
We hope this survey can serve as a helpful reference for researchers and practitioners interested in this field and inspire further developments and innovations in this area. As a future step, we plan to conduct experimental research to address the areas of improvement and persisting loopholes discussed above.
## Acknowledgment
The author would like to thank Amy Barton for constructive criticism of the manuscript.
|
2307.08132 | Heterogeneous graphs model spatial relationships between biological
entities for breast cancer diagnosis | The heterogeneity of breast cancer presents considerable challenges for its
early detection, prognosis, and treatment selection. Convolutional neural
networks often neglect the spatial relationships within histopathological
images, which can limit their accuracy. Graph neural networks (GNNs) offer a
promising solution by coding the spatial relationships within images. Prior
studies have investigated the modeling of histopathological images as cell and
tissue graphs, but they have not fully tapped into the potential of extracting
interrelationships between these biological entities. In this paper, we present
a novel approach using a heterogeneous GNN that captures the spatial and
hierarchical relations between cell and tissue graphs to enhance the extraction
of useful information from histopathological images. We also compare the
performance of a cross-attention-based network and a transformer architecture
for modeling the intricate relationships within tissue and cell graphs. Our
model demonstrates superior efficiency in terms of parameter count and achieves
higher accuracy compared to the transformer-based state-of-the-art approach on
three publicly available breast cancer datasets -- BRIGHT, BreakHis, and BACH. | Akhila Krishna K, Ravi Kant Gupta, Nikhil Cherian Kurian, Pranav Jeevan, Amit Sethi | 2023-07-16T19:06:29Z | http://arxiv.org/abs/2307.08132v1 | Heterogeneous graphs model spatial relationship between biological entities for breast cancer diagnosis
###### Abstract
The heterogeneity of breast cancer presents considerable challenges for its early detection, prognosis, and treatment selection. Convolutional neural networks often neglect the spatial relationships within histopathological images, which can limit their accuracy. Graph neural networks (GNNs) offer a promising solution by coding the spatial relationships within images. Prior studies have investigated the modeling of histopathological images as cell and tissue graphs, but they have not fully tapped into the potential of extracting interrelationships between these biological entities. In this paper, we present a novel approach using a heterogeneous GNN that captures the spatial and hierarchical relations between cell and tissue graphs to enhance the extraction of useful information from histopathological images. We also compare the performance of a cross-attention-based network and a transformer architecture for modeling the intricate relationships within tissue and cell graphs. Our model demonstrates superior efficiency in terms of parameter count and achieves higher accuracy compared to the transformer-based state-of-the-art approach on three publicly available breast cancer datasets - BRIGHT, BreakHis, and BACH.
Keywords:Graph Heterogeneous Histology Transformer
## 1 Introduction
Breast cancer is the most common cancer among women globally, and it continues to pose significant challenges for early diagnosis, prognosis, and treatment decisions, given its diverse molecular and clinical subtypes [29]. To address these challenges, recent advancements in machine learning techniques have paved the way for improved accuracy and personalized treatment strategies. However, most convolutional neural networks (CNNs) overlook the spatial relationships within histopathological images, treating them as regular grids of pixels [5, 14, 16, 19]. To overcome this limitation, graph neural networks (GNNs) have emerged as a promising alternative for classifying breast cancer.
GNNs are designed to handle complex graph structures, making them well-suited for tasks that involve analyzing relationships between entities [27, 26, 18].
By representing histopathological images as graphs, with image regions and structures as nodes and their spatial relationships as edges, GNNs can capture the inherent spatial context within the images [21, 24, 3, 10, 28]. This allows them to extract valuable information and patterns that may be missed by other machine-learning methods.
In this paper, we propose the use of heterogeneous graph convolutions between the cell and the tissue graphs to enhance the extraction of spatial relationships within histology images. This approach allows for the incorporation of diverse, multi-scale, and comprehensive features and spatial relationships by modeling both cell and tissue structures as well as their hierarchical relationships. Specifically, we introduce three features that have not been previously used in GNNs for histopathology images: (1) heterogeneous graph convolutions along with a transformer which outperforms [15] on BRIGHT [7], (2) an adaptive weighted aggregation technique with heterogeneous convolutions that outperforms [15] and is more efficient in terms of the number of parameters, and (3) the cross-attention modules of CrossVit [9] along with heterogeneous graph convolutions to extract the spatial relationship between cell and tissue graphs. We also analyzed different methods of k-nearest neighbor (kNN) graph building for cell and tissue graphs and found that edges based on node feature similarities performed better than other methods such as spatial closeness [22, 23] or dynamically learnable layers for edge creation [15]. Extensive experiments conducted on three publicly available breast cancer histology datasets - BRIGHT [7], BACH [4], and BreakHis [6] - demonstrate the gains of our method.
## 2 Related Work
The first work on entity-level analysis for histology was [28] in which a cell graph was created and a graph convolutional network was used for processing the graph. Other works, such as [22, 23, 15], have also focused on the entity-level analysis of histology images by constructing cell graphs and tissue graphs and by constructing more than two levels of subgraphs to capture the spatial relationship in the image. In [22], the authors introduced an LSTM-based feature aggregation technique to capture long-range dependencies within the graphs. By utilizing the LSTM, they aimed to capture the temporal dependencies between cells and tissues, enhancing the understanding of their relationships. In contrast, [15] took a different approach by constructing more than two levels of subgraphs and using a spatial hierarchical graph neural network. This network aimed to capture both long-range dependencies and the relationships between the cell and tissue graphs. To achieve this, the authors employed a transformer-based feature aggregation technique, leveraging the transformer's ability to capture complex patterns and dependencies in the data. However, it was observed that both of these approaches fell short of fully extracting the intricate relationship between tissue and cell graphs. Hence, there is a scope to explore alternative methods or combinations of techniques to better understand and leverage the spatial relationships within
histology images for improved analysis and classification of breast cancer and other diseases.
## 3 Methodology
### Tissue and cell graph extraction from histology images
The interrelationship between cells and tissue is investigated through the construction of cell and tissue graphs. The goal is to perform multi-level feature analysis by extracting relevant features from these entities. To construct the cell graph, a pre-trained model called HoverNet [12] is used for nuclei detection. The feature representation for each nucleus is extracted by processing patches around the nuclei with a ResNet34 encoder [14, 15]. Similarly, the tissue entities are identified using segmentation masks, which are obtained by applying the Simple Linear Iterative Clustering (SLIC) algorithm for superpixel segmentation [1, 15]. Once the tissue entities are identified, their feature representations are extracted in the same manner as the cell entities. We then utilize kNN on the distances between node feature representations to connect each node to its \(k\) most similar nodes, forming the edges within the cell graph and within the tissue graph. In our experiments, we used \(k=5\) for all models and datasets. The edges between cells and tissues are formed using the spatial positions of cells and tissues: we treat a cell node \(C_{i}\) and a tissue node \(T_{j}\) as connected if \(C_{i}\in T_{j}\).
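A minimal sketch of this graph-building step is given below, assuming node features have already been extracted as tensors; the helper names, the use of Euclidean distance for the feature-space kNN, and the random placeholder inputs are our own choices for illustration.

```python
import torch

def knn_edges(features: torch.Tensor, k: int = 5) -> torch.Tensor:
    """Connect every node to its k most similar nodes in feature space.
    Returns an edge_index of shape [2, N * k]."""
    dist = torch.cdist(features, features)            # pairwise distances
    dist.fill_diagonal_(float("inf"))                  # exclude self-loops
    neighbors = dist.topk(k, largest=False).indices    # [N, k]
    src = torch.arange(features.size(0)).repeat_interleave(k)
    return torch.stack([src, neighbors.reshape(-1)])

def cell_to_tissue_edges(cell_superpixel: torch.Tensor) -> torch.Tensor:
    """Connect each cell node to the tissue (superpixel) node containing it."""
    cells = torch.arange(cell_superpixel.size(0))
    return torch.stack([cells, cell_superpixel])

cell_edge_index = knn_edges(torch.randn(200, 512), k=5)      # cell graph edges
tissue_edge_index = knn_edges(torch.randn(40, 512), k=5)     # tissue graph edges
c2t_edge_index = cell_to_tissue_edges(torch.randint(0, 40, (200,)))
```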
### Heterogeneous Graph Convolution
The interaction between the cell and tissue graphs has to be captured effectively for better analysis, which is done using heterogeneous graph convolutions.
Figure 1: Cell and tissue graph formation
We define a heterogeneous graph \(H\) as the union of cell-to-cell, tissue-to-tissue, and cell-to-tissue relations (defined by edges), together with their node features:
\[H=\{C,T,E_{cell->cell},E_{tissue->tissue},E_{cell->tissue}\} \tag{1}\]
where \(C\) and \(T\) are the features of the node in the cell graph and the tissue graph and \(E_{A->B}\) is the list of edges between the elements in sets A and B.
We utilize Graph Sage convolutions [13] to transmit messages individually from the source node to the target node for each relation. When multiple relations point to the same target node, the outcomes are combined using a specified aggregation method; this is how the heterogeneous graph convolution is implemented. We can formulate it as follows: let \(z_{jR_{i}}\) be the output vector representation of node \(j\) produced by the Graph Sage convolution over the nodes defined by relation \(R_{i}\), where \(R_{i}\in\{cell->cell,cell->tissue,tissue->tissue\}\); then the final vector representation of node \(j\), \(z_{j}\), is
\[z_{j}=Aggregator(\{z_{jR_{u}},\forall u\in U\}) \tag{2}\]
where \(U\) is the set of relations directed to node \(j\). This is implemented using the PyTorch Geometric library.
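A minimal sketch of this heterogeneous convolution in PyTorch Geometric is shown below; the hidden size, the "sum" aggregator, and the random placeholder node features and edges are assumptions for illustration rather than the exact training configuration (in practice the edges come from the kNN and spatial-assignment steps of Section 3.1).

```python
import torch
from torch_geometric.data import HeteroData
from torch_geometric.nn import HeteroConv, SAGEConv

data = HeteroData()
data["cell"].x = torch.randn(200, 512)       # cell node features
data["tissue"].x = torch.randn(40, 512)      # tissue node features
data["cell", "to", "cell"].edge_index = torch.randint(0, 200, (2, 1000))
data["tissue", "to", "tissue"].edge_index = torch.randint(0, 40, (2, 200))
data["cell", "to", "tissue"].edge_index = torch.stack(
    [torch.arange(200), torch.randint(0, 40, (200,))])

# One GraphSAGE convolution per relation; outputs for a shared target node
# type are combined with the specified aggregator.
conv = HeteroConv({
    ("cell", "to", "cell"): SAGEConv((-1, -1), 256),
    ("tissue", "to", "tissue"): SAGEConv((-1, -1), 256),
    ("cell", "to", "tissue"): SAGEConv((-1, -1), 256),
}, aggr="sum")

z_dict = conv(data.x_dict, data.edge_index_dict)
print(z_dict["cell"].shape, z_dict["tissue"].shape)   # [200, 256], [40, 256]
```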
### Adaptive weighted aggregation for multi-level feature fusion
Both low-level and high-level features are important in classification tasks, so we use skip connections to extract them and an adaptive weighted aggregation layer for feature fusion. The adaptive weighted aggregation layer \(f\) is defined as follows:
\[f(w_{1},w_{2},...w_{n},F_{1},F_{2}....F_{n})=w_{1}F_{1}+w_{2}F_{2}+w_{3}F_{3}+.......w_{n}F_{n}, \tag{3}\]
where \(w_{i}\) is a trainable scalar weight (dimension \(1\times 1\)) and \(F_{i}\) is a feature vector of dimension \(256\times 1\), for \(i\in\{1,2,...,n\}\).
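A minimal sketch of this layer is given below; the number of feature levels, the initialization of the weights, and the batch dimension in the usage example are our own assumptions.

```python
import torch
import torch.nn as nn

class AdaptiveWeightedAggregation(nn.Module):
    """Learnable scalar-weighted sum of multi-level feature vectors:
    f(w, F) = w_1 F_1 + w_2 F_2 + ... + w_n F_n."""
    def __init__(self, n_levels: int):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(n_levels))  # one scalar per level

    def forward(self, features):            # list of n tensors, each [..., 256]
        stacked = torch.stack(features, dim=0)              # [n, ..., 256]
        w = self.weights.view(-1, *([1] * (stacked.dim() - 1)))
        return (w * stacked).sum(dim=0)

awa = AdaptiveWeightedAggregation(n_levels=3)
fused = awa([torch.randn(8, 256) for _ in range(3)])        # [8, 256]
```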
### Cross-attention feature fusion and attentive interaction using transformers
The transformer encoder of [25], which uses self-attention, is used to extract long-range dependencies between the nodes in the graph. Formally, let the transformer encoder be \(T\); the input to \(T\) is \(Concat(N_{c},N_{t})\), where \(N_{c}\) and \(N_{t}\) are the node features of the cell and tissue graphs, respectively. The output of the transformer encoder goes to MLP layers to produce the final output. \(N_{c}\) and \(N_{t}\) are both of dimension \(C\times 1\times 256\), where \(C\) is the number of nodes in the cell graph (the tissue node features are zero-padded so that their dimension matches that of the cell graph).
The cross-attention modules of [9] are used to extract the interaction between the cell graph and the tissue graph, which is then fed into multi-layer perceptron layers to produce the required output.
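A minimal sketch of the attentive-interaction variant is shown below, using PyTorch's built-in transformer encoder over the concatenated cell and tissue tokens; the number of layers and heads, the MLP head, and the mean-pooling readout are our assumptions rather than the exact configuration used in the experiments.

```python
import torch
import torch.nn as nn

class GraphTokenTransformer(nn.Module):
    """Self-attention over concatenated cell and tissue node embeddings,
    followed by an MLP classification head."""
    def __init__(self, dim=256, n_heads=4, n_layers=2, n_classes=6):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(),
                                  nn.Linear(128, n_classes))

    def forward(self, cell_feats, tissue_feats):
        # cell_feats: [B, N_c, 256]; tissue_feats zero-padded to [B, N_c, 256]
        tokens = torch.cat([cell_feats, tissue_feats], dim=1)  # [B, 2*N_c, 256]
        encoded = self.encoder(tokens)
        return self.head(encoded.mean(dim=1))                  # mean-pool readout

model = GraphTokenTransformer()
logits = model(torch.randn(2, 50, 256), torch.randn(2, 50, 256))  # [2, 6]
```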
Figure 2: Proposed architectural variants of Heterogeneous Graph Neural Network (HG): (a) HG with adaptive weighted aggregation of multi-level features, (b) HG with CrossVit for hierarchical feature fusion, and (c) HG with transformer for attentive interaction.
## 4 Experiments and Results
### Clinical Datasets and evaluation methods
For assessing the utility of graph-based deep learning methods, breast cancer histology datasets form an ideal test bed due to the diversity of breast cancer subtypes. We used three datasets in this work. The BRIGHT dataset is a breast cancer subtyping dataset released as part of the Breast tumor Image classification on Gigapixel Histopathological images challenge [7]. The breast cancer histology images (BACH) dataset [4] has four disease states: normal, benign, ductal carcinoma in situ (DCIS), and invasive ductal carcinoma (IDC). The BreakHis dataset [6] has malignant and non-malignant classes.
#### 4.1.1 BRIGHT Dataset
The BRIGHT dataset contains 4025 hematoxylin & eosin (H&E) stained breast cancer histology images scanned at 0.25 \(\mu m\)/pixel resolution. It is classified into 6 classes: Pathological Benign (PB), Usual Ductal Hyperplasia (UDH), Flat Epithelial Atypia (FEA), Atypical Ductal Hyperplasia (ADH), Ductal Carcinoma In Situ (DCIS), and Invasive Carcinoma (IC). These are further grouped into 3: cancerous tumors (DCIS and IC), non-cancerous tumors (PB and UDH), and pre-cancerous tumors (FEA and ADH) [7].
#### 4.1.2 BACH Dataset
The BACH dataset contains 400 hematoxylin and eosin (H&E) stained breast cancer histology images with pixel scale \(0.42\mu m\times 0.42\mu m\). It is classified into 4 classes: normal, benign, in situ carcinoma, and invasive carcinoma [4].
#### 4.1.3 BreakHis dataset
The BreakHis dataset contains 9109 microscopic images of breast cancer at a 400X magnification factor. It contains 2480 benign and 5429 malignant tissue images [6].
### Experimental Setup
All the experiments are implemented in PyTorch using the PyTorch Geometric and histocartography libraries [17]. The F-score was used as the evaluation metric. We compared our three proposed models with each other and with the state-of-the-art model [15]. HoverNet was used for nuclei detection in all models and for all datasets. A feature vector of length 512 for each nucleus was extracted from patches of size 72, 72, and 48 around each nucleus using a ResNet34 model [14] for BRIGHT, BACH, and BreakHis, respectively. We also compared three methods of edge formation for the graphs: dynamic structural learning [15], the kNN algorithm on the spatial distances between cell entities and between tissue entities, and the kNN algorithm on the node feature representations of cell and tissue entities. The batch size was set to 32. A learning rate of \(1e^{-4}\) and the Adam optimizer with a weight decay of \(5e^{-4}\) were used while training the models.
### Comparison with the state-of-the-art methods
The results for six-class classification and three-class classification on the BRIGHT dataset are listed in Table 1 and 2 respectively. The four-class classification on the BACH dataset is shown in Table 3 and the 2-class classification on the BreakHis dataset is shown in Table 4.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Model & DCIS & IC & PB & UDH & ADH & FEA & Weighted & Number of parameters \\ & & & & & & F-score & (in million) \\ \hline SHNN & 72.9 & 87.3 & 74.7 & 51.8 & 45.0 & 73.9 & 69.6 & 2.2 \\ \hline HG + AWA & 72.8 & 88.0 & 72.7 & **54.1** & 48.9 & 74.2 & 70.1 & **1.5** \\ \hline HG + CrossVit & 77.9 & 88.0 & 71.1 & 53 & **54** & 73 & 71.3 & 2.5 \\ \hline HG+transformer & **84** & **92** & **79** & 54 & 49 & **78.5** & **75.3** & 2.0 \\ \hline \end{tabular}
\end{table}
Table 1: Weighted F-score (%) of HG compared with other methods on BRIGHT dataset for six-class classification.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Model & 1 & 2 & 3 & 4 & \(\mu\pm\sigma\) & Cancerous & Non-cancerous & Pre-cancerous \\ \hline SHNN & 78 & 78.11 & 78.93 & 79.21 & 78.56\(\pm\)0.56 & 85.21\(\pm\)2.07 & 78.08\(\pm\)0.81 & 72.11\(\pm\)0.82 \\ \hline HG + CrossVit & 78.75 & 78.28 & 77.2 & 74.29 & 77.13\(\pm\)2.00 & 83.93\(\pm\)1.12 & 76.29\(\pm\)3.08 & 70.38\(\pm\)2.35 \\ \hline HG + AWA & 80.24 & 77.98 & 80.82 & **80.55** & 79.89\(\pm\)1.30 & 87.08\(\pm\)1.49 & 78.09\(\pm\)2.64 & 73.75\(\pm\)1.36 \\ \hline HG+transformer & **83.45** & **82.59** & **81.86** & 80.36 & **82.06\(\pm\)1.31** & **88.78\(\pm\)1.96** & **80.61\(\pm\)1.33** & **76.08 \(\pm\)1.36** \\ \hline \end{tabular}
\end{table}
Table 2: Weighted F-score (%) of HG compared with other methods on BRIGHT dataset across four test folds for three-class classification.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Model & Normal & Benign & InSitu & Invasive & Weighted F-score \\ \hline SHNN & **90** & 80 & 85.10 & 84.21 & 84.83 \\ \hline HG+transformer & 88.88 & **86.48** & **85.20** & **90.10** & **86.62** \\ \hline \end{tabular}
\end{table}
Table 3: Weighted F-score (%) of HG compared with other methods on BACH dataset.
### Ablation Studies
We performed ablation studies on different methods of graph formation and on removing or adding different layers of the model. The results are listed in Table 5. The spatial hierarchical neural network (SHNN) of [15] consists of GraphSAGE convolutions on homogeneous graphs followed by a transformer. It can be observed that adding a transformer to the GraphSAGE convolutions in the SHNN increased the F-score by 4%, whereas adding a transformer to the heterogeneous convolutions (HG) increased the F-score by nearly 7%. We conclude that the heterogeneous convolutions increase the interaction between cell and tissue features, so that their output provides better features for the transformer to work on than that of plain GraphSAGE convolutions. It is also worth noting that our model uses just two heterogeneous convolutional layers, as opposed to the six GraphSAGE convolutional layers in [15].
We compared three different methods of forming the edges of the cell and tissue graphs. The results are listed in Table 6. Our graph-formation method, which applies the kNN algorithm to node features, performed better than the other methods.
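To make the edge-formation comparison concrete, the sketch below shows the two kNN variants using PyTorch Geometric's `knn_graph`; the value of k and the random placeholder tensors are illustrative assumptions rather than settings taken from the paper.

```python
# Sketch of kNN edge formation: neighbors in physical space ("distance")
# versus neighbors in feature space ("features").
import torch
from torch_geometric.nn import knn_graph

def build_edges(pos, feats, k=5, on="features"):
    # pos: [N, 2] entity centroids; feats: [N, 512] node feature vectors
    source = pos if on == "distance" else feats
    return knn_graph(source, k=k, loop=False)   # edge_index of shape [2, N*k]

pos = torch.rand(100, 2)       # placeholder centroids
feats = torch.rand(100, 512)   # placeholder node features
edges_spatial = build_edges(pos, feats, on="distance")
edges_feature = build_edges(pos, feats, on="features")
```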
## 5 Conclusion
We introduced three novel architectures for histopathological image classification based on heterogeneous graph neural networks. Our research highlights the capability of heterogeneous graph convolutions in capturing the spatial as well as
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Model & Malignant & Benign & Weighted F-score \\ \hline SHNN & 89.16 & 78.17 & 85.46 \\ \hline HG+transformer & **95.62** & **91.70** & **94.30** \\ \hline \end{tabular}
\end{table}
Table 4: Weighted F-score (%) of HG compared with other methods on BreakHis dataset.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Model & DCIS & IC & PB & UDH & ADH & FEA & Weighted F-score \\ \hline Graph Sage Conv & 72.1 & 84.8 & 70 & 39 & 44.2 & 68.5 & 65.3 \\ \hline SHNN & 72.9 & 87.3 & 74.7 & 51.8 & 45 & 73.9 & 69.6 \\ \hline HG & 76.5 & 85.6 & 72.5 & 49.1 & 40.25 & 72 & 68.2 \\ \hline HG+transformer & 84 & 92 & 79 & 54 & 49 & 78.5 & 75.3 \\ \hline \end{tabular}
\end{table}
Table 5: Weighted F-score (%) of different graph-based methods compared on BRIGHT dataset.
hierarchical relationships within the images, which allows our models to surpass the performance of existing methods for histopathology image analysis. This emphasizes the significance of considering the relationship between cells and the surrounding tissue area for accurate cancer classification. Additionally, we established that the self-attention-based model outperforms the cross-attention-based model. We attribute this observation to the ability of self-attention to extract long-range dependencies within the graphs. Furthermore, we demonstrated the importance of analyzing the relationship between similar parts of the histopathology image, showing that constructing the graphs based on the similarity between node features yields superior results compared to other approaches, such as those based on spatial distance. In future work, similarity and spatial distance could be combined for graph edge formation. It would also be worthwhile to explore novel techniques for enhancing graph convolutions so that they extract long-range dependencies within the graph more effectively. Additionally, there is potential to develop innovative graph pooling methods that minimize information loss. These research directions would contribute to further advances in histopathological image classification using heterogeneous graph neural networks.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Model & DCIS & IC & PB & UDH & ADH & FEA & Weighted F-score \\ \hline DSL & 81.7 & 90.9 & 78.0 & 55.0 & 48.3 & 75.2 & 73.6 \\ \hline KNN on distance & 81.1 & 90.4 & 77.8 & 56.8 & 57.7 & 75.9 & 74.9 \\ \hline KNN on node features & 84.0 & 92 & 79 & 54 & 49 & 78.5 & 75.3 \\ \hline \end{tabular}
\end{table}
Table 6: Weighted F-score (%) of different edge-formation methods compared on BRIGHT dataset. |
2308.06942 | Approximating Human-Like Few-shot Learning with GPT-based Compression | In this work, we conceptualize the learning process as information
compression. We seek to equip generative pre-trained models with human-like
learning capabilities that enable data compression during inference. We present
a novel approach that utilizes the Generative Pre-trained Transformer (GPT) to
approximate Kolmogorov complexity, with the aim of estimating the optimal
Information Distance for few-shot learning. We first propose using GPT as a
prior for lossless text compression, achieving a noteworthy compression ratio.
Experiment with LLAMA2-7B backbone achieves a compression ratio of 15.5 on
enwik9. We justify the pre-training objective of GPT models by demonstrating
its equivalence to the compression length, and, consequently, its ability to
approximate the information distance for texts. Leveraging the approximated
information distance, our method allows the direct application of GPT models in
quantitative text similarity measurements. Experiment results show that our
method overall achieves superior performance compared to embedding and prompt
baselines on challenging NLP tasks, including semantic similarity, zero and
one-shot text classification, and zero-shot text ranking. | Cynthia Huang, Yuqing Xie, Zhiying Jiang, Jimmy Lin, Ming Li | 2023-08-14T05:22:33Z | http://arxiv.org/abs/2308.06942v1 | # Approximating Human-Like Few-shot Learning with GPT-based Compression
###### Abstract
In this work, we conceptualize the learning process as information compression. We seek to equip generative pre-trained models with human-like learning capabilities that enable data compression during inference. We present a novel approach that utilizes the Generative Pre-trained Transformer (GPT) to approximate Kolmogorov complexity, with the aim of estimating the optimal Information Distance for few-shot learning. We first propose using GPT as a prior for lossless text compression, achieving a noteworthy compression ratio. Experiment with LLAMA2-7B backbone achieves a compression ratio of 15.5 on enwik9. We justify the pre-training objective of GPT models by demonstrating its equivalence to the compression length, and, consequently, its ability to approximate the information distance for texts. Leveraging the approximated information distance, our method allows the direct application of GPT models in quantitative text similarity measurements. Experiment results show that our method overall achieves superior performance compared to embedding and prompt baselines on challenging NLP tasks, including semantic similarity, zero and one-shot text classification, and zero-shot text ranking.
## 1 Introduction
Large labeled datasets are often scarce in the real world, where annotation is expensive and time-consuming. This has prompted the development of few-shot learning, where the model is learned using only a few annotated samples [17]. One approach to the few-shot scenario is to utilize pre-trained models like Generative Pre-trained Transformers (GPTs) [45; 46; 5; 41] with in-context learning [5; 75], fine-tuning [36], or a combination of the two [2]. However, in-context learning requires heavy engineering to achieve high accuracy [37], and its ability to generalize to different tasks is constrained by the input size and the need for precise formatting. Fine-tuning also has limitations, notably its inability to generalize to rare out-of-distribution datasets with limited labeled samples [73; 40].
Contrary to the data-hungry nature of machine learning, humans demonstrate an exceptional ability in zero-shot and few-shot learning, where only a handful of labeled examples are needed to grasp a concept. Inspired by this, [24] proposed a human-like few-shot learning framework and boiled it down to the ability to compress data at inference time, measured by the universal information distance. Derived from a simple and accepted assumption in thermodynamics [3], the universal information distance emerges as the key component for making effective use of few labeled data. Hence, accurate approximations of the information distance can lead to improved learning, though its incomputability has posed significant challenges. This information distance, built on Kolmogorov complexity [34], is both data-type-agnostic and distribution-agnostic. Additionally, its parameter-free usage enables the metric's applicability across diverse applications. Inspired by this theory, we propose a novel method, GPT-AC, that leverages the knowledge GPTs learned during
pre-training. Our method tackles tasks traditionally challenging for prompt engineering or fine-tuning, including semantic similarity, zero-shot text classification and ranking.
Kolmogorov complexity, defined as the length of the shortest computer program that produces a target output, is often approximated by the compression length. At the same time, entropy coding algorithms attempt to approach the code length lower bound given by Shannon's source coding theorem: the entropy of the source. By using rich priors, such as pre-trained language models, that more accurately predict the source distribution, we can improve the compression ratio and more closely approximate Kolmogorov complexity. Despite the potential of large language model-based compressors, their direct application to downstream tasks is nearly infeasible due to speed constraints. In addition to the inference time required by the language model itself, the overhead of the compressor is even more substantial. Fortunately, the information distance only requires the compressed length rather than the actual compression of the text sequence. We demonstrate that, when GPT is used as the prior, the compression length under arithmetic coding is equivalent to the cumulative negative log probability of the text tokens. This easy-to-compute compression length enables us to efficiently approximate the universal information distance without the overhead of actual compression. By approximating normalized information distances [10; 32; 33] using GPT-based compression, we significantly enhance GPT's ability to quantitatively capture text similarities, which forms the foundation for its application to downstream NLP tasks.
Our contributions are as follows:
1. We propose a novel method that utilizes generative pre-trained language models to approximate information distance for few-shot learning without the need for fine-tuning or prompt engineering.
2. By connecting arithmetic coding's compression length to the cumulative negative log probabilities in GPT models, we efficiently compute and approximate the information distance derived from Kolmogorov complexity.
3. We validate the effectiveness of our method through experiments in semantic textual similarity, text classification and re-ranking under zero-shot and one-shot settings, exhibiting notable improvement over embedding and prompt baselines.
4. We also demonstrate that our lossless text compression method GPT-AC achieves a state-of-the-art compression ratio with a LLAMA2-7B backbone, highlighting the potential of pre-trained large language models as powerful priors for compression.
## 2 Related Works
### Few-shot Learning
Prior to the emergence of large pre-trained models, the majority of previous works on few-shot learning could be divided into two streams: meta/transfer-learning-based methods [64; 15; 54; 57] and data-augmentation-based methods [30; 44; 14; 18]. However, the former relies on constraining the hypothesis space with prior knowledge from other tasks or support datasets, while the latter depends on heuristics of the datasets, often accompanied by the smoothness assumption [61] (i.e., close data points in the input space share the same label). Pre-trained models, on the other hand, have incorporated prior knowledge during the pre-training stage and have proved to be excellent few-shot learners [5]. However, pre-trained models suffer from (1) high computational cost and (2) unsatisfactory performance on out-of-distribution datasets [73]. The problem of computational cost is especially prominent for large language models like GPT-3, which are infeasible to fine-tune locally. We utilize pre-trained language models for one-shot and zero-shot classification tasks with no fine-tuning required.
### Kolmogorov Complexity and Compression Distance
Information distance was first proposed by [3] as a universal similarity metric between two objects. It was proved to be universal in the sense that it is optimal and minorizes every other admissible metric [3]. Due to the problem of incomputability, [9; 33; 10] have derived computable versions of the information distance for tasks like clustering and plagiarism detection, shedding light on the possibility of using real-world compressors to approximate Kolmogorov complexity. These prior works utilize traditional compressors, indicating that the performance gain on downstream tasks mainly comes from the compressor-based distance metric.
Recently, [23] propose non-parametric learning by compression with latent variables (NPC-LV), where neural networks are incorporated into the information distance. They demonstrate that trained generative models like variational autoencoders can be used directly, with zero parameters, for downstream few-shot image classification. Also, [25] employ this method in text classification using GZIP, achieving competitive results compared to several widely-used deep learning approaches. However, it remains an open question how to incorporate pre-trained language models into this framework, which we aim to address in this paper. A recent study [71] explores the use of compression length for in-context example selection. They rely on a large candidate set and apply the model in a generation setting. In contrast, we focus on adapting generative models to zero/one-shot learning for text similarity tasks.
### Neural Compression
Our GPT-based compressor falls under the category of neural compression, where neural networks are used for data compression. Shannon's source coding theorem [51] establishes the lower bound of lossless compression on random variables with a given probability distribution. With near-optimal coding schemes, the bottleneck is the entropy approximation. Fortunately, deep generative models with explicit density estimation serve as entropy models that can learn adaptively. [59] propose Bits-Back with Asymmetric Numeral Systems (BB-ANS), a lossless compression algorithm based on VAEs. Bit-Swap [28] further improves BB-ANS by incorporating multiple latent variables and hierarchical networks. In addition to autoencoders, flow-based [47; 69] lossless compression [22] outperforms Bit-Swap and achieves state-of-the-art compression ratios on images. The development of deep neural networks also benefits lossless text compression. [19] use recurrent neural networks [50] combined with arithmetic coding [70] to achieve a higher compression ratio than GZIP. Recent advancements, such as the fast transformer-based general-purpose lossless compressor TRACE [39], have demonstrated promising results in enhancing compression performance with the transformer architecture. NNCP v3.1 [1], which adaptively encodes the source message with Transformers, achieves state-of-the-art performance on Matt Mahoney's Large Text Compression Benchmark2.
Footnote 2: [http://mattmahoney.net/dc/text.html](http://mattmahoney.net/dc/text.html)
### Pre-trained Models
Pre-training has been adopted in numerous deep learning models with the rise of the transformer [62] due to its ability to learn task-agnostic representations. In NLP, encoder-only transformers like BERT [13] have achieved impressive performance on the GLUE benchmark [68], including tasks like natural language inference and sentiment analysis, with only an MLP head and fine-tuning. Decoder-only transformers like GPT [46; 5] can treat downstream discriminative tasks in a generative way. However, previous works on few-shot learning using language models are either prompt-based [43; 42; 35] or fine-tuning-based [75; 72; 40], whereas in this work we propose a new way to leverage pre-trained language models for few-shot learning without fine-tuning or in-context learning.
## 3 Method
### Human-Like Few-Shot Learning and Universal Information Distance
We consider the following generalized human-like few-shot learning setting: assume a universe of objects \(\Omega\) comprising various concept classes. Given an unlabelled subset \(U\subset\Omega\), a hypothesis \(\phi\) is formulated. For any concept class \(c\), we have information \(D_{c}\) representing either a description or representations derived from a few examples. The goal is to determine the concept class for any new instance \(x\in\Omega\) under a computable metric \(\mathcal{M}\):
\[argmin_{c\in\mathcal{C}}\mathcal{M}(x,D_{c}|\phi(U)). \tag{1}\]
Here, \(\phi\) can be a pre-trained model that learned the distribution from the unlabeled data \(U\). For instance, GPT can be seen as the hypothesis \(\phi\) learned from a large unlabeled corpus.
This learning scenario differs from traditional machine learning settings as it permits extensive pre-training using unlabeled data but provides very limited labeled data for learning concept classes. Under this framework, the optimal metric \(\mathcal{M}\) for few-shot learning algorithms is proven to be the
universal information distance, defined via Kolmogorov complexity (details are given in the Appendix), based on the von Neumann-Landauer Principle. Formally, the universal information distance \(\mathcal{M}_{UID}\) between two objects \(x\) and \(y\) is defined as:
\[\mathcal{M}_{UID}(x,y)=\max\{K(x|y),K(y|x)\}, \tag{2}\]
\(K(x|y)\) is the Kolmogorov complexity of \(x\) given \(y\), or informally, the length of the shortest program that outputs \(x\) given input \(y\). Since the Kolmogorov complexity is not computable [78], it is often approximated via compression in practice.
### GPT-AC: Generative Pre-trained Transformer based Arithmetic Coding for Compression
In this section, we propose GPT-based Arithmetic Coding (GPT-AC) where GPT is integrated into adaptive arithmetic coding, an entropy-based compressor.
**GPT as the Entropy Model**
Consider a text \(T=(t_{1},t_{2},...,t_{n})\) composed of a sequence of tokens. Let \(\phi\) represent a GPT model, where \(\phi(t_{1:(i-1)})=P_{i}(t_{i}|t_{1},t_{2},...,t_{i-1})\) models the probability distribution \(P_{i}\) of the next token \(t_{i}\). The function \(\phi(T)\) outputs all next-token probability distributions \((P_{2},\cdots,P_{n+1})\). To derive the distribution for \(P_{1}\), an EOS (End Of Sentence) token is added at the start of the text as \(t_{0}\). For each token \(t_{i}\), the associated \(P_{i}\) serves as the entropy model for both encoding and decoding in the compressor.
**GPT-AC Encoding**
In the encoding phase, under the scheme of adaptive arithmetic coding, we start with an initial interval \(I^{0}=[0,1)\). For each token \(t_{i}\) in the sequence, we calculate the cumulative distribution function \(F_{i}(t_{i})\) and \(P_{i}(t_{i})\) based on \(\phi(t_{1:(i-1)})\). Then, the interval \(I=[I_{\text{low}},I_{\text{high}})\) is updated according to the range assigned to \(t_{i}\):
\[\begin{split}& I_{\text{low}}^{i}=I_{\text{low}}^{i-1}+(I_{\text{high}}^{i-1}-I_{\text{low}}^{i-1})\times F_{i}(t_{i}),\\ & I_{\text{high}}^{i}=I_{\text{low}}^{i-1}+(I_{\text{high}}^{i-1}-I_{\text{low}}^{i-1})\times(F_{i}(t_{i})+P_{i}(t_{i}))\end{split} \tag{3}\]
After updating \(I\) for each token in the sequence, we can pick any number \(x\) within the final interval to represent the entire text sequence.
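A toy rendering of this interval update is sketched below. It uses exact rational arithmetic for clarity, whereas a practical arithmetic coder would use finite-precision integers with renormalization; the `next_token_dist` callback stands in for the GPT prior \(\phi\).

```python
# Toy sketch of Eq. (3): shrink [low, high) by each token's CDF range.
from fractions import Fraction

def encode_intervals(tokens, next_token_dist):
    # next_token_dist(prefix) -> dict mapping each candidate token to its probability
    low, high = Fraction(0), Fraction(1)
    for i, t in enumerate(tokens):
        dist = next_token_dist(tokens[:i])
        cdf = Fraction(0)
        for sym, p in sorted(dist.items()):   # fixed symbol order = shared CDF
            if sym == t:
                break
            cdf += Fraction(p)
        width = high - low
        low, high = low + width * cdf, low + width * (cdf + Fraction(dist[t]))
    return low, high   # any number in [low, high) represents the whole sequence
```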
**GPT-AC Decoding**
When decoding the encoded message \(x\), the token \(t_{1}\) can be identified by finding the range \([F_{1}(t_{1}),F_{1}(t_{1})+P_{1}(t_{1}))\) that includes \(x\). The value of \(x\) is then updated by normalizing it within the range of \(t_{1}\), using the formula \(x\leftarrow\frac{x-F_{1}(t_{1})}{P_{1}(t_{1})}\). With this updated \(x\) and the next-token probability distribution \(P_{2}=\phi(t_{1})\), we can decode the next token \(t_{2}\). This process is repeated until an EOS token is encountered, indicating the end of the text. The text can be losslessly decoded using \(x\) and \(\phi\) alone.
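A matching decoder sketch is given below, with the same simplifications as the encoding sketch above (it reuses `Fraction`, expects the same `next_token_dist` callback, and assumes an explicit EOS token terminates the text).

```python
# Toy decoder: find the CDF range containing x, emit that token, renormalize x.
def decode_intervals(x, next_token_dist, eos_token):
    tokens = []
    while True:
        dist = next_token_dist(tokens)
        cdf = Fraction(0)
        for sym, p in sorted(dist.items()):
            p = Fraction(p)
            if cdf <= x < cdf + p:
                if sym == eos_token:
                    return tokens
                tokens.append(sym)
                x = (x - cdf) / p          # normalize x within the chosen range
                break
            cdf += p
```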
**Negative Log Probability as Compression Length**
During arithmetic encoding, the length of the interval \(I^{i}\) equals the length of \(I^{i-1}\) multiplied by \(P_{i}(t_{i})\). Starting from an initial interval of length 1, encoding the entire message results in a final interval of length \(\prod_{i=1}^{n}P_{i}(t_{i})\). The number of bits required to represent this final interval, and thus the message \(T\), is \(\sum_{i=1}^{n}-\log_{2}P_{i}(t_{i})\). This reveals a method to approximate the compression length directly without
Figure 1: Illustration of GPT-based Arithmetic Encoding
exactly performing the compression. With the triangular forward attention masking in GPT, we can pass the full tokenized text sequence to the model and obtain probability distributions for all tokens.
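This identity can be exercised directly with an off-the-shelf causal language model. The sketch below computes \(C(x)=\sum_{i}-\log_{2}P_{i}(x_{i})\) with HuggingFace GPT-2 in a single forward pass; as a simplification it scores tokens from the second position onward and omits the EOS-prepending trick described above for the first token.

```python
# Sketch: compression length in bits as the summed negative log2-probabilities
# of the tokens under GPT-2 (one forward pass thanks to causal masking).
import math
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def compression_length_bits(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids          # [1, n]
    logits = lm(ids).logits                                  # [1, n, vocab]
    logp = torch.log_softmax(logits[:, :-1], dim=-1)         # predicts tokens 2..n
    token_logp = logp.gather(-1, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    return float(-token_logp.sum() / math.log(2))            # nats -> bits

print(compression_length_bits("arithmetic coding with a GPT prior"))
```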
**GPT Pre-training Optimizes for Compression Length**
The optimization target during pre-training for auto-regressive models such as GPT is defined as:
\[L(T|p_{model})=-\log_{2}p_{model}(T)=-\log_{2}p_{model}(t_{1},t_{2},...,t_{n})=\sum_{i=1}^{n}-\log_{2}P_{i}(t_{i}).\]
For entropy coding, \(H(T)\triangleq\mathbb{E}_{p_{data}}[-\log p_{data}(T)]\) defines the optimal code length. Since \(p_{data}\) is usually unknown, we use the empirical distribution of the observations, \(\tilde{p}_{data}\), to approximate it:
\[H(T)=\mathbb{E}_{p_{data}}[-\log p_{data}(T)]\leq\mathbb{E}_{p_{data}}[-\log p_{model}(T)]\simeq\mathbb{E}_{\tilde{p}_{data}}[-\log p_{model}(T)]\]
According to The Shannon-Fano-Elias coding scheme [11], we can construct a prefix-free code of length \(-\log p_{\text{model}}(t_{1},t_{2},...,t_{n})+2\) bits. Consequently, the pre-training phase of GPT models is essentially learning a compressor that optimizes the coding length.
**The Rank Coding Variant**
In the method outlined above, we primarily employ arithmetic coding for text compression. An alternative variant of lossless coding uses rank instead of probability [21]. After the GPT model predicts the distribution of the next token, we can rank all tokens according to their probability. The target token is assigned the corresponding rank index, with higher probability tokens having smaller rank indices. In this variant, we approximate the compression length as \(\sum_{i=1}^{n}\log_{2}(rank_{i})\) where \(rank_{i}\) denotes the rank for token \(i\).
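A corresponding sketch of the rank-coding length, reusing `tok` and `lm` from the previous block, is shown below; ranks are 1-based, so the most probable token contributes \(\log_{2}1=0\) bits, matching the formula above.

```python
# Sketch: rank-coding length, sum_i log2(rank of the true token under the model).
@torch.no_grad()
def rank_coding_length_bits(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    logits = lm(ids).logits[:, :-1]                     # predictions for tokens 2..n
    target = ids[:, 1:]
    target_scores = logits.gather(-1, target.unsqueeze(-1))
    ranks = (logits > target_scores).sum(dim=-1) + 1    # 1-based ranks, [1, n-1]
    return float(torch.log2(ranks.float()).sum())
```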
**Applicability of Our Method**
When calculating the compression length, the algorithm is most efficient with transformers that use forward-only (causal) attention. This allows the entire text sequence to be processed in a single pass, ensuring efficient computation. However, through the use of a sliding-window technique, the method's applicability can be extended to all generative language models, covering decoder-only and encoder-decoder architectures.
### Universal Information Distance Approximation
Having computed the compression length using the GPT-AC method, we can now utilize it to approximate the universal information distance. Let \(x=\{x_{1},\cdots,x_{n}\}\) and \(y=\{y_{1},\cdots,y_{m}\}\) denote two tokenized text sequences, where each \(x_{i}\) or \(y_{i}\) represents a token in the sequence. We approximate \(K(x)\) using the compression length \(C(x)=\sum_{i=1}^{n}-\log_{2}P_{i}(x_{i})\) where \(P_{i}\) represents the probability distribution for \(x_{i}\) as predicted by the GPT model.
As in Equation (2), we also need \(K(x|y)\), which we approximate as follows: let \(P_{i}=\phi(y,x_{1:i-1})\) denote the probability distribution for token \(x_{i}\) output by the GPT model, given \(y=(y_{1},\cdots,y_{m})\) and the previous tokens of \(x\); then \(K(x|y)\) is estimated as \(C(x|y)=\sum_{i=1}^{n}-\log_{2}P_{i}(x_{i})\). A similar approach can be used to estimate \(K(y)\) and \(K(y|x)\). We denote all compression-based approximations by \(C(\cdot)\).
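Continuing the GPT-2 sketch from Section 3.2 (it reuses `tok`, `lm`, `torch`, and `math`), the conditional length \(C(x|y)\) can be estimated by scoring the tokens of \(x\) while \(y\) sits in the prefix; the whitespace separator joining the two texts is our own assumption.

```python
# Sketch: C(x|y) as the bits needed for x's tokens when y precedes them.
@torch.no_grad()
def conditional_length_bits(x: str, y: str, sep: str = " ") -> float:
    ids_y = tok(y + sep, return_tensors="pt").input_ids
    ids_x = tok(x, return_tensors="pt").input_ids
    ids = torch.cat([ids_y, ids_x], dim=-1)
    logp = torch.log_softmax(lm(ids).logits[:, :-1], dim=-1)
    token_logp = logp.gather(-1, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    n_x = ids_x.shape[1]
    return float(-token_logp[:, -n_x:].sum() / math.log(2))   # only x's tokens
```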
However, compression lengths vary with the length of the input text, so we need a normalized version to enable comparison across diverse object pairs. Several normalized measures exist. [33] introduced a normalized version, referred to as the Normalized Information Distance (NID):
\[\mathcal{M}_{max}(x,y)=\frac{\max\{K(x|y),K(y|x)\}}{\max\{K(x),K(y)\}} \tag{4}\]
To tackle challenges such as partial matching 3, [31] proposed the following variants of the universal distances suitable for broader application scenarios:
Footnote 3: Partial matching means situations where only portions of two objects match each other.
\[\mathcal{M}_{min}(x,y)=\frac{\min\{K(x|y),K(y|x)\}}{\min\{K(x),K(y)\}}. \tag{5}\]
[27] proposed the Compression-Based Dissimilarity Measure (CDM) for data mining applications, demonstrating its effectiveness in practice. This is rescaled to fit the range \([0,1)\):
\[\mathcal{M}_{mean}(x,y)=\frac{C(x|y)+C(y|x)}{C(x)+C(y)}=2\cdot CDM-1,\qquad CDM=\frac{C(xy)}{C(x)+C(y)} \tag{6}\]
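For reference, Equations (4)-(6) translate directly into code once the four compression lengths have been computed:

```python
# Normalized distances from C(x), C(y), C(x|y), and C(y|x).
def normalized_distances(c_x, c_y, c_x_given_y, c_y_given_x):
    return {
        "max":  max(c_x_given_y, c_y_given_x) / max(c_x, c_y),   # Eq. (4)
        "min":  min(c_x_given_y, c_y_given_x) / min(c_x, c_y),   # Eq. (5)
        "mean": (c_x_given_y + c_y_given_x) / (c_x + c_y),       # Eq. (6)
    }
```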
### Applications of Universal Information Distance
We will now explain how the aforementioned distances can be applied to various NLP tasks. To determine the text similarity score between two texts \(x,y\), we first compute their individual compression lengths \(C(x),C(y)\). We also calculate the joint and conditional compression lengths \(C(xy),C(x|y),C(y|x)\). Using these values, we compute the distance metrics defined in Section 3.3 as \(\mathcal{M}\). We can then apply these distance measures to specific tasks:
* For semantic textual similarity, we treat the two sentences as \(x\) and \(y\), and use \(\mathcal{M}\) as predictions.
* For zero-shot text classification, we treat the label descriptions as \(x\) and the multiple choice options as \(y\). For one-shot text classification, we consider the training sample as \(x\) and the test sample as \(y\). We classify the text sample by comparing \(\mathcal{M}\) values for different classes (a sketch follows this list).
* For text re-ranking, we treat the documents as \(x\) and the queries as \(y\), ranking according to \(\mathcal{M}\).
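As referenced in the classification item above, the snippet below sketches how the class with the smallest distance can be selected; the label names and reference texts in the usage comment are made-up placeholders, and `distance_fn` is any of the normalized distances built from the length estimates sketched earlier.

```python
# Sketch: zero/one-shot classification by nearest reference text under M.
def classify(test_text, class_refs, distance_fn):
    # class_refs: dict mapping label -> label description or one-shot example
    scores = {label: distance_fn(test_text, ref) for label, ref in class_refs.items()}
    return min(scores, key=scores.get)

# Hypothetical usage with the helpers defined above:
# dist = lambda a, b: normalized_distances(
#     compression_length_bits(a), compression_length_bits(b),
#     conditional_length_bits(a, b), conditional_length_bits(b, a))["mean"]
# label = classify("service was quick and friendly",
#                  {"positive": "a glowing review", "negative": "a scathing review"},
#                  distance_fn=dist)
```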
## 4 Experiments
Our experimental evaluation consists of four key components: lossless text compression and three downstream tasks, namely semantic textual similarity, text classification, and text re-ranking. For the downstream applications, we mainly conduct experiments with GPT-2 small (124M), comparing against GPT-2 embedding or prompt-tuning baselines and the BERT-base-uncased (110M) model from the HuggingFace Hub4. We take GPT-2 as an example due to its light weight and availability. However, the proposed method is not limited to GPT models. It can be readily applied to more advanced large language models, such as LLAMA [58], provided that the output probabilities are available.
Footnote 4: [https://huggingface.co/](https://huggingface.co/)
### Lossless Text Compression
In the Lossless Text Compression task, we assess our method on Enwik9 [53] and the first gigabyte of the BookCorpus [77] dataset. In addition to GPT-2 models, we test our method on LLAMA2-7B [58]. GPT-AC is benchmarked against both traditional methods, such as GZIP [12], and contemporary neural network-based techniques like DZIP [20] and TRACE [39]. In our implementation, GPT-AC processes chunks of 2,500 characters for GPT-2 and 10,000 characters for LLAMA2-7B independently. Although this slightly compromises the compression ratio, it enables parallel computation.
As shown in Table 1, GPT-AC significantly outperforms conventional methods like GZIP and 7z in compression ratio. Even with the GPT-2 small model, GPT-AC achieves a more than 2-fold improvement in compression ratio compared to the widely used GZIP on both the Enwik9 and BookCorpus datasets. On Enwik9, GPT-AC with Llama2-7B records a compression ratio of 15.56, a 67% improvement over the previous state of the art of 9.33, based on NNCP [1]5. As the language model grows, the compression ratio consistently improves, suggesting that larger and better-trained language models will further amplify these results.
Footnote 5: Refer to:[http://mattmahoney.net/dc/text.html](http://mattmahoney.net/dc/text.html)
Note that NNCP involves updating the parameters of a large transformer model (v3.1, 199M) during encoding. We could also achieve an even higher compression ratio with such encode-time optimization. However, encode-time optimization only increases the compression ratio as the input message length grows, and it would overfit to the specific message. With random initialization, the compression ratio is around 1.0 at the beginning of the input message, offering no benefit to similarity measurement.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline Model \(\rightarrow\) & \multicolumn{4}{c}{GPT-AC (Ours)} & GZIP & 7z & DZIP & TRACE & NNCP \\ Dataset \(\downarrow\) & Llama2-7B & GPT-2-L & GPT-2-M & GPT-2-S & & & & & \\ \hline \hline Enwik9 & **15.56** & 8.05 & 7.71 & 6.53 & 3.09 & 4.35 & 4.47 & 5.29 & 9.33 \\ BookCorpus & **10.55** & 8.34 & 7.89 & 7.22 & 2.77 & 3.80 & 3.95 & 4.58 & - \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Compression Ratio by Compression Method.** Note the compression ratio equals to _Original text length / Compressed text length_.
### Semantic Textual Similarity
For semantic textual similarity, we test the models on the Semantic Textual Similarity benchmark (STS-b) [8]. The dataset consists of sentence pairs with labels from 0 to 5 indicating semantic relatedness. We compare GPT-AC against GPT-2-emb [46], where we take the last-token embedding vector, and BERT-emb [13], where we take the averaged token embedding vector, which has been shown to be effective in previous studies [52]. We then calculate the cosine similarity between these vectors to serve as the distance measure. For GZIP, we follow [23; 25] and use the normalized compression distance as the metric.
As shown in Table 2, our method substantially outperforms the cosine similarity distance metrics derived from GPT-2 embeddings and shows moderate enhancement over those utilizing BERT embeddings. These results demonstrate the effectiveness of the approximated information distance in capturing semantic similarities.
### Text Classification
For Text Classification, we evaluate the models on PIQA (Physical Interaction: Question Answering) [4] and CaseHOLD (Case Holdings On Legal Decisions) [76] for zero-shot classification, and on SST-5 (sentiment analysis) [56], Medical abstracts [49], AG-News (news headlines) [74], and Banking77 (banking and finance) [7] for one-shot classification. We compare our method with three main approaches: 1) fine-tuning GPT-2 or BERT with a classification layer, 2) in-context learning with GPT-2 (refer to the Appendix for detailed settings), and 3) calculating the cosine similarity of Sentence-BERT (all-MiniLM-L12-v2) embeddings as a metric for classification.
As depicted in Table 3, in the zero-shot multiple-choice classification setting, the information distance approximated by GPT-AC delivers superior results to cosine-similarity distance metrics based on the embeddings from GPT-2, BERT, and even SBERT. Note that SBERT, which is fine-tuned on 1 billion high-quality labeled sentence pairs, does not fall under our category of human-like few-shot learning models; it is included to provide a point of reference.
In one-shot text classification, our method surpasses both fine-tuned GPT and BERT on all datasets. We also outperform the GPT-prompt version on all datasets except SST-5. Given that SST-5 is a widely used classification benchmark, we hypothesize that the superior performance of the
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Dataset & \# Test & GPT-AC (Ours) & GZIP & GPT-emb & BERT-emb \\ \hline \hline STS-12 & 3,108 & 40.2 & **50.4** & 5.4 & 30.9 \\ STS-13 & 1,500 & **66.0** & 48.4 & 14.6 & 59.9 \\ STS-14 & 3,750 & **55.3** & 43.3 & 10.9 & 47.7 \\ STS-15 & 3,000 & **70.3** & 59.1 & 9.6 & 60.3 \\ STS-16 & 1,186 & **69.5** & 59.4 & 26.8 & 60.3 \\ STS-b & 1,379 & **55.0** & 50.7 & 12.4 & 47.3 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Semantic Textual Similarity Performance. Spearman Rank Correlation \(\rho\) between the distance metrics and given labels for the STS datasets. \(\rho*100\) is reported.**
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline & \multicolumn{2}{c}{Model\(\rightarrow\)} & GPT-AC & GZIP & GPT-prompt & GPT & BERT & (SBERT) \\ Dataset\(\downarrow\) & \# C & Domain\(\downarrow\) & (Ours) & & & & \\ \hline \hline Zero-shot & Multiple & Choice & & & & & \\ PIQA & 2 & Reasoning & **61.5** & 53.4 & 50.5 & 49.2 & 50.1 & (56.5) \\ CaseHOLD & 5 & Legal & **58.3** & 52.4 & 20.3 & 19.9 & 35.0 & (50.6) \\ \hline One-shot & & & & & & & \\ AGNews & 4 & News & **47.8**\(\pm\)3.3 & 30.2\(\pm\)3.0 & 47.2\(\pm\)2.9 & 37.7\(\pm\)7.2 & 45.5\(\pm\)3.1 & (45.8\(\pm\)10.2) \\ Medical & 5 & Bio-Med & **27.9**\(\pm\)3.2 & 25.6\(\pm\)2.8 & 22.1\(\pm\)1.2 & 23.7\(\pm\)3.5 & 23.8\(\pm\)4.8 & (39.7\(\pm\)9.1) \\ SST5 & 5 & Sentiment & 26.8\(\pm\)3.1 & 21.2\(\pm\)2.7 & **29.8**\(\pm\)1.6 & 22.7\(\pm\)2.8 & 21.1\(\pm\)3.3 & (26.2\(\pm\)2.3) \\ Banking77 & 77 & Finance & **34.0**\(\pm\)1.3 & 20.3\(\pm\)1.5 & - & 21.7\(\pm\)1.7 & 24.5\(\pm\)3.9 & (53.1\(\pm\)1.9) \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Text Classification Accuracy (100%). We report the averaged accuracy across 5 runs with different random seeds, together for the standard deviations. This does not apply to zero-shot experiments because the models do not contain randomness.**
prompt approach could be due to data leakage during GPT pre-training. Moreover, we did not apply the GPT-prompt method to the Banking77 dataset because accommodating one-shot samples for 77 classes [7] within the GPT-2 prompt proves challenging, and adjusting the prompt can be complex. This issue represents a significant hurdle when applying GPT-2 with in-context learning.
### Text Re-ranking
For Text Re-Ranking, we evaluate the models on various domain-specific zero-shot text retrieval datasets, including Trec-Covid [65], Trec-News [55], SciFact [67], BioASQ [60], FiQA-2018 [38], and ArguAna [66]. Given a query, we first retrieve the top relevant documents with BM25 [48] using the Elastic Search API6. We then re-rank the documents with the models. We compare our system with the original BM25 ranking, the Dense Passage Retrieval (DPR) [26] model (a BERT-based model already fine-tuned on MS MARCO [6] for ranking), and a GZIP text compressor [25].
Footnote 6: [https://github.com/faceface/face](https://github.com/faceface/face)
Figure 2: **Relation between Prediction Distance Ratio and One-shot Classification Accuracy. Experiment result under \(\mathcal{M}_{mean}\) with Log-Prob.**
### Information Distance and Classification Accuracy
In Figure 2, we illustrate how performance varies when test cases have different distance scores. For each test case, we compute the prediction distance ratio \(R_{pred}(x)=\frac{\mathcal{M}(x,D_{c^{*}})}{\frac{1}{|C|}\sum_{c\in C}\mathcal{M}(x,D_{c})}\). Here, \(D_{c}\) represents the one-shot example of class \(c\), \(C\) denotes the set of all classes, and \(c^{*}\) stands for the class predicted under metric \(\mathcal{M}\). A smaller \(R_{pred}\) suggests that the predicted class's distance deviates more from the average distance. We then group all the test samples according to their \(R_{pred}\) value, with each group containing 10% of the data. In Figure 2, the x-axis represents the average \(R_{pred}\) within each group, and the y-axis represents the group accuracy. The plot indicates that the further the predicted class deviates from the average, the better our method performs.
## 6 Conclusion and Discussion
In this work, we introduce GPT-based Compression, a novel approach that leverages GPT models to estimate the information distance for few-shot learning. Our proposed method facilitates an out-of-the-box application of generative language models in zero-shot and one-shot scenarios without fine-tuning or prompting. This enhances the generalizability of pre-trained generative models, as demonstrated by our experiments across various downstream NLP tasks. While this method can synergize with existing techniques such as further fine-tuning and various prompting strategies, we leave these combinations as potential areas for future research.
Tapping into the capabilities of pre-trained language models can bring about significant improvements in lossless text compression. Our experiments indicate a direct relationship between the scale of the language model and the improvement in compression ratio. Performance can also be elevated with fine-tuning during encoding. As pre-trained large language models become more accessible, integrating LLM-driven compression into practical applications can lead to significant advantages, including reduced storage expenses and minimized transmission overheads in various contexts.
The universal information distance is foundational, unifying various popular deep learning approaches to few-shot learning. For example, the Siamese Network [29] uses twin networks to extract features, where \(\mathcal{M}\) corresponds to a contrastive loss; the Prototypical Network [54] optimizes \(\mathcal{M}\) to learn a better \(D_{c}\) in the embedding space; the Bi-Encoder architecture used in SBERT can also be unified under this view, where \(\mathcal{M}\) is cosine similarity.
Lastly, we wish to highlight the distinction between two learning paradigms: large-data dependent learning and human-like few-shot learning. Despite the impressive ability of recent GPT models to learn from vast data, we contend that the creation of new concepts and ideas will predominantly occur in a few-shot learning manner, regardless of future advancements in these models. In the context of few-shot learning, where labeled data are scarce and cannot be used to approximate the non-computable information distance, both humans and machines are poised on an equal footing to unearth new regularities that augment compression.
## Limitations
For downstream NLP tasks, our experiments use older versions of pre-trained language models, owing to computational constraints and limited access.
2305.07478 | Professional Ethics by Design: Co-creating Codes of Conduct for
Computational Practice | This paper deals with the importance of developing codes of conduct for
practitioners--be it journalists, doctors, attorneys, or other
professions--that are encountering ethical issues when using computation, but
do not have access to any framework of reference as to how to address those. At
the same time, legal and technological developments are calling for
establishing such guidelines, as shown in the European Union's and the United
States' efforts in regulating a wide array of artificial intelligence systems,
and in the resurgence of rule-based models through 'neurosymbolic' AI, a hybrid
format that combines them with neural methods. Against this backdrop, we argue
for taking a design-inspired approach when encoding professional ethics into a
computational form, so as to co-create codes of conduct for computational
practice across a wide range of fields. | Samuel Danzon-Chambaud, Marguerite Foissac | 2023-05-12T13:46:32Z | http://arxiv.org/abs/2305.07478v1 | # Professional Ethics by Design: Co-creating Codes of Conduct for Computational Practice
###### Abstract.
This paper deals with the importance of developing codes of conduct for practitioners--be it journalists, doctors, attorneys, or other professions--that are encountering ethical issues when using computation, but do not have access to any framework of reference as to how to address those. At the same time, legal and technological developments are calling for establishing such guidelines, as shown in the European Union's and the United States' efforts in regulating a wide array of artificial intelligence systems, and in the resurgence of rule-based models through "neurosymbolic" AI, a hybrid format that combines them with neural methods. Against this backdrop, we argue for taking a design-inspired approach when encoding professional ethics into a computational form, so as to co-create codes of conduct for computational practice across a wide range of fields.
Social design, Game design, Professional ethics, Artificial intelligence
## 1. Introduction
In 2019, a team of journalists and developers at the BBC worked on setting up automated news--that is, automated text generation for journalistic purposes [7][17]--to cover the results of the general election in the United Kingdom. In doing so, they were faced with a somewhat unconventional challenge: having to delineate the exact journalistic rules that would go into the algorithm behind automated news [12]. For instance, this could involve reflections on determining a threshold to qualify the magnitude of a win. By how many votes should we tell the algorithm that it is "a large victory" as opposed to a "narrow win"? Likewise, below which percentage can it be considered a crushing defeat?
In journalism, as in any other practice governed by professional ethics (for instance healthcare or law practice), work-related questions encountered on a day-to-day basis are usually addressed _via_ professional codes of conduct, which give recommendations as to a preferred way of acting in a given set of circumstances (e.g., the BBC's Editorial Guidelines, the American Medical Association's Code of Medical Ethics, bar associations' codes of conduct); however, these recommendations generally concern "real-life" situations and are not applicable to the realm of algorithms and computation. Albeit not directly related to professional codes of conduct, a close enough example where computation and professional ethics intermingle is the Handbook of Sustainable Design of Digital Services (GR491, which frog's parent company, Capgemini, was involved in [25]) launched by the French Institute for Sustainable IT (Institut du numerique
responsable or INR). A set of 516 criteria that help digital professionals like project managers, front-end developers or interaction designers with reducing the environmental and social repercussions of the digital services they set up [20], GR491 has been collectively conceived by a working group of volunteers that included experts in UX design and sustainability.
Given the overwhelming role taken by datafication and information processing today--which brings about democratic concerns like filter bubbles [32], surveillance [41] and race and gender considerations [15]--we believe it is critical to reflect on computational aspects that are not yet a part of professional ethics. In this position paper, we first detail the legal and technological context that calls for developing professional codes of conduct for computational practice, then suggest a design-inspired approach that would be best suited to developing those. Ultimately, our goal is to bring forth a standardized set of procedures that can be emulated across disciplines, and where practitioners remain at the center of it.
## 2 Legal and Technical Context
Following the adoption of the General Data Protection Regulation (GDPR), which set limits to the business of datafication, the European Union has undertaken to regulate AI so that it is safe for users and complies with existing laws [10]. The General Approach adopted at the end of 2022 is quite extensive: it is inclusive of both rule-based and machine learning systems and takes a risk-based approach to regulating artificial intelligence. The main criticism, though, generally involves the idea that it would slow down innovation, as reported by a group of AI associations [26]. By contrast, the United States' Blueprint for an AI Bill of Rights, unveiled in October 2022, features a few guiding principles followed by a much more detailed Technical Companion, which provides guidance as to how to implement them. The Blueprint exhibits a more all-encompassing definition of AI than the EU's, yet shares similar concerns on preserving fundamental rights like safety for users [38]. That said, the US approach has been criticized for "lacking teeth" as it misses out on enforcement aspects [18][21][11].
Most interesting to us here are two dispositions contained in the EU's General Approach and in the US' Blueprint: first, Article 69 of the General Approach specifically refers to the drawing up of codes of conduct that relate, among others, to "stakeholders participation in the design and development of the AI systems"; second, in its recommendations for "Safe and Effective Systems", the Technical Companion to the Blueprint stresses the importance of having "early-stage consultation" with impacted communities, but also with relevant stakeholders like "subject matter, sector-specific, and context-specific experts". Taken together, this testifies to lawmakers' inclination toward an "Ethics by Design" approach to regulating AI--that is, when ethical considerations are thought of well ahead of writing the first line of code [16]. Our suggestion for establishing professional codes of conduct for computational practice therefore seems timely and relevant, as it would enable professional ethics to be embedded in the design of AI systems.
Besides complying with this regulatory background, coming up with professional codes of conduct for computational practice could also constitute an asset in the development of "neurosymbolic" AI systems, which may call for increased attention to encoding expert knowledge into algorithmic rules. Neurosymbolic systems generally refer to a blend of the two conflicting positions that have dominated much of the history of AI: on the one hand, the symbolic approach advocated by McCarthy, Minsky, Simon and Newell--which aimed at translating a wide vision of human understanding into computer code--and on the other hand, connectionist methods brought forth by Rosenblatt and, later, LeCun, Hinton and Bengio--which are rather focused on churning through large amounts of data using one or several layers of artificial "neurons" that mimic the functioning of the brain, relying for that on an activation function. Despite being the prevailing perspective up until the 1990s and leaving people in awe when IBM's Deep Blue beat chess champion Garry Kasparov,
symbolic AI has since waned and given way to connectionist approaches [8]: those have, indeed, made impressive breakthroughs in recent years, especially in the realm of computer vision as shown in Krizhevsky, Sutskever and Hinton's groundbreaking neural architecture [24]. However, connectionism has also been criticized for, among others, being too opaque and, by extension, too unpredictable, thus reinforcing the idea that neurosymbolic systems will be at the forefront of AI research in the coming years [28][29][1].
According to Kautz' taxonomy [22], as of late significant AI breakthroughs--including some large language models--have incorporated a neurosymbolic dimension: for example, this is the case of Google's subsidiary DeepMind that beat world champion Lee Sedol at the very complex game of Go [34], of software that learns by "looking at images and reading paired questions and answers" similarly to the way a human does [27, p. 1] and, more recently, of a program developed by Meta that, as pointed out by Marcus and Davis [30], makes use of neurosymbolic elements to play the highly reflective game Diplomacy at human-level performance [31]. Even though much work remains to be done on the best way forward to integrate connectionist and symbolic approaches [2][22], this is nonetheless clear evidence of the resurgence of rule-based methods under this hybrid format: as such, professional codes of conduct for computational practice could be critical to encoding professional ethics into a new generation of neurosymbolic AI systems. As we will show next, a human-centric take on design that is grounded in social responsibility appears to be the most relevant means of creating these guidelines.
## 3 The need for a design-inspired approach
As a discipline that has become increasingly oriented toward co-creation [6], design can be seen as the most appropriate framework to work with to come up with professional codes of conduct for computational practice. At its core, design takes a holistic look at the relationship between designed artifacts, people that are exposed to these artifacts and associated social, cultural and business contexts: as such, it primarily pays attention to user experience as its original focus is on valuing users first and foremost. That said, there is growing consensus now that design can also lead the way toward social good as it provides a unique set of skills, tools and methods that can be used in that regard [5]. According to Tromp, Hekkert and Verbeek [35], the idea is that design transcends problems that relate to usability only (i.e., user-centered design) to deal with questions that address social change as well (i.e., human-centered design). Thus, _social design_ or _design for social innovation_--as this vision is called [36][14]--is much relevant in a domain like professional ethics where the goal is to foster common good. Going into the specifics, we could follow one of the many design methodologies that are available to date [23], and which generally start with a phase of immersion and understanding (e.g., observations, interviews, surveys), followed by a key phase of ideation that can most notably be done through co-conception, before switching to a prototyping phase and, finally, a stage of testing and evaluation.
Relevant to co-conception here are some of the attributes provided by _game design_, another design stream with connections to two of the most prominent views on the act of designing: one carried by Simon (also mentioned above in the context of symbolic AI), whose focus is on rational problem-solving through decomposition [19][40]; the other advocated by Schon, according to whom reflection and action feed into each other in a sort of iterative loop [37]. Close to Simon's ideas is Bjork, Lundgren and Holopainen's [4] inventory of over 200 game design blocks or "patterns" (e.g., "paper rock scissors"), whereas Bateman and Boons' mention of Japanese game designers--which, according to them [3, p. xii], "display a kind of holistic thinking that defies decomposition into method" -- sits closer to Schon's argument, but is harder to encapsulate into a workable model [9]. During the co-conception phase, we could then draw on a Simon-inspired approach to game design so as to encourage practitioners to come up with their own ideas on how to translate
professional ethics into a computational format, which also presents the advantage of sharing the same epistemological roots.
## 4 Conclusion
In this paper, we detailed the legal and technological context that calls for developing professional codes of conduct for computational practice: first, the European Union's proposed AI Act and the United States' Blueprint for an AI Bill of Rights are both giving weight to an Ethics by Design approach where stakeholders are involved in the conception and development of AI systems, which shows the importance of encoding professional ethics into a computational form from the very beginning; second, the resurgence of rule-based models through neurosymbolic AI makes for having such guidelines already established, in order to integrate those into the development of future systems. What is more, we have also made the case for taking a social design perspective when conceiving these professional codes of conduct, which can be reinforced by a Simonian approach to game design in the co-conception phase.
The strength of using such a design-inspired approach rests in the fact that it is not limited to any one domain, but--on the contrary--can be extended to many fields of application, like professional ethics within the journalistic, medical or legal community. Additionally, it brings to light another core issue, which is translating the specifics of any real-life situation into a form of abstraction that is suitable to a computational format, for which a proper design methodology is yet to be developed. If anything, New Zealand's efforts in rendering the law into the form of computer code show the importance of undertaking this [13]: the program's ambition is to interact with businesses' and individuals' software so that it helps them better understand regulations and thus comply with them. This is very much in line with Wing's observation on "computational thinking" [39], a way of solving problems, designing systems, and understanding human behavior that has abstraction and decomposition at its heart, but which also carries the complex and sometimes opaque task of translating human semantics into computer syntax [33].
## Acknowledgments
We are grateful to frogLab's director, Clement Bataille, and to our colleagues Rose Dumesny, Benjamin Martin and Yasmine Saleh for their insights and encouragement.
|
2306.06236 | iPLAN: Intent-Aware Planning in Heterogeneous Traffic via Distributed
Multi-Agent Reinforcement Learning | Navigating safely and efficiently in dense and heterogeneous traffic
scenarios is challenging for autonomous vehicles (AVs) due to their inability
to infer the behaviors or intentions of nearby drivers. In this work, we
introduce a distributed multi-agent reinforcement learning (MARL) algorithm
that can predict trajectories and intents in dense and heterogeneous traffic
scenarios. Our approach for intent-aware planning, iPLAN, allows agents to
infer nearby drivers' intents solely from their local observations. We model
two distinct incentives for agents' strategies: Behavioral Incentive for
high-level decision-making based on their driving behavior or personality and
Instant Incentive for motion planning for collision avoidance based on the
current traffic state. Our approach enables agents to infer their opponents'
behavior incentives and integrate this inferred information into their
decision-making and motion-planning processes. We perform experiments on two
simulation environments, Non-Cooperative Navigation and Heterogeneous Highway.
In Heterogeneous Highway, results show that, compared with centralized training
decentralized execution (CTDE) MARL baselines such as QMIX and MAPPO, our
method yields a 4.3% and 38.4% higher episodic reward in mild and chaotic
traffic, with 48.1% higher success rate and 80.6% longer survival time in
chaotic traffic. We also compare with a decentralized training decentralized
execution (DTDE) baseline IPPO and demonstrate a higher episodic reward of
12.7% and 6.3% in mild traffic and chaotic traffic, 25.3% higher success rate,
and 13.7% longer survival time. | Xiyang Wu, Rohan Chandra, Tianrui Guan, Amrit Singh Bedi, Dinesh Manocha | 2023-06-09T20:12:02Z | http://arxiv.org/abs/2306.06236v3 | # IPLAN: Intent-Aware Planning in Heterogeneous Traffic via Distributed Multi-Agent RL
###### Abstract
Navigating safely and efficiently in dense and heterogeneous traffic scenarios is challenging for autonomous vehicles (AVs) due to their inability to infer the behaviors or intentions of nearby drivers. In this work, we introduce a distributed multi-agent reinforcement learning (MARL) algorithm that can predict trajectories and intents in dense and heterogeneous traffic scenarios. Our approach for intent-aware planning, iplan, allows agents to infer nearby drivers' intents solely from their local observations. We model two distinct _incentives_ for agents' strategies: _Behavioral Incentive_ for high-level decision-making based on their driving behavior or personality and _Instant Incentive_ for motion planning for collision avoidance based on the current traffic state. Our approach enables agents to infer their opponents' behavior incentives and integrate this inferred information into their decision-making and motion-planning processes. We perform experiments on two simulation environments, Non-Cooperative Navigation and Heterogeneous Highway. In Heterogeneous Highway, results show that, compared with centralized training decentralized execution (CTDE) MARL baselines such as QMIX and MAPPO, our method yields a \(4.3\%\) and \(38.4\%\) higher episodic reward in _mild_ and _chaotic_ traffic, with \(48.1\%\) higher success rate and \(80.6\%\) longer survival time in _chaotic_ traffic. We also compare with a decentralized training decentralized execution (DTDE) baseline IPPO and demonstrate a higher episodic reward of \(12.7\%\) and \(6.3\%\) in _mild_ traffic and _chaotic_ traffic, \(25.3\%\) higher success rate, and \(13.7\%\) longer survival time.
Keywords:Autonomous Driving, Multi-agent Reinforcement Learning, Representation Learning
## 1 Introduction
In this work, we consider the task of trajectory planning for autonomous vehicles in dense and heterogeneous traffic. High density is typically measured in the number of vehicles per square meter and high heterogeneity refers to a large variance in agents' driving styles ranging from aggressive to conservative, vehicle dynamics, and vehicle types [1]. For example, these agents may include two-wheelers, cars, buses, and trucks. The key challenge to efficient trajectory planning in such environments is to be able to accurately infer the behavior of these heterogeneous agents [2]. Therefore, many solutions perform trajectory planning by jointly predicting the agents' future _trajectories_ along with their _intent_[3].
Trajectory prediction is the task of predicting the future states of an agent [4] which typically consists of spatial coordinates, and heading angle, but may also include first-order information such as velocity. Intent prediction focuses on inferring neighboring agents' behavior using local information [5]. In the context of autonomous driving, some studies have approached intent prediction by classifying driving behaviors into predefined classes [6; 2] such as aggressive or conservative. Although many methods for joint trajectory and intent prediction [7; 8; 3; 5] have been extensively
studied for planning in both industry and academia, most of the existing approaches are trained and evaluated on datasets like The Waymo Open Motion Dataset [9] and the NuScenes dataset [10], which primarily consist of homogeneous traffic and lack variation in driver behavior [3]. As a result, these methods [7; 8; 3; 5] often struggle to reliably predict the intentions of heterogeneous agents in unstructured and dense traffic [11].
On the other hand, simulators such as CARLA are designed to generate traffic agents with diverse, kinodynamically feasible behaviors [12], addressing the lack of diverse behavior in datasets. Most of the joint trajectory and intent prediction methods evaluated on the datasets discussed above can be used with such simulators [13; 4]. But these methods typically require generating and collecting data in offline storage, which defeats the purpose of a simulator [14]. Complementary to these offline approaches, simulators [12] also offer the capability to model multiple agents and their interactions simultaneously via multi-agent reinforcement learning (MARL), where the learning algorithm can engage with the simulation environment. MARL has demonstrated remarkable success in many different multi-agent domains such as Go [15], chess [16], poker [17], Dota2 [18], and StarCraft [19]. However, its applicability to autonomous driving has been relatively sparse [20].
Deep MARL for trajectory planning in autonomous driving only recently achieved significant momentum with the Highway-Env simulation environment [21] proposed in the author's doctoral thesis [22]. Since then, several deep MARL approaches have been proposed [23; 24] for trajectory planning, but these methods do not extend to heterogeneous traffic and also assume agents can communicate and share information with each other. To the best of our knowledge, there is no prior decentralized training decentralized execution (DTDE) MARL approach for joint intent and trajectory prediction for AVs in heterogeneous traffic.
**Main Contributions:** In this paper, we propose a new intent-aware trajectory planning algorithm for autonomous driving in dense and heterogeneous traffic environments. We cast the autonomous driving problem as a hidden parameter partially observable stochastic game (HiP-POSG) [25; 26] and solve it using a DTDE MARL framework, called iplan, built around a joint intent and trajectory prediction encoder-decoder architecture. Given the current traffic conditions and historical observations, iplan computes the optimal multi-agent policy for each agent in the environment, relying solely on local observations without weight-sharing or communication.
Our main contributions include:
1. To the best of our knowledge, we propose the first DTDE MARL algorithm for joint trajectory and intent prediction for autonomous vehicles in dense and heterogeneous environments. Our algorithm is fully decentralized without weight sharing, communication, or centralized critics, and can handle variable agents across episodes.
2. We model an explicit representation of agents' private incentives that include \((i)\)_Behavioral Incentive_ for high-level decision-making strategy that sets planning sub-goals and \((ii)\)_Instant Incentive_ for low-level motion planning to execute sub-goals. These incentives enable behavior-aware motion forecasting, which is more suited for heterogeneous traffic.
3. We perform experiments on two simulation environments, Non-Cooperative Navigation [27] and Heterogeneous Highway [21]. The results show that, compared to centralized training decentralized execution (CTDE) MARL baselines like QMIX and MAPPO, our method yields a \(4.3\%\) and \(38.4\%\) higher episodic reward in _mild_ and _chaotic_ traffic and is \(48.1\%\) more successful with an \(80.6\%\) longer survival time in _chaotic_ traffic in Heterogeneous Highway. Compared to the DTDE baseline IPPO, we demonstrate a higher episodic reward of \(12.7\%\) and \(6.3\%\) in _mild_ traffic and _chaotic_ traffic, a \(25.3\%\) higher success rate, and \(13.7\%\) longer survival time in the Heterogeneous Highway.
## 2 Related Work
**Trajectory and Intent Prediction for Autonomous Driving.** Trajectory prediction is a fundamental task in autonomous driving [28; 29; 30]. TraPHic and RobustTP [31; 8] use an LSTM-CNN framework to predict trajectories in dense and complex traffic. TNT [32] uses target prediction, motion estimation, and ranking-based trajectory selection to predict future trajectories. DESIRE [4] uses sample generation and trajectory ranking for trajectory prediction. PRECOG [13] combines conditioned trajectory forecasting with planning objectives for AVs. Additionally, many methods
focus on intent prediction to gain a better understanding of interactions between vehicles when predicting trajectories. Intent prediction can be done by physics-based methods like the Kalman filter [33] or Monte Carlo [34], classical machine learning like Gaussian processes (GP) [35], Hidden Markov Models (HMM) [36], and Monte Carlo Tree Search (MCTS) [37], or deep learning-based methods such as Trajectron++ and CS-LSTM [7; 38]. [39] uses a Seq2Seq framework to encode agents' observations over neighboring vehicles as their social context for trajectory forecasting and decision-making. [40] uses temporal smoothness in attention modeling for interactions and a sequential model for trajectory prediction. However, most methods overlook variations in driving behaviors, which deteriorates their reliability in heterogeneous traffic.
**Intent-aware Multi-agent Reinforcement Learning.** Since autonomous driving is a large-scale and non-cooperative [41] scenario, awareness of opponents' incentives is particularly important when implementing MARL for it. Intent-aware multi-agent reinforcement learning [5] estimates an intrinsic value that represents opponents' intentions for communication [42] or decision-making. Many intent inference modules are based on Theory of Mind (ToM) [43] reasoning or social attention-related mechanisms [44; 45]. [46] uses ToM reasoning over opponents' reward functions from their historical behaviors in performing multi-agent inverse reinforcement learning (MAIRL). [47] uses game-theoretic ideas to reason about other agents' incentives and help decentralized planning among strategic agents. However, many prior works oversimplify the intent inference and make prior assumptions about the content of intent. In the real world, agents' incentives are more complex and intractable during interactions among large groups of agents, so a more general and high-level incentive representation is needed in intent-aware MARL.
**Opponent Modeling.** Opponent modeling [48] in multi-agent reinforcement learning usually deploys various inference mechanisms to understand and predict other agents' policies. Opponent modeling can be done either by estimating others' actions and safety via Gaussian Processes [49] or by generating embeddings representing opponents' observations and actions [50]. Inferring opponents' policies helps to interpret peer agents' actions [51] and makes agents more adaptive when encountering new partners [52]. Notably, many works [53; 54] reveal the phenomenon whereby ego agents' policies also influence opponents' policies. To track the dynamic variation of opponents' strategies caused by an ego agent's influence, [55; 56] propose latent representations to model opponents' strategies and influence, based on their findings on the underlying structure in agents' strategy space. [57] provides a causal influence mechanism over opponents' actions and defines an influential reward over actions with high influence over others' policies. [58] proposes an optimization objective that accounts for the long-term impact of ego agents' behavior in opponent modeling. A considerable limitation of many current methodologies is the underlying assumption that agents continually interact with a consistent set of opponents across episodes. This assumption is a poor fit for real-world autonomous driving contexts. On roads, drivers constantly come across different vehicles and drivers, necessitating the ability to infer the intentions of new opponents with minimal prior knowledge.
## 3 Problem Formulation
**Problem Setting and Assumptions:** We consider a multi-agent scenario with \(N\geq 2\) non-cooperative agents [59], _i.e._, agents are controlled by individual policies that maximize their own reward without weight sharing or communication. In each episode, agents interact with one another and gain general experience without any prior knowledge about a specific agent from previous episodes. Agents' strategies remain the same within one episode, though strategies may evolve between episodes. We assume that all agents are driven by motivations behind their actions. These motivations can arise from instantaneous reactions to environmental changes or more enduring preferences. We denote them as _incentives_ for agents' strategies. While these incentives are private and not explicitly known to other agents, they can be discerned through observing agents' strategies that offer insights into the incentives behind agents' actions. In this work, we explicitly model these private incentives with hidden parameters representing latent states. Therefore, we formulate this problem as a multi-agent hidden parameter partially observable stochastic game [60], or HiP-POSG1.
Footnote 1: an extension of the HiP-POMDP [25; 26]
**Task and objective:** We consider the tuple
\[\left\langle N,\mathcal{S},\left\{\mathcal{A}_{i}\right\}_{i=1}^{N},\left\{ \mathcal{O}_{i}\right\}_{i=1}^{N},\left\{\Omega_{i}\right\}_{i=1}^{N},\left\{ \mathcal{Z}_{i}\right\}_{i=1}^{N},\left\{f_{i}\right\}_{i=1}^{N},\mathcal{T}, \left\{r_{i}\right\}_{i=1}^{N},\gamma\right\rangle, \tag{1}\]
where \(N\) is the number of agents. \(\mathcal{S}\) is the set of states. \(\mathcal{A}_{i}\) is the set of actions for agent \(i\). \(\mathcal{O}_{i}\) is the observation set of agent \(i\) of the global state \(S\in\mathcal{S}\), generated by agent \(i\)'s observation function \(\Omega_{i}:\mathcal{S}\rightarrow\mathcal{O}_{i}\). In our problem, agent \(i\)'s observation \(\mathbf{o}_{i}^{t}\) at time \(t\) can be further specified as \(\mathbf{o}_{i}^{t}=\{o_{i,j}^{t}\}_{j\in\mathcal{N}_{i}}\), where \(\mathcal{N}_{i}\) refers to the set of agents \(j\) in the neighborhood of \(i\). The bold \(\mathbf{o}_{i}^{t}\) denotes the set of agent \(i\)'s observations of its neighbors at time \(t\). We denote the sequence of agent \(i\)'s historical observations \(o_{i,j}\) of opponent \(j\) up to time \(t\) as \(h_{i,j}^{t}=\{o_{i,j}^{k}\}_{k=1}^{t}\). The bold \(\mathbf{h}_{i}^{t}=\{\mathbf{o}_{i}^{k}\}_{k=1}^{t}\) denotes agent \(i\)'s observation history of its neighbors. Note that agent \(i\)'s observation history of agent \(j\) only consists of its observations of agent \(j\)'s states, while agent \(j\)'s actions and rewards are unobservable to others. \(\mathcal{Z}_{i}\) denotes the latent state space that represents the _incentive_ of agent \(i\)'s strategy. \(f_{i}:\mathcal{O}_{i}^{1}\times\mathcal{O}_{i}^{2}\times\ldots\times\mathcal{O}_{i}^{t}\times\mathcal{Z}_{j}\rightarrow\mathcal{Z}_{j}\) is agent \(i\)'s incentive inference function that makes an estimation \(\hat{z}_{i,j}\) of its opponent \(j\)'s actual incentive \(z_{j}\) from its observation history \(h_{i,j}^{t}\) of opponent \(j\) up to time \(t\) and its past estimation of \(z_{j}\). Here, we assume that agent \(i\)'s estimation \(\hat{z}_{i,j}\) of agent \(j\)'s incentive belongs to the same latent state space \(\mathcal{Z}_{j}\) as agent \(j\)'s actual incentive \(z_{j}\). \(\mathcal{T}:\mathcal{S}\times\mathcal{A}_{1}\times\mathcal{A}_{2}\times\ldots\times\mathcal{A}_{N}\rightarrow\Delta(\mathcal{S})\) is the (stochastic) transition matrix between global states. \(r_{i}:\mathcal{S}\times\mathcal{A}_{1}\times\mathcal{A}_{2}\times\ldots\times\mathcal{A}_{N}\rightarrow\mathbb{R}\) is the reward function for agent \(i\). \(\gamma\) is the reward discount factor. Agent \(i\) decides its action \(a_{i}\in\mathcal{A}_{i}\) with policy \(\pi_{i}:\mathcal{O}_{1}^{t}\times\mathcal{O}_{2}^{t}\times\ldots\times\mathcal{O}_{N}^{t}\times\mathcal{Z}_{1}\times\mathcal{Z}_{2}\times\ldots\times\mathcal{Z}_{N}\rightarrow\Delta(\mathcal{A}_{i})\) given its observations \(\mathbf{o}_{i}^{t}\), its own incentive \(z_{i}\), and the estimated opponents' incentives \(\hat{z}_{i,j}^{t}\) at time \(t\).
The objective of agent \(i\) is to find the optimal policy \(\pi_{i}^{*}\), maximizing its \(\gamma\)-discounted cumulative rewards over an episode of length \(T\). The objective equation is given by
\[\pi_{i}^{*}=\arg\max_{\pi_{i}}\mathbb{E}_{\pi_{i}}\left[\sum_{t=1}^{T}\gamma^{ t}r_{i}\left(s^{t},\left\{a_{i}^{t}\right\}_{i=1}^{N}\right)\right] \tag{2}\]
where \(r_{i}\) is the reward function of agent \(i\).
**Incentive Latent Representation.** In this work, we assume that agents' actions are motivated by \((i)\) long-term planning tied to an agent's driving behavior or personality and \((ii)\) short-term collision avoidance related to the current traffic state. To this end, we decouple agent \(i\)'s incentive \(z_{i}\) into a vector \(z_{i}=\{\beta_{i},\zeta_{i}\}\). Our formulation is related to the task and motion planning literature [61] where the behavior incentive follows a high-level decision-making strategy with the goal of setting planning sub-goals whereas the instant incentive refers to the low-level motion planning with the goal of executing the sub-goals. The behavior incentive biases the motion forecasting in a behavior-aware manner such that it is better suited for heterogeneous traffic.
**Behavioral Incentive \(\beta_{i}\)** models drivers' driving styles, which are deeply rooted in their _personalities_[62]. Given the observations from the previous few seconds, the behavioral incentive performs high-level decision-making, plans actions (or sub-goals), and asks, "_What is the most likely action for this driver to take next?_". The answer is encoded via \(\hat{\beta}_{i}^{t}\). This tells an agent whether it should speed up in empty traffic or slow down in dense traffic. It is also able to recognize conservative drivers and the possible need to overtake. Therefore, this incentive is able to distinguish between aggressive and conservative drivers.
**Instant Incentive \(\zeta_{i}\)** signifies drivers' instantaneous responses to proximate traffic, taking into account the positions and speeds of neighboring vehicles. Instant incentive then asks, _"How should I execute this sub-goal/high-level action/plan using my controller so that I'm safe and still on track towards my goal?"_. Instant incentive measures classical efficiency metrics defined in robotics literature such as collision avoidance (safety), distance from goal, and smoothness.
**Incentive Inference** To cater to two different incentives, we split agent \(i\)'s incentive inference function \(f_{i}\) into two distinct functions, \(f_{i,\beta}\) and \(f_{i,\zeta}\): \(\hat{\beta}_{i,j}^{t}\sim f_{i,\beta}(\cdot|h_{i,j}^{t},\hat{\beta}_{i,j}^{t-1})\) uses agent \(i\)'s historical observation \(h_{i,j}^{t}\) of opponent \(j\) up to time \(t\) and its previous estimation of opponent \(j\)'s behavioral incentive \(\hat{\beta}_{i,j}^{t-1}\) to estimate opponent \(j\)'s new behavioral incentive \(\hat{\beta}_{i,j}^{t}\) at time \(t\). \(\hat{\zeta}_{i,j}^{t}\sim f_{i,\zeta}(\cdot|o_{i,j}^{t},\hat{\beta}_{i,j}^{t}, \hat{\zeta}_{i,j}^{t-1})\) uses agent \(i\)'s observation \(o_{i,j}^{t}\) of opponent \(j\) at time \(t\), its current estimation over opponent \(j\)'s behavioral incentive \(\hat{\beta}_{i,j}^{t}\) and its previous estimation of opponent \(j\)'s
instant incentive \(\hat{\zeta}^{t-1}_{i,j}\) to estimate opponent \(j\)'s new instant incentive \(\hat{\zeta}^{t}_{i,j}\) at time \(t\). With the estimation of opponents' incentives, agent \(i\)'s policy \(a^{t}_{i}\sim\pi_{i}(\cdot|\mathbf{o}^{t}_{i},\hat{\boldsymbol{\beta}}^{t}_{i},\hat{\boldsymbol{\zeta}}^{t}_{i})\) decides its action \(a^{t}_{i}\) from its local observation, its ego incentive, and its estimations of opponents' incentives. Here, \(\hat{\boldsymbol{\beta}}^{t}_{i}\) denotes the combination of agent \(i\)'s behavioral incentive \(\beta_{i}\) and its estimations of all its opponent agents' behavioral incentives \(\{\hat{\beta}^{t}_{i,j}\}_{j=1,j\neq i}^{N}\) at time \(t\). \(\hat{\boldsymbol{\zeta}}^{t}_{i}\) denotes the combination of agent \(i\)'s instant incentive \(\zeta_{i}\) and its estimations of all its opponent agents' instant incentives \(\{\hat{\zeta}^{t}_{i,j}\}_{j=1,j\neq i}^{N}\) at time \(t\).
## 4 iplan: Methodology
We demonstrate the overall architecture of our proposed framework in Figure 1. Agents interact with the environment with continuous state space \(\mathcal{S}\). Here, we denote that an agent's state includes its ID, current position, and current velocity. An agent's observation includes the states of its neighbors within its observation scope. An agent \(i\) records its historical observations of its opponents' states for incentive inference. With historical observations \(h^{t}_{i,j}\), and intermediate observations \(\mathbf{o}^{t}_{i}\), agent \(i\) estimates opponent \(j\)'s behavioral incentive \(\beta_{j}\) and instant incentive \(\zeta_{j}\). The controller of agent \(i\) decides action \(a^{t}_{i}\) based on its local observation \(\mathbf{o}^{t}_{i}\), ego, and opponents' estimated behavioral incentives \(\hat{\boldsymbol{\beta}}^{t}_{i}\), and instant incentives \(\hat{\boldsymbol{\zeta}}^{t}_{i}\). The action space \(\mathcal{A}\) of the environment is discrete and consists of the following high-level actions: {_lane left_, _idle_, _lane right_, _faster_, _slower_} in our Heterogeneous Highway environment, or {_idle_, _up_, _down_, _left_, _right_} in our Non-cooperative Navigation environment (details in Section 5 and Appendix A), while a low-level motion controller (_e.g._, IDM model [63]) converts the high-level actions into a sequence of \(x,y\) coordinates.
### Behavioral Incentive Inference
The behavioral incentive inference module intends to estimate opponents' behavioral incentives by generating latent representations from their historical states. At time step \(t\), agent \(i\) queries a sequence of historical observations \(h^{t}_{i,j}\) for opponent \(j\) from its observation history profile as the input of the behavioral incentive inference module. For ease of computing, we truncate the full historical interaction sequence into a fixed-length sequence that includes the observation history from the previous \(t_{h}\) steps. We introduce an encoder \(\mathcal{E}_{i}\) to update opponents' behavioral incentive estimation and a decoder \(\mathcal{D}_{i}\) to predict opponents' state sequences in the next \(t_{h}\) steps with current historical observations and behavioral incentive estimation. In practice, we parameterize encoder \(\mathcal{E}_{i}\) with \(\theta_{\mathcal{E}_{i}}\), and decoder \(\mathcal{D}_{i}\) with \(\theta_{\mathcal{D}_{i}}\). Hence, the encoder \(\mathcal{E}_{i}\) approximates the behavioral incentive inference function \(\hat{\beta}^{t}_{i,j}\sim f_{i,\beta}(\cdot|h^{t}_{i,j},\hat{\beta}^{t-1}_{i,j})\).
To capture the sequential nature within opponents' state observation sequences, the encoder \(\mathcal{E}_{i}\) employs a recurrent network that processes \(h^{t}_{ij}\) as a time series. This produces a new estimate of the behavioral incentive of opponent \(j\). As insights from cognitive science suggest, the human social focus remains relatively stable [64]. Thus, we interpret the behavioral incentive inference for opponents as a gradual process, converging towards the true behavioral incentives of opponents without abrupt transitions between updates. Starting with an initial neutral estimation of opponents' behavioral latent states, agents propose new estimates for opponents' behavioral incentives at each time step. However, they employ a gentle update strategy, using an additional coefficient \(\eta\), to refine the behavioral incentive estimates. This approach allows agents to produce more accurate estimates of opponents' behavioral incentives, managing the variability between consecutive updates, which in turn ensures more stable agent policies.
\[\hat{\beta}^{t}_{i,j}=\eta\mathcal{E}_{i}(h^{t}_{i,j},\hat{\beta}^{t-1}_{i,j} )+(1-\eta)\hat{\beta}^{t-1}_{i,j}. \tag{3}\]
The decoder \(\mathcal{D}_{i}\) uses another recurrent network that concatenates agent \(i\)'s historical observations \(h^{t}_{ij}\) of opponent \(j\) with its current behavioral incentive estimation \(\hat{\beta}^{t}_{ij}\). The output is the predicted state sequence \(\hat{h}^{t+t_{h}}_{i,j}\) of opponent \(j\) from \(t\) to \(t+t_{h}\). We train our encoder and decoder with behavioral incentive inference loss \(\mathcal{J}_{\beta_{i}}\), given by an average L1-norm error between the predicted state
sequence \(\hat{h}_{i,j}^{t+t_{h}}=\mathcal{D}_{i}(h_{i,j}^{t},\hat{\beta}_{i,j}^{t})\) and the ground truth \(h_{i,j}^{t+t_{h}}\).
\[\mathcal{J}_{\beta_{i}}=\min_{\mathcal{E}_{i},\mathcal{D}_{i}}\frac{1}{Nt_{h}} \sum_{j=1}^{N}\left\|\mathcal{D}_{i}(h_{i,j}^{t},\hat{\beta}_{i,j}^{t})-h_{i,j}^ {t+t_{h}}\right\|_{1}. \tag{4}\]
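To make this module concrete, the following is a minimal PyTorch sketch of a behavioral incentive encoder–decoder, assuming GRU recurrent units and small illustrative dimensions; the class names, layer sizes, and toy tensors are ours and not the authors' released implementation. Only the soft update of Eq. (3) and the L1 reconstruction loss of Eq. (4) are taken from the text.

```python
import torch
import torch.nn as nn

class BehaviorEncoder(nn.Module):
    """Encodes an observation history h_{i,j}^t into a behavioral incentive estimate."""
    def __init__(self, obs_dim, latent_dim, hidden_dim=64):
        super().__init__()
        self.gru = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, latent_dim)

    def forward(self, history):                 # history: (batch, t_h, obs_dim)
        _, h_last = self.gru(history)            # h_last: (1, batch, hidden_dim)
        return self.head(h_last.squeeze(0))      # (batch, latent_dim)

class BehaviorDecoder(nn.Module):
    """Predicts the next t_h observations from the history and the incentive estimate."""
    def __init__(self, obs_dim, latent_dim, hidden_dim=64):
        super().__init__()
        self.gru = nn.GRU(obs_dim + latent_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, obs_dim)

    def forward(self, history, beta):            # beta: (batch, latent_dim)
        beta_seq = beta.unsqueeze(1).expand(-1, history.size(1), -1)
        out, _ = self.gru(torch.cat([history, beta_seq], dim=-1))
        return self.head(out)                    # (batch, t_h, obs_dim)

def soft_update(encoder, history, beta_prev, eta=0.1):
    """Eq. (3): gently move the stored estimate towards the encoder's new proposal."""
    return eta * encoder(history) + (1.0 - eta) * beta_prev

# Illustrative usage with random data (obs_dim=4 for x, y, vx, vy; t_h=10).
obs_dim, latent_dim, t_h = 4, 8, 10
enc, dec = BehaviorEncoder(obs_dim, latent_dim), BehaviorDecoder(obs_dim, latent_dim)
history = torch.randn(32, t_h, obs_dim)          # h_{i,j}^t for a batch of opponents
future = torch.randn(32, t_h, obs_dim)           # ground-truth states from t to t + t_h
beta = soft_update(enc, history, beta_prev=torch.zeros(32, latent_dim))
loss = nn.functional.l1_loss(dec(history, beta), future)   # Eq. (4)
loss.backward()
```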
### Instant Incentive Inference for Trajectory Prediction
The instant incentive inference module intends to estimate opponents' instant incentives from current observations of surrounding agents and their behaviors, which is used for trajectory prediction. Similar to the behavioral incentive inference, we introduce another encoder-decoder structure with encoder \(\phi_{i}\) parameterized by \(\theta_{\phi_{i}}\) and decoder \(\psi_{i}\) parameterized by \(\theta_{\psi_{i}}\). The encoder \(\phi_{i}\) approximates the instant incentive inference function \(\hat{\zeta}_{i,j}^{t}\sim f_{i,\zeta}(\cdot|o_{i,j}^{t},\hat{\beta}_{i,j}^{t},\hat{\zeta}_{i,j}^{t-1})\) from agent \(i\)'s current observations \(\mathbf{o}_{i}^{t}\), current behavioral incentive estimations \(\hat{\mathbf{\beta}}_{i}^{t}\), and previous instant incentive estimations \(\hat{\mathbf{\zeta}}_{i}^{t-1}\). The instant latent state encoder \(\phi_{i}\) uses a sequential structure with two networks. The first network is a Graph Attention Network (GAT) [65]. For agent \(i\), the GAT reads its observation \(\mathbf{o}_{i}^{t}\) at time \(t\) and the current behavioral incentive estimation \(\hat{\mathbf{\beta}}_{i}^{t}\). The output of the GAT is an undirected graph \(\mathcal{G}_{i}^{t}\) that represents instantaneous interactions among agents at time \(t\). Every node in \(\mathcal{G}_{i}^{t}\) represents an agent in the environment, while the attention weight over the edge between node \(i\) and node \(j\) encodes the interaction between agent \(i\) and \(j\) and its relative importance. The second part of the encoder \(\phi_{i}\) is a recurrent neural network (RNN) to extract the temporal information from interaction history. The RNN uses the graphical representation \(\mathcal{G}_{i}^{t}\) of interactions as the input and previous instant incentive estimation \(\hat{\mathbf{\zeta}}_{i}^{t-1}\) as the hidden state. The output hidden state of this RNN \(\hat{\mathbf{\zeta}}_{i}^{t}\) is the updated instant incentive estimation over all opponents of agent \(i\).
The decoder \(\psi_{i}\) predicts all opponents' trajectories over a pre-defined length \(t_{p}\) from instant incentive estimations \(\hat{\mathbf{\zeta}}_{i}^{t}\). We use another RNN that takes agent \(i\)'s current observation \(\mathbf{o}_{i}^{t}\) as the input and its current instant incentive estimation \(\hat{\mathbf{\zeta}}_{i}^{t}\) as the hidden state. The first output of this RNN is the prediction of opponents' states \(\hat{\mathbf{o}}_{i}^{t+1}\) at the next time step \(t+1\). Then we use \(\hat{\mathbf{o}}_{i}^{t+1}\) as the new input
Figure 1: **Intent-aware planning in heterogeneous traffic:** At time \(t\), we show current vehicle states in solid colors: ego vehicles \(i\) (solid yellow vehicle), aggressive vehicles (solid red), conservative vehicles (solid green), and neutral vehicles (solid blue). The future states of each vehicle are shown with dotted colors. At time step \(t\), the ego-agent observes nearby vehicles and infers their behavioral and instant incentives. The behavioral incentive inference (red block) uses agent \(i\)’s historical observations \(\mathbf{h}_{i}^{t}\) of other vehicle states (stacked gray boxes of current observations, \(\mathbf{o}_{i}^{t}\)) to infer their behavioral incentives and predict future state sequences with behavioral incentive inferences. The instant incentive inference (blue block) uses agent \(i\)’s current observations \(\mathbf{o}_{i}^{t}\) (single gray box) and its inference of others’ behavioral incentives \(\hat{\mathbf{\beta}}_{i}^{t}\) (single red box) to infer other vehicles’ instant incentives \(\hat{\mathbf{\zeta}}_{i}^{t}\) for trajectory prediction. Agent \(i\)’s controller (yellow block) selects its action \(a_{i}^{t}\) with its current observations \(\mathbf{o}_{i}^{t}\) (gray) and its inference of others’ behavioral incentives \(\hat{\mathbf{\beta}}_{i}^{t}\) (red) and instant incentives \(\hat{\mathbf{\zeta}}_{i}^{t}\) (blue).
of the RNN and iteratively predict opponents' states. The sequence of opponents' state predictions \(\{\hat{\mathbf{o}}_{i}^{t+k}\}_{k=1}^{t_{p}}\sim\psi_{i}(\mathbf{o}_{i}^{t}, \hat{\boldsymbol{\zeta}}_{i}^{t})\) is the trajectory prediction from \(t+1\) to \(t+t_{p}\) for all opponents of agent \(i\). We train our encoder and decoder with the instant incentive inference loss \(\mathcal{J}_{\zeta_{i}}\), given by an average L1-norm error between predicted trajectories \(\{\hat{\mathbf{o}}_{i}^{t+k}\}_{k=1}^{t_{p}}\) and ground truth trajectories \(\{\mathbf{o}_{i}^{t+k}\}_{k=1}^{t_{p}}\).
\[\mathcal{J}_{\zeta_{i}}=\min_{\phi_{i},\psi_{i}}\frac{1}{Nt_{p}}\sum_{j=1}^{N} \sum_{k=0}^{t_{p}-1}\left\|\psi_{i}(\mathbf{o}_{i}^{t},\phi_{i}(\mathbf{o}_{i}^ {t},\hat{\boldsymbol{\beta}}_{i}^{t},\hat{\boldsymbol{\zeta}}_{i}^{t-1}))- \mathbf{o}_{i}^{t+k+1}\right\|_{1} \tag{5}\]
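The sketch below illustrates the instant incentive inference in the same spirit, assuming a hand-rolled single-head attention layer as a stand-in for the GAT of [65] (to keep the example self-contained) and GRU cells for the recurrent parts; all class names, dimensions, and toy tensors are illustrative assumptions rather than the paper's actual architecture or hyperparameters.

```python
import torch
import torch.nn as nn

class SimpleGraphAttention(nn.Module):
    """A single-head stand-in for the GAT layer: attends over all observed agents."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)
        self.attn = nn.Linear(2 * out_dim, 1)

    def forward(self, x):                        # x: (batch, n_agents, in_dim)
        h = self.proj(x)
        n = h.size(1)
        hj = h.unsqueeze(2).expand(-1, -1, n, -1)
        hk = h.unsqueeze(1).expand(-1, n, -1, -1)
        logits = self.attn(torch.cat([hj, hk], dim=-1)).squeeze(-1)
        weights = torch.softmax(logits, dim=-1)  # pairwise attention weights (batch, n, n)
        return torch.bmm(weights, h)             # aggregated node features

class InstantEncoder(nn.Module):
    """phi_i: current observations + behavioral estimates -> instant incentive zeta."""
    def __init__(self, obs_dim, beta_dim, hidden_dim=64):
        super().__init__()
        self.gat = SimpleGraphAttention(obs_dim + beta_dim, hidden_dim)
        self.rnn = nn.GRUCell(hidden_dim, hidden_dim)

    def forward(self, obs, beta, zeta_prev):     # obs: (batch, n, obs_dim)
        g = self.gat(torch.cat([obs, beta], dim=-1))
        b, n, d = g.shape
        return self.rnn(g.reshape(b * n, d), zeta_prev.reshape(b * n, d)).reshape(b, n, d)

class InstantDecoder(nn.Module):
    """psi_i: roll predicted observations forward for t_p steps (iterative one-step)."""
    def __init__(self, obs_dim, hidden_dim=64):
        super().__init__()
        self.rnn = nn.GRUCell(obs_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, obs_dim)

    def forward(self, obs, zeta, t_p):
        b, n, _ = obs.shape
        o, h, preds = obs.reshape(b * n, -1), zeta.reshape(b * n, -1), []
        for _ in range(t_p):
            h = self.rnn(o, h)
            o = self.head(h)
            preds.append(o.reshape(b, n, -1))
        return torch.stack(preds, dim=1)         # (batch, t_p, n, obs_dim)

# Illustrative usage with random tensors (6 observed neighbours).
obs_dim, beta_dim, hidden, t_p = 4, 8, 64, 5
enc, dec = InstantEncoder(obs_dim, beta_dim, hidden), InstantDecoder(obs_dim, hidden)
obs = torch.randn(16, 6, obs_dim)
beta = torch.randn(16, 6, beta_dim)
zeta = enc(obs, beta, zeta_prev=torch.zeros(16, 6, hidden))
truth = torch.randn(16, t_p, 6, obs_dim)
loss = nn.functional.l1_loss(dec(obs, zeta, t_p), truth)   # Eq. (5)
loss.backward()
```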
### Implementation
The pseudocode of our algorithm is provided in Algorithm 1. For each environmental step \(t\) in the execution (line \(4\)), agent \(i\) gathers its current and historical observations \(\mathbf{o}^{t}_{i}\) and \(\mathbf{h}^{t}_{i}\) (line \(6\)), and uses this information to infer their opponents' behavioral incentives \(\mathbf{\beta}^{t}_{i}\) and instant incentives \(\mathbf{\zeta}^{t}_{i}\) (lines \(7\) and \(8\)). After that, agent \(i\)'s policy \(\pi_{i}\) selects action \(a^{t}_{i}\) (line \(9\)). The backbone algorithm for each agent's controller is PPO [66], which includes a policy network \(\pi_{i}\) and a critic network \(Q_{i}\). For each gradient step in training, agent \(i\) updates its policy \(\pi_{i}\) and critic \(Q_{i}\) (line \(15\)) with sampled trajectories, computes the behavioral incentive inference loss \(\mathcal{J}_{\beta_{i}}\) (line \(16\)) to update its behavioral incentive inference encoder \(\theta_{\mathcal{E}_{i}}\) and decoder \(\theta_{\mathcal{D}_{i}}\) with \(\mathcal{J}_{\beta_{i}}\), and uses instant incentive inference loss \(\mathcal{J}_{\zeta_{i}}\) (line \(17\)) to update its instant incentive inference encoder \(\theta_{\phi_{i}}\) and decoder \(\theta_{\psi_{i}}\).
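A high-level sketch of this execution loop is shown below; the stub modules, dimensions, and random observations are placeholders, and only the control flow (observe, infer behavioral and instant incentives, act with a per-agent PPO-style policy, then update) mirrors Algorithm 1 as described above.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

# Minimal stand-ins so the control flow below runs; in the full method these are the
# behavioral/instant inference networks of Sections 4.1-4.2 and a PPO actor-critic.
class StubModule(nn.Module):
    def __init__(self, out_dim):
        super().__init__()
        self.out_dim = out_dim
    def forward(self, *inputs):
        return torch.zeros(inputs[0].shape[0], self.out_dim)

N_AGENTS, OBS_DIM, N_ACTIONS, LATENT, T = 4, 8, 5, 16, 20

class Agent:
    """One fully decentralized agent with its own inference modules, actor, and critic."""
    def __init__(self):
        self.behavior_enc = StubModule(LATENT)                    # E_i (Section 4.1)
        self.instant_enc = StubModule(LATENT)                     # phi_i (Section 4.2)
        self.actor = nn.Linear(OBS_DIM + 2 * LATENT, N_ACTIONS)   # PPO policy pi_i
        self.critic = nn.Linear(OBS_DIM + 2 * LATENT, 1)          # PPO critic Q_i

    def act(self, obs, history):
        beta = self.behavior_enc(history)        # infer behavioral incentives (line 7)
        zeta = self.instant_enc(obs, beta)       # infer instant incentives (line 8)
        logits = self.actor(torch.cat([obs, beta, zeta], dim=-1))
        return Categorical(logits=logits).sample()                # select action (line 9)

agents = [Agent() for _ in range(N_AGENTS)]
for t in range(T):                               # environment steps (line 4)
    for agent in agents:
        obs = torch.randn(1, OBS_DIM)            # local observation o_i^t (line 6)
        history = torch.randn(1, OBS_DIM)        # observation history h_i^t (line 6)
        action = agent.act(obs, history)
# After collecting trajectories, each agent independently updates pi_i and Q_i (line 15)
# and its two encoder-decoder pairs using J_beta and J_zeta (lines 16-17).
```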
## 5 Empirical Results and Discussion
We perform experiments over two non-cooperative environments, Non-Cooperative Navigation [27] and Heterogeneous Highway [21]. Experiments are designed from two perspectives. The first is to compare our approach's performance with other CTDE and DTDE MARL approaches in non-cooperative environments. In this paper, we compare our method with two CTDE MARL baselines, QMIX [67] and MAPPO [68], and one DTDE MARL baseline, IPPO [69]. QMIX uses a central network to assign credits among agents with respect to their Q-values and global states. MAPPO uses a central critic that reads the observations of all agents and generates a critic value to update distributed actors. IPPO uses a distinct PPO policy to control each agent without any centralized training, weight-sharing, communication, or inference module. The other perspective is to show the necessity of instant and behavioral incentive inference, especially under highly heterogeneous scenarios. We further design two scenarios with different heterogeneity levels in both environments and perform ablation studies over two variants of our method, including iplan-BM, a vanilla IPPO controller without the instant incentive inference module, and iplan-GAT, a vanilla IPPO controller without the behavioral incentive inference module. Details regarding the experiment environment design are given in Appendix A. Further details regarding implementation, visual results, module design, and hyper-parameter study are given in Appendices B, C, D, and E, respectively.
### Environments
**Non-Cooperative Navigation**. Non-Cooperative Navigation is an adaptation of the Cooperative Navigation scenario in the Multi-agent Particle Environment (MPE) [27]. This environment involves \(n\) agents independently covering \(n\) landmarks. Agents aim to choose, reach, and remain at landmarks while avoiding conflict. Each agent, at every step, observes other agents' and landmarks' identifiers, positions, and velocities, selects actions from \(\{\textit{idle}\), _up_, _down_, _left_, _right_\(\}\), and gets a reward based on its distance to the closest landmark. Agents face a \(-5\) penalty if a collision happens, earn \(10\) upon reaching a landmark, and receive a reward of \(100\) if all agents reach landmarks without conflicts. We run experiments over two scenarios. The _easy_ scenario has \(3\) controllable agents varying in their sizes and kinematics, and the _hard_ scenario adds, in addition to the \(3\) controllable agents, an uncontrollable agent that takes random actions.
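For illustration, a per-step reward consistent with the rules above could be written as follows; the exact functional form of the distance-based shaping term, the function name, and the toy landmark positions are assumptions of ours rather than the environment's actual implementation.

```python
import numpy as np

def navigation_reward(agent_pos, landmarks, collided, reached, all_reached,
                      reach_bonus=10.0, collision_penalty=-5.0, team_bonus=100.0):
    """Illustrative per-step reward following the rules described above."""
    # Distance-based shaping: being closer to the nearest landmark is better (assumed form).
    reward = -np.min(np.linalg.norm(landmarks - agent_pos, axis=1))
    if collided:
        reward += collision_penalty
    if reached:
        reward += reach_bonus
    if all_reached:
        reward += team_bonus      # all agents reached landmarks without conflicts
    return reward

# Example: one agent at the origin and three landmarks.
landmarks = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 3.0]])
print(navigation_reward(np.zeros(2), landmarks, collided=False,
                        reached=False, all_reached=False))
```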
**Heterogeneous Highway**. Heterogeneous Highway is our enhanced multi-agent iteration of the Highway-Env's Highway scenario [21]. It replicates rush-hour traffic on a multi-lane highway with diverse driving behaviors. The MARL-controlled vehicles aim to navigate safely at speeds between \(20\) and \(30\)\(m/s\) amidst varied traffic. Uncontrollable vehicles fall under three behavior-driven models, adapted from [70]: _Normal_, _Aggressive_, and _Conservative_, distinguished by risk-taking and general speed. Each agent observes nearby vehicles' ID, position, and velocity, choosing actions from \(\{\textit{lane left}\), _idle_, _lane right_, _faster_, _slower_\(\}\). Rewards are given for collision-free navigation, maintaining speed, and using the rightmost lane. We perform experiments over two scenarios with different compositions of behavior-driven vehicles. The _mild_ scenario has \(80\%\)_Normal_, \(10\%\)_Aggressive_, and \(10\%\)_Conservative_ vehicles. The _chaotic_ scenario has \(40\%\)_Normal_, \(30\%\)_Aggressive_, and \(30\%\)_Conservative_ vehicles.
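The two traffic mixes can be summarised in a small configuration sketch; the dictionary layout and the sampling helper are illustrative conveniences and not part of Highway-Env's API.

```python
import random

# Scenario compositions as described above (fractions of uncontrolled vehicles).
SCENARIOS = {
    "mild":    {"normal": 0.8, "aggressive": 0.1, "conservative": 0.1},
    "chaotic": {"normal": 0.4, "aggressive": 0.3, "conservative": 0.3},
}

def sample_behaviors(scenario, n_vehicles, seed=0):
    """Draw behavior types for the uncontrolled vehicles in a scenario."""
    mix = SCENARIOS[scenario]
    rng = random.Random(seed)
    return rng.choices(list(mix), weights=list(mix.values()), k=n_vehicles)

print(sample_behaviors("chaotic", n_vehicles=10))
```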
### Results on Non-Cooperative Navigation
Figure 1(a) compares episodic rewards in _easy_ and _hard_ scenarios. iplan outperforms other methods with low deviation. iplan-GAT and vanilla IPPO have larger deviations, indicating the benefit of behavioral incentive inference in stabilizing strategies. QMIX and MAPPO perform poorly with negative episodic rewards in both scenarios. In Non-Cooperative Navigation, agents are attracted to the closest landmark at each time step, allowing multiple agents to target the same landmark simultaneously. As there is no consensus in destination assignment, agents must observe and infer others' strategies to modify their own. This reliance on observations and inference contributes to the superior performance of DTDE MARL approaches over CTDE MARL approaches in Non-Cooperative Navigation.
### Results on Heterogeneous Highway
Figure 1(b) compares episodic rewards in the _mild_ and _chaotic_ traffic scenarios of the Heterogeneous Highway. We find that iplan has the best episodic reward in both the _mild_ and _chaotic_ traffic. iplan-GAT, iplan-BM, and vanilla IPPO have similar performances in _mild_ traffic scenarios, but iplan-GAT is slightly worse than iplan in the _chaotic_ traffic. Notably, two CTDE MARL baselines have much lower episodic rewards than DTDE MARL approaches in _chaotic_ traffic, and QMIX has a significant collapse compared with its performance in _mild_ traffic.
In addition to the episodic reward curve comparison, we evaluate our method and baselines over several navigation metrics, including:
**Episodic Average Speed**. Agents' average speed during their lifetime in an episode. Agents are encouraged to drive faster when driving between \(20\) and \(30\)\(m/s\).
**Average Survival Time**. The average time steps passed over all agents before they collide or reach the end of this episode. Longer survival time reflects agents' better ability to avoid collisions.
**Success Rate**. The percentage of vehicles that still stay collision-free when an episode ends.
Table 1 shows navigation metrics for _mild_ and _chaotic_ traffic. High speed (closer to \(30\)) correlates with low survival time and success rate. This is
| Traffic | Approach | Avg. Speed (\(m/s\)) | Avg. Survival Time (# Time Steps) \(\uparrow\) | Success Rate (%) \(\uparrow\) |
|---|---|---|---|---|
| _Mild_ | QMIX [67] | — | — | — |
| _Mild_ | MAPPO [68] | — | — | — |
| _Mild_ | IPPO [69] | — | — | — |
| _Mild_ | iplan-GAT | — | — | — |
| _Mild_ | iplan-BM | — | — | — |
| _Mild_ | iplan | \(22.91\pm 0.15\) | \(70.56\pm 3.81\) | \(\mathbf{68.44\pm 5.86}\) |
| _Chaotic_ | QMIX [67] | \(27.06\pm 0.47\) | \(39.38\pm 2.64\) | \(19.69\pm 3.72\) |
| _Chaotic_ | MAPPO [68] | \(\mathbf{29.46\pm 0.05}\) | \(42.31\pm 2.43\) | \(16.25\pm 3.76\) |
| _Chaotic_ | IPPO [69] | \(22.28\pm 0.13\) | \(67.01\pm 3.64\) | \(42.50\pm 7.12\) |
| _Chaotic_ | iplan-GAT | \(20.91\pm 0.13\) | \(71.24\pm 3.83\) | \(61.88\pm 6.41\) |
| _Chaotic_ | iplan-BM | \(21.65\pm 0.28\) | \(63.20\pm 3.51\) | \(35.31\pm 5.66\) |
| _Chaotic_ | iplan | \(21.61\pm 0.16\) | \(\mathbf{76.20\pm 3.33}\) | \(\mathbf{67.81\pm 5.91}\) |

Table 1: **Navigation metrics in Heterogeneous Highway:** Metrics are averaged over \(64\) episodes with \(0.95\) confidence. iplan outperforms all other approaches, achieving the highest success rate and survival time, though it tends to be conservative in its average speed.
Figure 2: Comparison of average episodic reward in the Non-Cooperative Navigation and Heterogeneous Highway environments. **Conclusion:** iplan (orange) outperforms CTDE approaches like QMIX (blue) and MAPPO (brown) as well as IPPO (green) in heterogeneous traffic environments.
because aggressive reward-exploiting policies increase collision risk, reducing long-term reward. Approaches like iplan and iplan-GAT drive slower (closer to \(20\)) for safety and higher episodic reward. Instant incentive inference improves episodic reward and success rates, especially in _chaotic_ traffic. iplan maintains similar success rates but a higher average speed in _mild_ traffic, and becomes more conservative and more reliant on its inference modules as traffic heterogeneity increases. Comparing iplan and iplan-GAT, iplan drives faster in both scenarios for higher episodic reward. iplan-GAT has a longer survival time in _mild_ traffic, but the opposite holds in _chaotic_ traffic. This indicates that agents rely more on their instant incentive inference in _mild_ traffic, when opponents' trajectories are more predictable, and more on their behavioral incentive inference in _chaotic_ traffic due to aggressive vehicles' unpredictable behaviors. QMIX performs well in _mild_ traffic but poorly in _chaotic_ traffic (success rate \(<20\%\)) due to the effect of environmental heterogeneity on its credit assignment.
### Discussion
**Centralized versus Decentralized Training Regime**. In this work, we operated in the decentralized training regime, based on the assumption that agents should learn navigation policies in a DTDE manner without centralization in training. Empirically, we find that CTDE MARL approaches perform worse as the environmental heterogeneity increases due to the absence of consensus among agents in heterogeneous environments. On the other hand, the awareness of opponents' strategies becomes more important in agents' decision-making when the environment is heterogeneous, especially the awareness of agents' instant reactions to surroundings. This need for increased awareness makes intent-aware distributed MARL algorithms perform better in these environments.
To further investigate the empirical performance of CTDE and DTDE approaches under our problem setting, we conduct experiments integrating the two incentive inference modules of iplan with two CTDE approaches, QMIX and MAPPO, and compare their performance with iplan and other baselines. We include the experiment details and results in Appendix G.3. Results show that integrating the iplan inference modules into CTDE approaches does not yield better performance than the current DTDE version of iplan in the _chaotic_ scenario of the Heterogeneous Highway.
**Decoupled Incentive Inference**. Individually, the incentives yield some benefit over a baseline controller. For example, we find that both the behavior and instant incentive inference modules individually help to achieve a higher reward, especially in more heterogeneous environments (See Figure 2). However, our system works best when both incentives are jointly activated, for example in Table 1, we find that the success rate drops significantly for iplan-GAT, compared to iplan (\(61.88\%\) versus \(67.81\%\)). This clearly indicates autonomous vehicles need the behavior incentive module to survive in the more heterogeneous chaotic traffic scenario.
## 6 Conclusion, Limitations, and Future Work
This paper presents a novel intent-aware distributed multi-agent reinforcement learning algorithm tailored for planning and navigation in heterogeneous traffic. We model two distinct incentives, the behavioral incentive and the instant incentive, for agents' strategies. Our approach enables agents to infer their opponents' behavior incentives and integrate this inferred information into their decision-making and motion-planning processes. Results in the two environments we use, Non-Cooperative Navigation and Heterogeneous Highway, are promising, with better episodic rewards and navigation metrics than the baselines. Our research has some limitations:
First, our evaluation of the proposed approach has been conducted exclusively within a simulation environment. Such simulations typically leverage a low-dimensional observation space, compared to the high-dimensional spaces in real-world autonomous driving scenarios, such as those using image-based observations. Predicting the full state of a multi-agent system within these real-world contexts could prove challenging; agents might inaccurately reconstruct or predict states, leading to potentially significant and hazardous mistakes. Thorough evaluation and refinement of our methodology will be necessary in more intricate traffic scenarios and with real-world vehicle trajectories.
Second, given the vast scope of traffic scenarios and the varied spectrum of driving behaviors, our approach might fail to generalize in real-world applications. In other words, our agents might confront unfamiliar strategies they have not encountered during training. Such unforeseen generalization issues could negatively impact system performance when they arise. As a potential remedy, future work in this direction could incorporate a pre-trained behavior model using datasets that capture a wide range of driving behaviors. Agents could then fine-tune this model locally based on their activities. This adjustment might mitigate the adverse effects that arise when confronting unfamiliar agents, thereby enhancing the robustness of our approach.
Third, we explored two incentives to represent and infer the objectives of other drivers to inform the ego vehicle's motion planning. Our findings indicate that in diverse, dense, and heterogeneous settings, collectively inferring these incentives improves the performance of the learning approach. However, in certain scenarios, such as more straightforward or mixed conditions, the necessity of dual incentives remains ambiguous, _i.e._, a single incentive may be adequate. Future research could delve deeper into the advantages of specific representational choices for incentive or inference models across both simple and mixed contexts.
Fourth, while our contributions are substantiated through empirical evidence, they lack a solid theoretical foundation. The domain of theoretical research in MARL is nascent, and rigorous safety assurances are paramount for autonomous driving applications. Ensuing research efforts should aim to establish theoretical safety and convergence bounds for our approach.
In addition to addressing these identified limitations, we are enthusiastic about assessing our algorithm's performance under even more demanding traffic conditions. This includes varied weather patterns, nighttime driving conditions, and scenarios where drivers might not adhere strictly to traffic regulations.
|
2306.09105 | Performance Evaluation and Comparison of a New Regression Algorithm | In recent years, Machine Learning algorithms, in particular supervised
learning techniques, have been shown to be very effective in solving regression
problems. We compare the performance of a newly proposed regression algorithm
against four conventional machine learning algorithms namely, Decision Trees,
Random Forest, k-Nearest Neighbours and XG Boost. The proposed algorithm was
presented in detail in a previous paper but detailed comparisons were not
included. We do an in-depth comparison, using the Mean Absolute Error (MAE) as
the performance metric, on a diverse set of datasets to illustrate the great
potential and robustness of the proposed approach. The reader is free to
replicate our results since we have provided the source code in a GitHub
repository while the datasets are publicly available. | Sabina Gooljar, Kris Manohar, Patrick Hosein | 2023-06-15T13:01:16Z | http://arxiv.org/abs/2306.09105v1 | # Performance Evaluation and Comparison of a New Regression Algorithm
###### Abstract
In recent years, Machine Learning algorithms, in particular supervised learning techniques, have been shown to be very effective in solving regression problems. We compare the performance of a newly proposed regression algorithm against four conventional machine learning algorithms namely, Decision Trees, Random Forest, \(k\)-Nearest Neighbours and XG Boost. The proposed algorithm was presented in detail in a previous paper but detailed comparisons were not included. We do an in-depth comparison, using the Mean Absolute Error (MAE) as the performance metric, on a diverse set of datasets to illustrate the great potential and robustness of the proposed approach. The reader is free to replicate our results since we have provided the source code in a GitHub repository while the datasets are publicly available.
Random Forest, Decision Tree, \(k\)-NN, Euclidean Distance, XG Boost, Regression
## 1 Introduction
Machine Learning algorithms are regularly used to solve a plethora of regression problems. The demand for these algorithms has increased significantly due to the push towards digitalisation, automation and analytics. Traditional techniques such as Random Forest, Decision Trees and XG Boost have been integral in various fields such as banking, finance, healthcare and engineering. Technology is always evolving and technological advancements are driven by factors such as human curiosity, problem-solving and the desire for increased efficacy and reliability. Researchers are constantly working on improving these existing methods as well as exploring new improved strategies as can be seen in (Hosein, 2022). This approach uses a distance metric (Euclidean distance) and a weighted average of the target values of all training data points to predict the target value of a test sample. The weight is inversely proportional to the distance between the test point and the training point, raised to the power of a parameter \(\kappa\). In our paper, we investigate the performance of this novel approach and several well-established machine learning algorithms, namely XG Boost, Random Forest, Decision Tree and \(k\)-NN, using the Mean Absolute Error (MAE) as the performance metric. We intend to showcase the potential of this new algorithm to solve complex regression tasks across diverse datasets. In the next section, we describe related work and then the theory of the proposed approach. Afterwards, we present and discuss the findings, including any issues encountered. Finally, we advocate that the proposed approach may be robust and efficient, making it extremely beneficial to the field.
## 2 Related Work and Contributions
In this section, we summarize the various regression techniques we considered and then discuss differences with the proposed approach. Our contribution, which is a detailed comparison, is then outlined.
### Decision Tree
Decision Tree is a supervised machine learning algorithm that uses a set of rules to make decisions. The Decision Tree algorithm starts at the root node, where it evaluates the input features and selects the best feature to split the data (Quinlan, 1986). The data is split in such a way that it minimises some metric that quantifies the difference between the actual values and the predicted values, such as the Mean Squared Error or Sum of Squared Errors. A feature and a threshold value are chosen that best divide the data into two groups. The data is split recursively into two subsets until a stopping condition is met, such as having too few samples in a node. Once the decision tree is constructed, predictions are made by traversing from the root node to a leaf node. The predicted value is calculated as the mean of the target values of the training samples associated with that leaf node.
### Random Forest
Random Forest builds decision trees on different samples and then averages their outputs for regression tasks (Breiman, 2001). It works on the principle of an ensemble method called Bagging. Ensemble learning combines multiple models and uses the set of models, rather than an individual model, to make predictions. Bagging, also known as Bootstrap Aggregation, selects random samples from the original dataset. Each model is built from samples drawn from the original data with replacement. An individual decision tree is constructed for each sample, and each tree then produces its own output. These outputs are numerical values. The final output is then calculated as the average of these values, which is known as aggregation.
### _k_-Nearest Neighbours
_k_-Nearest Neighbours (_k_NN) is a supervised machine learning algorithm that is used to solve both classification and regression tasks. First, the number of neighbours (_k_) used when making predictions is chosen. The algorithm then calculates the (Euclidean) distances between a new query and the existing data, selects the _k_ training samples closest to the query, and averages their target values. This average is the predicted value.
### XGBoost
XGBoost (eXtreme Gradient Boosting) is an ensemble learning algorithm that combines the output of weak learners (usually decision trees) to make more accurate predictions (Chen and Guestrin, 2016). New weak learners are added to the model iteratively with each tree aiming to correct the errors made by the previous learners. The training process is stopped when there is no significant improvement in a predefined number of iterations.
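For concreteness, the four baselines described above can be instantiated with scikit-learn and the xgboost package as sketched below; the hyperparameters and the toy data are illustrative and not necessarily the settings used in our experiments.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from xgboost import XGBRegressor

baselines = {
    "Decision Tree": DecisionTreeRegressor(random_state=0),
    "Random Forest": RandomForestRegressor(n_estimators=100, random_state=0),
    "k-NN": KNeighborsRegressor(n_neighbors=5),
    "XG Boost": XGBRegressor(n_estimators=100, random_state=0),
}

# Toy data standing in for one of the UCI datasets.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + rng.normal(scale=0.1, size=200)

for name, model in baselines.items():
    model.fit(X[:150], y[:150])
    mae = np.mean(np.abs(model.predict(X[150:]) - y[150:]))
    print(f"{name}: MAE = {mae:.3f}")
```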
### New Algorithm
Regression models enable decision-making in a wide range of applications such as finance, healthcare, education and engineering. It is imperative that these regression models are precise and robust so that better decisions can be made in these fields. While there are various popular machine learning algorithms for solving regression tasks, we introduce a new regression model that shows high accuracy and robustness, with the aim of improving real-world applications. The core of the approach is similar to _k_-NN, but instead of using samples in a neighbourhood, all samples are used and closer samples are weighted more heavily than those further away. In this case, there is no parameter \(k\) to specify, but we do introduce a parameter \(\kappa\) that dictates the rate of decay of the weighting.
## 3 Proposed Approach
The proposed approach was originally designed to determine a suitable insurance policy premium (Hosein, 2022). Specifically, (Hosein, 2022) noted that as personalisation increases (i.e., more features), predictions become less robust due to the reduction in the number of samples per feature, especially in smaller datasets. His main goal was to achieve an optimal balance between personalisation and robustness. Instead of using the samples available for each feature, his algorithm computes the weighted average of the target variable using all samples in the dataset. This algorithm uses the Euclidean distance metric and a hyperparameter \(\kappa\), which controls the influence of the distance (through the weights) between points in the data. Another aspect is that the same unit of distance is used for each feature, which allows one, for example, to compare a distance in the gender feature with a distance in the age feature.
The \(\kappa\) parameter introduced in (Hosein, 2022) is used as an exponent in the weighting formula, where the weights are inversely proportional to the distance between data points raised to the power of kappa. When \(\kappa\) is large, the influence of points further away from the test point decreases quickly since their distance raised to a large power becomes very large which in turn makes the weight very small. However, when \(\kappa\) is small, the influence of points further away decreases slowly since the distance raised to a small power results in a relatively smaller value which then makes the inverse weight larger.
The algorithm firstly normalises the ordinal features. Then the prediction is done in two parts. For example, say we have a single categorical feature _Gender_ with two values, _Male_ and _Female_. In the first stage, we compute the mean for each feature value over all the training samples. That is, the average target value for all females \(\mu_{Gender,Female}\) and the average target value for all males \(\mu_{Gender,Male}\). With these means, the distance \(d\) between a _Female_ and a _Male_ is the distance between \(\mu_{Gender,Female}\) and \(\mu_{Gender,Male}\). The second stage computes the prediction for a test
sample. For a given test sample \(i\), its prediction is the weighted average of the target value over all training samples. These weights are computed as \(\frac{1}{(1+d[i,j])^{\kappa_{2}}}\), where \(d[i,j]\) is the Euclidean distance between the test sample \(i\) and the training sample \(j\).
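The two stages can be illustrated with a minimal NumPy sketch of the _Gender_ example above; the toy target values and the choice \(\kappa_{2}=2\) are ours.

```python
import numpy as np

kappa2 = 2.0

# Toy training data: one categorical feature (Gender) and a target value.
gender = np.array(["Female", "Male", "Female", "Male", "Female"])
target = np.array([100.0, 60.0, 110.0, 70.0, 95.0])

# Stage 1: mean target per category defines each sample's coordinate in feature space.
mu = {v: target[gender == v].mean() for v in np.unique(gender)}
coords_train = np.array([mu[v] for v in gender])

# Stage 2: distance-weighted average over *all* training samples.
def predict(test_value):
    d = np.abs(coords_train - mu[test_value])        # distance in the 1-D feature space
    w = 1.0 / (1.0 + d) ** kappa2
    return np.sum(w * target) / np.sum(w)

print(predict("Female"), predict("Male"))
```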
Numerical features pose an interesting challenge because they can have a wider (potentially infinite) range of values. For example, consider the feature \(Age\). Compared to _Gender_, this feature can easily span 40 values (e.g., 18 to 58) instead of two. Thus, for the same training set, the number of samples per age will be low, which implies that the means (i.e., \(\mu_{Age,20}\), \(\mu_{Age,21}\), \(\mu_{Age,30}\), etc.) used to compute distances (and hence the weights) may not be robust. Additionally, we may encounter some unique values in the testing data set that are not in the training data set or vice versa. Generally, categorical variables do not encounter these issues because the number of samples per category value is sufficient. In order to solve this problem for numerical features, we impute a mean value for each unique feature value in both the training and testing data sets.
These imputed means are calculated based on the distances between the attribute value and all training samples in the feature space. This distance assists the algorithm in determining which training samples are most relevant to the test point. Then the target values for these training samples are combined, using a weighted average similar to before. The weights are computed as \(\frac{1}{(1+d_{f}[i,j])^{\kappa_{1}}}\), where \(d_{f}[i,j]\) is the distance between the numerical feature values in test sample \(i\) and training sample \(j\). For example, say \(f\) = \(Age\), test sample \(i\)'s age is 30, and training sample \(j\)'s age is 40; then \(d_{Age}[i,j]=|30-40|=10\).
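A minimal sketch of this imputation step for a numerical feature such as \(Age\) is shown below; the toy values and the choice of \(\kappa_{1}\) are illustrative.

```python
import numpy as np

kappa1 = 2.0
train_age = np.array([20.0, 25.0, 30.0, 40.0, 55.0])
train_target = np.array([300.0, 280.0, 250.0, 220.0, 400.0])

def imputed_mean(age):
    """Weighted mean target for an arbitrary age, seen or unseen in training."""
    d = np.abs(train_age - age)                      # d_f[i, j] for the feature f = Age
    w = 1.0 / (1.0 + d) ** kappa1
    return np.sum(w * train_target) / np.sum(w)

# Also works for values that appear only in the test set, e.g. age 33.
print(imputed_mean(30.0), imputed_mean(33.0))
```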
This modification means our proposed approach uses two kappa values: one for the pre-processing (i.e., \(\kappa_{1}\)) and one for predicting (i.e., \(\kappa_{2}\)). We determine the optimal combination of kappa values \((\kappa_{1},\kappa_{2})\) that minimises the error. According to (Hosein, 2022), as \(\kappa_{2}\) increases, the error decreases up to a certain point and then the error increases after this point. Therefore, an optimal \(\kappa_{2}\) can be found that minimizes the error.
We define a range of values for both the pre-processing and predicting parts. The initial range is determined through a trial and error process. We observe the MAE and adjust these values if needed. However, note that while the initial range of kappa values involves some trial and error, the process of finding the optimal combination within the range is essentially a grid search which is systematic and data-driven and ensures that the model is robust. Since the algorithm uses all the training data points in its prediction, it will be robust for small data sets or where there are not enough samples per category of a feature. The Pseudo code in Figure 1 summarizes the steps of the proposed algorithm.
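To make the tuning procedure concrete, the following toy grid search selects \((\kappa_{1},\kappa_{2})\) by validation MAE on a single synthetic numerical feature; the grid, data, and hold-out split are illustrative and not the settings used for the UCI datasets.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(18, 60, size=120)                    # one numerical feature (e.g. Age)
y = 0.05 * (x - 40) ** 2 + rng.normal(scale=1.0, size=x.size)
x_tr, y_tr, x_va, y_va = x[:90], y[:90], x[90:], y[90:]

def predict(query, k1, k2):
    # Stage 1: imputed mean (feature-space coordinate) for any feature value.
    def mu(v):
        w = 1.0 / (1.0 + np.abs(x_tr - v)) ** k1
        return np.sum(w * y_tr) / np.sum(w)
    coords_tr = np.array([mu(v) for v in x_tr])
    # Stage 2: distance-weighted average of all training targets.
    w = 1.0 / (1.0 + np.abs(coords_tr - mu(query))) ** k2
    return np.sum(w * y_tr) / np.sum(w)

best = None
for k1 in [0.5, 1, 2, 4, 8]:
    for k2 in [0.5, 1, 2, 4, 8]:
        mae = np.mean([abs(predict(q, k1, k2) - t) for q, t in zip(x_va, y_va)])
        if best is None or mae < best[0]:
            best = (mae, k1, k2)
print("best MAE %.3f at kappa1=%s, kappa2=%s" % best)
```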
## 4 Numerical Results
In this section, we describe the datasets that were used and apply the various techniques in order to compare their performances. A GitHub repository (Gooljar, 2023) containing the code used in this assessment has been created so that readers can replicate and validate the results.
### Data Set Description
The data sets used were sourced from the University of California at Irvine (UCI) Machine Learning Repository. We used a wide variety of data sets to illustrate the robustness of our approach. We removed samples with any missing values and encoded the categorical variables. No further pre-processing was done so that the results can be easily replicated. Table 1 shows a summary of the data sets used.
### Feature Selection
There are various ways to perform feature selection (Banerjee, 2020), but the best subset of features can only be found by exhaustive search. Since this is computationally expensive, we select the optimal subset of features for the Random Forest model using Recursive Feature Elimination with Cross-Validation (RFECV) (Brownlee, 2021) and use these features for all other models. Note that for each model there may be a different optimal subset of features and, in particular, this subset may not be optimal for the proposed approach, so it is not provided with any advantage. Table 2 shows the selected attributes for each dataset. The optimal subset of features was the full set of features for the Auto, Energy Y2 and Iris datasets. The columns are indexed just as they appear in the datasets from UCI.
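The feature-selection step can be reproduced with scikit-learn roughly as follows; the estimator settings (random_state, number of folds, scoring metric) are assumptions and may differ from those used in the repository.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFECV

# Recursive Feature Elimination with Cross-Validation on the Random Forest model;
# the selected column indices are then reused for every other model.
selector = RFECV(estimator=RandomForestRegressor(random_state=0),
                 step=1, cv=10, scoring="neg_mean_absolute_error")
selector.fit(X, y)                                   # X, y: encoded features and target
selected_columns = selector.get_support(indices=True)
print(selected_columns)
```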
### Performance Results
We show the performance of the different algorithms: Random Forest, Decision Tree, \(k\)-Nearest Neighbors, XG Boost, and the proposed method. The models are evaluated on seven datasets (Auto, Student Performance, Energy Y2, Energy Y1, Iris, Concrete, and Wine Quality). We used Mean Absolute Error (MAE) to measure the performance since it is robust and easy
1: \(C\equiv\) set of categorical features
2: \(O\equiv\) set of ordinal features
3: \(X\equiv\) set of training samples
4: \(Y\equiv\) set of testing samples
5: \(\kappa_{1},\kappa_{2}>0\) tuning parameters
6: for each \(f\in O\) do
7: for each sample \(j\) in \(X\) do
8: \(x_{\text{train},j,f}\leftarrow\frac{x_{\text{train},j,f}}{\max(f)-\min(f)}\) (normalize feature values in the train set)
9: end for
10: for each sample \(i\) in \(Y\) do
11: \(x_{\text{test},i,f}\leftarrow\frac{x_{\text{test},i,f}}{\max(f)-\min(f)}\) (normalize feature values in the test set)
12: end for
13: end for
14: \(v_{f}\equiv\) set of categories for feature \(f\in C\)
15: for each \(f\in C\) do
16: for each \(v\in v_{f}\) do
17: \(z\equiv\{x\in X\,|\,x_{f}=v\}\)
18: \(\mu_{f,v}\leftarrow\frac{1}{|z|}\sum_{x\in z}y_{x}\) (mean target value over training samples where feature \(f\) has value \(v\))
19: end for
20: Replace category values with their mean \(\mu_{f,v}\) in both \(X\) and \(Y\)
21: end for
22: for each \(f\in O\) do
23: for each sample \(i\) with unique feature value \(v\) in \(X\) and \(Y\) do
24: \(\mu_{f,v}\leftarrow\frac{\sum_{j\in X}\frac{y_{j}}{(1+d_{f}[i,j])^{\kappa_{1}}}}{\sum_{j\in X}\frac{1}{(1+d_{f}[i,j])^{\kappa_{1}}}}\) (imputed mean target value over training samples when feature \(f\) has value \(v\))
25: end for
26: Replace feature values with the imputed mean values \(\mu_{f,v}\) in both \(X\) and \(Y\)
27: end for
28: for each test sample \(i\) in \(Y\) do
29: for each training sample \(j\) in \(X\) do
30: \(d[i,j]\leftarrow\left(\sum_{f\in F}(x_{\text{test},i,f}-x_{\text{train},j,f})^{2}\right)^{\frac{1}{2}}\) (calculate Euclidean distance)
31: end for
32: end for
33: for each test sample \(i\) in \(Y\) do
34: \(c[i]\leftarrow\frac{\sum_{j\in X}\frac{y_{j}}{(1+d[i,j])^{\kappa_{2}}}}{\sum_{j\in X}\frac{1}{(1+d[i,j])^{\kappa_{2}}}}\)
35: end for
36: Return the output values \(c\)
Figure 1: Pseudo code for the algorithm.
to interpret. It measures the average magnitude of errors between the predicted values and the actual values. Lower MAE values indicate better performance. We used 10 Fold cross-validation for each model to ensure that we achieve a more reliable estimation of the model's performance.
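The evaluation protocol can be sketched as below; `model_fit_predict` is a placeholder wrapper so that both the scikit-learn baselines and the proposed predictor can be scored in the same way, and the fold settings are assumptions.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import mean_absolute_error

def cv_mae(model_fit_predict, X, y, n_splits=10, seed=0):
    """10-fold cross-validated MAE; X and y are NumPy arrays."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    maes = []
    for train_idx, test_idx in kf.split(X):
        preds = model_fit_predict(X[train_idx], y[train_idx], X[test_idx])
        maes.append(mean_absolute_error(y[test_idx], preds))
    return float(np.mean(maes))
```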
The performance results (MAE) for each algorithm with each dataset are provided in Table 3. The average MAE for the proposed algorithm is 1.380, while the next best performer is XG Boost, with an average MAE of 1.653. Random Forest, Decision Tree, and \(k\)-NN have higher average MAEs of 1.659, 2.105 and 2.537, respectively. The proposed algorithm consistently performs better than the popular algorithms. We also provide a bar chart showing a comparison between the algorithms for the various datasets in Figure 2.
### Run-Time Analysis
The proposed model performs well against all other models used in this comparison, but it requires more computational time. We timed the various algorithms on each dataset using time.perf_counter(), a function in the 'time' module of Python's standard library that measures how long a block of code takes to run; it returns the value, in fractional seconds, of a high-resolution timer. On average, the proposed approach took approximately 1.56 times longer than XG Boost and about 274 times longer than the other algorithms. However, computational optimizations can reduce the run-time significantly. We plan to explore ways to more efficiently determine the optimal \(\kappa\) values.
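A minimal timing sketch using the standard library, reusing the `predict` sketch from above (the variable names are assumptions):

```python
import time

start = time.perf_counter()                       # high-resolution timer (fractional seconds)
preds = predict(X_train, y_train, X_test, kappa2=8.0)
elapsed = time.perf_counter() - start
print(f"Prediction took {elapsed:.3f} s")
```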
### Parameter Optimization
According to (Hosein, 2022), the error appears to be a convex function of \(\kappa\). As \(\kappa\) increases, the error decreases up to a certain point and then the error increases after this point. Therefore, an optimal value can be found that minimizes the error. Our approach uses two different \(\kappa\) values, one for imputing the values and one for predicting. This allows for more flexibility in the model since each value serves a different purpose and allows them to optimize both aspects independently to minimize the error. In some cases, the optimal \(\kappa\) value for the first part may not be best for the predicting part and vice versa. In Figure 3, we
\begin{table}
\begin{tabular}{|c|c|c|c|l|} \hline Dataset & No. of Samples & No. of Attributes & Target Value & Citation \\ \hline Student Performance & 394 & 32 & G3 & (Cortez, 2014) \\ \hline Auto & 392 & 6 & mpg & (Quinlan, 1993) \\ \hline Energy Y2 & 768 & 8 & Y2 & (Tsanas, 2012) \\ \hline Energy Y1 & 768 & 8 & Y1 & (Tsanas, 2012) \\ \hline Iris & 150 & 4 & Sepal Length & (Fisher, 1988) \\ \hline Concrete & 1030 & 8 & Concrete Compressive Strength & (Yeh, 2007) \\ \hline Wine Quality & 1599 & 11 & Residual Sugar & (Cortez et al., 2009) \\ \hline \end{tabular}
\end{table}
Table 1: Summary of Data sets.
\begin{table}
\begin{tabular}{|c|c|} \hline Dataset & Selected Attributes \\ \hline Student Performance & [2,23,28,29,31] \\ \hline Auto & [1,2,3,4,5,6] \\ \hline Energy Y2 & [0,1,2,3,4,5,6,7] \\ \hline Energy Y1 & [0,1,2,3,4,6] \\ \hline Iris & [1,2,3, 4] \\ \hline Concrete & [0,1,2,3,4,6] \\ \hline Wine Quality & [0,4,5,6,9] \\ \hline \end{tabular}
\end{table}
Table 2: Summary of Selected Features (categorical features are color coded in red)
can clearly see the pattern of the error. The MAE decreases to a minimum and then increases after \(\kappa_{2}\) = 8. In our approach, we used a single value for all the features when imputing. However, we note that the parameter can be further optimised by using different values for different features. This further optimisation will lead to an even more complex model but may yield better results so we intend to explore this in the future.
## 5 Discussion
We compared the Random Forest, Decision Tree, \(k\)-Nearest Neighbours (\(k\)-NN), XG Boost and the proposed algorithm on seven diverse datasets from UCI. The datasets were from various fields of study and consist of a combination of categorical and ordinal features. Our proposed approach uses two hyper-parameters to optimize predictions. The average mean absolute error of the proposed approach is 45.6 % lower than \(k\)-NN, 34.4% lower than Decision Tree, 16.8% lower than the Random Forest and 16.5% lower than XG Boost. The proposed approach achieves the lowest MAE for all datasets. These results illustrate the value and potential of the proposed approach.
## 6 Conclusions and Future Work
We present a robust approach that can be used for any regression problem. The approach is based on a weighted average of the target values of the training points where the weights are determined by the inverse of the Euclidean distance between the test point and the training points raised to the power of a parameter \(\kappa\). As shown in Figure 2 the proposed algorithm surpasses the traditional algorithms in each dataset. Its performance indicates that the proposed method is a promising approach for solving regression tasks and should be considered as a strong candidate for future applications. However, there is significant room for improvement of this algorithm. Future work can include using different \(\kappa\) values for each feature and exploring heuristic methods to determine these values which may result in even better performance. Also, since the algorithm's computations can be done in parallel (i.e., the grid search over the \(\kappa\) space), the run time can also be considerably decreased.
|
2306.16025 | The lower bound of weighted representation function | For any given set $A$ of nonnegative integers and for any given two positive
integers $k_1,k_2$, $R_{k_1,k_2}(A,n)$ is defined as the number of solutions of
the equation $n=k_1a_1+k_2a_2$ with $a_1,a_2\in A$. In this paper, we prove
that if integer $k\geq2$ and set $A\subseteq\mathbb{N}$ such that
$R_{1,k}(A,n)=R_{1,k}(\mathbb{N}\setminus A,n)$ holds for all integers $n\geq
n_0$, then $R_{1,k}(A,n)\gg \log n$. | Shi-Qiang Chen | 2023-06-28T08:52:54Z | http://arxiv.org/abs/2306.16025v1 | # The lower bound of weighted representation function
# The lower bound of weighted representation function
Shi-Qiang Chen1
School of Mathematics and Statistics,
Anhui Normal University, Wuhu 241002, P. R. China
Footnote 1: E-mail: [email protected] (S.-Q. Chen).
**Abstract.** For any given set \(A\) of nonnegative integers and for any given two positive integers \(k_{1},k_{2}\), \(R_{k_{1},k_{2}}(A,n)\) is defined as the number of solutions of the equation \(n=k_{1}a_{1}+k_{2}a_{2}\) with \(a_{1},a_{2}\in A\). In this paper, we prove that if integer \(k\geq 2\) and set \(A\subseteq\mathbb{N}\) such that \(R_{1,k}(A,n)=R_{1,k}(\mathbb{N}\setminus A,n)\) holds for all integers \(n\geq n_{0}\), then \(R_{1,k}(A,n)\gg\log n\).
**2020 Mathematics Subject Classification:** 11B13
**Keywords:** Partition; weighted representation function
## 1 Introduction
Let \(\mathbb{N}\) be the set of all nonnegative integers. For a given set \(A\subseteq\mathbb{N}\), \(n\in\mathbb{N}\), representation functions \(R_{1}(A,n)\), \(R_{2}(A,n)\) and \(R_{3}(A,n)\) are defined as
\[R_{1}(A,n)=\mid\{(a,a^{\prime}):n=a+a^{\prime},\ a,a^{\prime}\in A\}\mid,\]
\[R_{2}(A,n)=\mid\{(a,a^{\prime}):n=a+a^{\prime},\ a<a^{\prime},\ a,a^{\prime}\in A \}\mid,\]
\[R_{3}(A,n)=\mid\{(a,a^{\prime}):n=a+a^{\prime},\ a\leq a^{\prime},\ a,a^{\prime} \in A\}\mid,\]
respectively. Sarkozy once asked the following question: for \(i\in\{1,2,3\}\), are there two sets of nonnegative integers \(A\) and \(B\) such that
\[\mid(A\cup B)\setminus(A\cap B)\mid=+\infty,\]
\(R_{i}(A,n)=R_{i}(B,n)\) for all sufficiently large integers \(n\)? This problem of Sarkozy has been solved completely. Recently, many researchers have obtained many profound results around this problem of Sarkozy. For related research, please refer to [1]- [5], [7]- [10].
For any given two positive integers \(k_{1},k_{2}\) and set \(A\subseteq\mathbb{N}\), weighted representation function \(R_{k_{1},k_{2}}(A,n)\) is defined as the number of solutions of the equation \(n=k_{1}a_{1}+k_{2}a_{2}\) with \(a_{1},a_{2}\in A\).
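For experimentation on finite ranges, the definition can be written directly in code; this small Python sketch (function name and defaults are illustrative) counts the ordered pairs \((a_{1},a_{2})\).

```python
def weighted_representation(A, n, k1=1, k2=2):
    """R_{k1,k2}(A, n): number of ordered pairs (a1, a2) with a1, a2 in A
    and n = k1*a1 + k2*a2.  A may be any finite set of nonnegative integers
    containing at least all of its elements up to n."""
    A = set(A)
    count = 0
    for a1 in A:
        r = n - k1 * a1
        if r >= 0 and r % k2 == 0 and r // k2 in A:
            count += 1
    return count
```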
In 2012, Yang and Chen [11] studied the weighted representation function. They proved that if \(k_{1}\) and \(k_{2}\) are two integers with \(k_{2}>k_{1}\geq 2\) and \((k_{1},k_{2})=1\), then there does not exist a set \(A\subseteq\mathbb{N}\) such that \(R_{k_{1},k_{2}}(A,n)=R_{k_{1},k_{2}}(\mathbb{N}\setminus A,n)\) for all sufficiently large integers \(n\), while if \(k\) is an integer with \(k\geq 2\), then there exists a set \(A\subseteq\mathbb{N}\) such that \(R_{1,k}(A,n)=R_{1,k}(\mathbb{N}\setminus A,n)\) for all integers \(n\geq 1\). They also asked the following question.
**Problem 1**.: _Let \(k\) be an integer with \(k\geq 2\) and \(A\subseteq\mathbb{N}\) such that \(R_{1,k}(A,n)=R_{1,k}(\mathbb{N}\setminus A,n)\) for all integers \(n\geq n_{0}\). Is it true that \(R_{1,k}(A,n)\geq 1\) for all sufficiently large integers \(n\)? Is it true that \(R_{1,k}(A,n)\rightarrow\infty\) as \(n\rightarrow\infty\)?_
In 2016, Qu [6] solved this problem affirmatively and proved that the following result.
**Theorem A**.: _(See [6, Theorem 1].) Let \(k\) be an integer with \(k>1\) and \(A\subseteq\mathbb{N}\) such that \(R_{1,k}(A,n)=R_{1,k}(\mathbb{N}\setminus A,n)\) for all integers \(n\geq n_{0}\). Then \(R_{1,k}(A,n)\rightarrow\infty\) as \(n\rightarrow\infty\)._
In this paper, we continue to focus on Problem 1 and give the lower bound of weighted representation function.
**Theorem 1.1**.: _Let \(k\) be an integer with \(k\geq 2\) and \(A\subseteq\mathbb{N}\) such that \(R_{1,k}(A,n)=R_{1,k}(\mathbb{N}\setminus A,n)\) holds for all integers \(n\geq n_{0}\). Then \(R_{1,k}(A,n)\gg\log n\)._
Throughout this paper, the characteristic function of the set \(A\subseteq\mathbb{N}\) is denoted by
\[\chi(t)=\left\{\begin{aligned} & 0&& t\not\in A,\\ & 1&& t\in A.\end{aligned}\right.\]
Let \(C(x)\) be the set of nonnegative integers in \(C\) which are less than or equal to \(x\). For positive integer \(k\) and sets \(A,B\subseteq\mathbb{N}\), define \(kA=\{ka:a\in A\}\) and \(A+B=\{a+b:a\in A,\ b\in B\}\).
## 2 Lemmas
**Lemma 2.1**.: _(See [11, Lemma 2].) Let \(k\geq 2\) be an integer and \(A\subseteq\mathbb{N}\). Then \(R_{1,k}(A,n)=R_{1,k}(\mathbb{N}\setminus A,n)\) holds for all integers \(n\geq n_{0}\) if and only if the following two conditions hold:_
_(a) for all \(n_{0}\leq n<k+n_{0}\), we have_
\[\sum_{\begin{subarray}{c}a_{1}\geq 0,a_{2}\geq 0\\ a_{1}+ka_{2}=n\end{subarray}}1=\sum_{\begin{subarray}{c}a_{1}\geq 0,a_{2} \geq 0\\ a_{1}+ka_{2}=n\end{subarray}}\chi(a_{1})+\sum_{\begin{subarray}{c}a_{1}\geq 0,a_{ 2}\geq 0\\ a_{1}+ka_{2}=n\end{subarray}}\chi(a_{2}); \tag{2.1}\]
_(b) for all \(n\geq k+n_{0}\), we have_
\[\chi(n)+\chi\left(\Big{\lfloor}\frac{n}{k}\Big{\rfloor}\right)=1. \tag{2.2}\]
**Lemma 2.2**.: _Let \(k\geq 2\) be an integer and \(A\subseteq\mathbb{N}\). If \(R_{1,k}(A,n)=R_{1,k}(\mathbb{N}\setminus A,n)\) holds for all integers \(n\geq n_{0}\), then for any \(n\geq\lfloor\frac{n_{0}+k}{k}\rfloor+1\), we have_
\[\chi(n)+\chi(k^{i}n+j)=1,\quad j=0,\ldots,k^{i}-1,\quad\text{if $i$ is odd};\] \[\chi(n)=\chi(k^{i}n+j),\qquad j=0,\ldots,k^{i}-1,\qquad\text{if $i$ is even}. \tag{2.3}\]
Proof.: We now use induction on \(i\) to prove that (2.3) is true. By (2.2), we have
\[\chi(n)+\chi(kn+j)=1,\quad j=0,\ldots,k-1. \tag{2.4}\]
Therefore, (2.3) is true for \(i=1\).
Next, we assume that (2.3) is true for \(i=s\) and prove that it also holds for \(i=s+1\). If \(s+1\) is even, then by the induction hypothesis for \(i=s\), we have
\[\chi(n)+\chi(k^{s}n+j)=1,\quad j=0,\ldots,k^{s}-1. \tag{2.5}\]
By (2.2), we have
\[\chi(k^{s}n+j)+\chi(k(k^{s}n+j)+u)=1,\quad j=0,\ldots,k^{s}-1;u=0,\ldots,k-1.\]
It follows from (2.5) that
\[\chi(n)=\chi(k(k^{s}n+j)+u),\quad j=0,\ldots,k^{s}-1;u=0,\ldots,k-1,\]
that is
\[\chi(n)=\chi(k^{s+1}n+j),\quad j=0,\ldots,k^{s+1}-1. \tag{2.6}\]
If \(s+1\) is odd, then by the induction hypothesis on \(i=s\), we have
\[\chi(n)=\chi(k^{s}n+j),\quad j=0,\ldots,k^{s}-1. \tag{2.7}\]
By (2.2), we have
\[\chi(k^{s}n+j)+\chi(k(k^{s}n+j)+u)=1,\quad j=0,\ldots,k^{s}-1;u=0,\ldots,k-1,\]
It follows from (2.7) that
\[\chi(n)+\chi(k(k^{s}n+j)+u)=1,\quad j=0,\ldots,k^{s}-1;u=0,\ldots,k-1,\]
that is
\[\chi(n)+\chi(k^{s+1}n+j)=1,\quad j=0,\ldots,k^{s+1}-1. \tag{2.8}\]
Up to now, (2.3) has been proved.
This completes the proof of Lemma 2.2.
## 3 Proof of Theorem 1.1
Let \(T=\lfloor\frac{n_{0}+k}{k}\rfloor+1\). Given an odd \(j\in[0,\lfloor\frac{\lfloor\log_{k}\frac{n}{T}\rfloor}{2}\rfloor]\), for any sufficiently large integer \(n\), there exists an integer \(i\) such that
\[k^{i}(k^{j}+1)T\leq n<k^{i+1}(k^{j}+1)T. \tag{3.1}\]
We now prove that \(i+j=\lfloor\log_{k}\frac{n}{T}\rfloor\) or \(\lfloor\log_{k}\frac{n}{T}\rfloor-1\). Indeed, if \(i+j\geq\lfloor\log_{k}\frac{n}{T}\rfloor+1\), then
\[\frac{n}{T}=k^{\log_{k}\frac{n}{T}}<k^{\lfloor\log_{k}\frac{n}{T}\rfloor+1} \leq k^{i+j}<k^{i+j}+k^{i}\leq\frac{n}{T},\]
a contradiction. If \(i+j\leq\lfloor\log_{k}\frac{n}{T}\rfloor-2\), then
\[\frac{n}{T}<k^{i+j+1}+k^{i+1}\leq 2k^{i+j+1}\leq 2k^{\lfloor\log_{k}\frac{n}{T} \rfloor-1}\leq 2k^{\log_{k}\frac{n}{T}-1}\leq k^{\log_{k}\frac{n}{T}}=\frac{n}{T},\]
a contradiction. By (3.1), there exist \(T\leq t\leq kT-1\) and \(0\leq r<k^{i}(k^{j}+1)\) such that
\[n=k^{i}(k^{j}+1)t+r.\]
According to the value of \(r\), we distinguish the following two cases:
**Case 1.** \(0\leq r\leq k^{i+j}+k^{i}-k-1\). Noting that \(j\) is odd, by (2.3), we have
\[[k^{i+j-1}t,k^{i+j-1}t+k^{i+j-1}-1]\cup[k^{i}t,k^{i}t+k^{i}-1]\subseteq A\text { or }\mathbb{N}\setminus A.\]
Then
\[[k^{i}(k^{j}+1)t,k^{i}(k^{j}+1)t+(k^{i+j}+k^{i}-k-1)]\subseteq A+kA\ \text{or}\ ( \mathbb{N}\setminus A)+k(\mathbb{N}\setminus A),\]
it follows that \(n\in A+kA\) or \((\mathbb{N}\setminus A)+k(\mathbb{N}\setminus A)\), which implies that
\[R_{1,k}(A,n)=R_{1,k}(\mathbb{N}\setminus A,n)\geq 1.\]
Up to now, we proved that for a given odd \(j\in[0,\lfloor\frac{\lfloor\log_{k}\frac{n}{T}\rfloor}{2}\rfloor]\), we have
\[R_{1,k}(A,n)=R_{1,k}(\mathbb{N}\setminus A,n)\geq 1. \tag{3.2}\]
It is clear that for any two different odd numbers \(j_{1},j_{2}\in[0,\lfloor\frac{\lfloor\log_{k}\frac{n}{T}\rfloor}{2}\rfloor]\) and integers \(i_{1},i_{2}\) such that
\[i_{1}+j_{1}=K_{1},\ \ \ \ i_{2}+j_{2}=K_{2},\]
where \(K_{1},K_{2}\in\{\lfloor\log_{k}\frac{n}{T}\rfloor,\lfloor\log_{k}\frac{n}{T} \rfloor-1\}\), we have
\[i_{1}\neq i_{2}. \tag{3.3}\]
Indeed, assume that \(j_{1}<j_{2}\); since
\[1=-1+2\leq K_{1}-K_{2}-j_{1}+j_{2}=i_{1}-i_{2},\]
it follows that \(i_{1}\neq i_{2}\). By (3.3), we have
\[[k^{i_{1}}t,k^{i_{1}}t+k^{i_{1}}-1]\cap[k^{i_{2}}t,k^{i_{2}}t+k^{i_{2}}-1]=\emptyset. \tag{3.4}\]
Therefore, by (3.2) and (3.4), we have
\[R_{1,k}(A,n)=R_{1,k}(\mathbb{N}\setminus A,n)\geq\lfloor\frac{\lfloor\log_{k }\frac{n}{T}\rfloor}{4}\rfloor\gg\log n.\]
**Case 2.**\((k^{i+j}+k^{i}-k-1)+1\leq r\leq k^{i}(k^{j}+1)-1\). Since
\[|A\cap\{T,kT\}|=1,\]
it follows that
\[|A(kT)|\geq 1,\ |(\mathbb{N}\setminus A)(kT)|\geq 1. \tag{3.5}\]
Let \(r=k^{i+j}+k^{i}-k-1+s,\ s\in[1,k]\). Then
\[n=k^{i}((k+1)t)+k^{i+j}+k^{i}-k-1+s=k^{i}((k+1)t+k^{j})+k^{i}-k-1+s. \tag{3.6}\]
By (2.3), we have
\[[k^{i}((k+1)t+k^{j}),k^{i}((k+1)t+k^{j})+k^{i}-1]\subseteq A\text{ or }\mathbb{N} \setminus A.\]
By (3.5), we can choose \(a\in[0,kT]\) such that
\[\{a\}\cup[k^{i}((k+1)t+k^{j}),k^{i}((k+1)t+k^{j})+k^{i}-1]\subseteq A\text{ or }\mathbb{N}\setminus A. \tag{3.7}\]
Since \(j\in[0,\lfloor\frac{\lfloor\log_{k}\frac{n}{T}\rfloor}{2}\rfloor]\), it follows from
\[i+j=\lfloor\log_{k}\frac{n}{T}\rfloor\text{ \ or \ }\lfloor\log_{k}\frac{n}{T}\rfloor-1\]
that
\[k^{i}-k-1\geq k^{\lfloor\frac{\lfloor\log_{k}\frac{n}{T}\rfloor}{2}\rfloor-1 }-k-1\geq k^{2}T\geq ka\]
for any sufficiently large \(n\). It follows from (3.6) and (3.7) that
\[k^{i}((k+1)t+k^{j})+s\leq n-ka\leq k^{i}((k+1)t+k^{j})+k^{i}-k-1+s,\]
which implies that \(n\in A+kA\) or \((\mathbb{N}\setminus A)+k(\mathbb{N}\setminus A)\), and so
\[R_{1,k}(A,n)=R_{1,k}(\mathbb{N}\setminus A,n)\geq 1.\]
Up to now, we proved that for any given odd \(j\in[0,\lfloor\frac{\lfloor\log_{k}\frac{n}{T}\rfloor}{2}\rfloor]\), we have
\[R_{1,k}(A,n)=R_{1,k}(\mathbb{N}\setminus A,n)\geq 1. \tag{3.8}\]
By (3.3), we have
\[[k^{i_{1}}((k+1)t+k),k^{i_{1}}((k+1)t+k)+k^{i_{1}}-1]\cap[k^{i_{2}}((k+1)t+k), k^{i_{2}}((k+1)t+k)+k^{i_{2}}-1]=\emptyset. \tag{3.9}\]
Therefore, by (3.8) and (3.9), we have
\[R_{1,k}(A,n)=R_{1,k}(\mathbb{N}\setminus A,n)\geq\lfloor\frac{\lfloor\log_{k} \frac{n}{T}\rfloor}{4}\rfloor\gg\log n.\]
This completes the proof of Theorem 1.1.
|
2303.13783 | Lorentzian wormholes in an emergent universe | A non-singular Emergent Universe (EU) scenario within the realm of standard
Relativistic physics requires a generalization of the Equation of State (EoS)
connecting the pressure and energy density. This generalized EoS is capable of
describing a composition of exotic matter, dark energy and cosmological dust
matter. Since the EU scenario is known to violate the Null Energy Condition, we
investigate the possibility of presence of static, spherically symmetric and
traversable Lorentzian wormholes in an EU. The obtained shape function is found
to satisfy the criteria for wormhole formation, besides the violation of the
NEC at the wormhole throat and ensuring traversability such that tidal forces
are within desirable limits. Also, the wormhole is found to be stable through
linear stability analysis. Most ${importantly}$, the numerical value of the
emergent universe parameter $B$ as estimated by our wormhole model is in
agreement with and lies within the range of values as constrained by
observational data in a cosmological context. Also, the negative sign of the
second EU parameter $A$ as obtained from our wormhole model is in agreement
with the one required for describing an EU, which further indicates on the
existence of such wormholes in an emergent universe ${without}$ accounting for
any additional exotic matter field or any modification to the gravitational
sector. | Rikpratik Sengupta, Shounak Ghosh, B C Paul, Mehedi Kalam | 2023-03-24T03:49:54Z | http://arxiv.org/abs/2303.13783v1 | # Lorentzian wormholes in an emergent universe.
###### Abstract
A non-singular Emergent Universe (EU) scenario within the realm of standard Relativistic physics requires a generalization of the Equation of State (EoS) connecting the pressure and energy density. This generalized EoS is capable of describing a composition of exotic matter, dark energy and cosmological dust matter. Since the EU scenario is known to violate the Null Energy Condition, we investigate the possibility of presence of static, spherically symmetric and traversable Lorentzian wormholes in an EU. The obtained shape function is found to satisfy the criteria for wormhole formation, besides the violation of the NEC at the wormhole throat and ensuring traversability such that tidal forces are within desirable limits. Also, the wormhole is found to be stable through linear stability analysis. Most \(importantly\), the numerical value of the emergent universe parameter \(B\) as estimated by our wormhole model is in agreement with and lies within the range of values as constrained by observational data in a cosmological context. Also, the negative sign of the second EU parameter \(A\) as obtained from our wormhole model is in agreement with the one required for describing an EU, which further indicates on the existence of such wormholes in an emergent universe \(without\) accounting for any additional exotic matter field or any modification to the gravitational sector.
keywords: Wormhole, Emergent Universe +
Footnote †: journal:
## 1 Introduction
It is well known today that the standard big bang cosmology is plagued by the singularity problem. The beginning of time, or the early stage of the universe cannot be described in the standard relativistic context as singularity appears in the Einstein field equations (EFE's). In fact, for describing the universe, as long as its radius does not exceed the Planck scale, a theory of Quantum Gravity (QG) is required. However, till date there is no single consistent QG theory that is fully developed, although plenty of work is going on in this direction and Loop Quantum Gravity (LQG) [1; 2] along with the higher dimensional Superstring and M-theories [3; 4] can be said to be the two main contenders in this aspect. The later also attempts to unify the four natural interactions. The problem with such QG theories is that they can be tested only in very extreme physical conditions, situations involving very high spacetime curvature, believed to be found inside the event horizon of black holes, or in the very early universe-at the moment of, or just following its creation.
Two decades back, Ellis and Maartens [5] proposed a cosmological model known as the "Emergent Universe", which attempted to resolve the long standing initial singularity problem within the classical context of Einstein's General Relativity (GR). As a positive curvature universe is not ruled out by observation, they argued that the role of a positive curvature term may be significant in the early universe, though it
can be neglected at the late times. The consideration of a such a term results in a non-singular cosmology, with the universe originating as an Einstein's static universe (ESU) and is also free of the horizon problem. This static universe undergoes an inflationary phase and later reheating, to give the standard big bang era. Moreover, if the initial radius of the static universe is above the Planck scale, then a purely QG regime can be avoided altogether in this cosmological model.
This emergent universe scenario has attracted much attention from cosmologists in the last two decades [6, 7, 8, 9, 10, 11, 12]. In a later work, Ellis [13] extended the first model by considering the early universe to be dominated by a minimally coupled scalar field \(\phi\), described by a physically interesting potential \(V(\phi)=\left(Ae^{B\phi}-C\right)^{2}+D\), such that the constants \(A\), \(B\), \(C\) and \(D\) may be determined from specific properties of the emergent universe. In another work [14], it was shown that such a potential for the emergent universe can be reproduced by modifying the Lagrangian, adding a quadratic term of the scalar curvature, such that \(L=R+\alpha R^{2}\), where the coupling parameter \(\alpha\) turns out to be negative and the field can be identified with a negative logarithmic function of the curvature. A very important extension to the emergent universe (EU) scenario was done by Mukherjee et al. [15], where they obtained an EU even for a flat spacetime, in the context of the semi-classical Starobinsky [16] model. A one parameter family of solutions describing the EU were obtained, such that parameter was determined by the number and species of the primordial fields.
In a following work, Mukherjee et al. [17] obtained EU solution within the relativistic context, for a flat spacetime. They considered the composition of matter, such that the generalized Equation of State (EoS) is given as \(p=A\rho-B\rho^{\frac{1}{2}}\). Such an EoS serves a three fold purpose. It accommodates the scope for description of the matter or the source of gravity sector by quantum field theory, besides the scope for including exotic matter capable of violating the energy conditions of GR in a cosmological context and also accommodating the present late time acceleration of the universe as inferred from supernova observations. For such an EoS, the universe is large enough initially to avoid a QG regime. For obtaining an emergent universe, we must have \(B>0\). It may also be noted that an emergent universe is permitted with \(A\leq 0\)[17, 18]. For realistic values of the parameter \(B\) constrained from observations [19], the universe is found to contain dark energy, exotic matter and cosmological dust (matter).
The idea of wormholes on a serious mathematical context date back to 1935, with the Einstein-Rosen bridge [20] being proposed as a solution to the field equations of GR, which connected two different spacetime points located within the same universe, or even possibly in two different universes. Two decades later Misner and Wheeler [21] revisited the idea and coined the term "wormhole". However, such a wormhole was not traversable and found their existence from quantum fluctuations taking place in the spacetime foam. They are basically microscopic. The idea of Schwarzschild and Kerr wormholes cannot be considered seriously for traversability, as such wormholes contain event horizons. It is known that beyond an event horizon, the tidal forces are extremely large and also there is the presence of curvature singularity. The first formal, modern take on wormholes and their traversability was provided by Morris and Thorne in 1988 [22]. In this view, a wormhole is considered as a topological as well as geometrical object characterized by a compact spacetime region which has a trivial boundary with a non-trivial interior and connects two spacetime points in the same universe or in two different universes. It has both local and global structure [23]. Such wormholes are well described by a static and spherically symmetric spacetime metric. Any closed time-like curve is known to violate causality and such a possibility may arise from traversable wormholes [24].
Morris and Thorne [22] laid down a prescription for constructing a traversable wormhole. There are certain essential features that the radial metric component of a wormhole must follow in order to ensure traversability. We shall discuss these criteria in the context of our wormhole model in the concluding section. From a physical point of view, the tidal acceleration must be small enough so that any traveller attempting to traverse the wormhole does not get ripped apart. This can be ensured from the absence of horizons in the wormhole. Also, they suggested that the wormhole throat must be kept open so that it does not 'pinch off', in order to ensure traversability through it. This is possible by the violation of the Null
Energy Condition (NEC) at the throat. There are two ways in which the NEC may be violated. Firstly, by introduction of exotic matter at the throat within the relativistic context and secondly, by modifying the gravitational action, such that the additional terms in the modified field equations contribute effectively to the violation of NEC at the throat in the presence of ordinary matter. The possibility of a wormhole existing in the outer galactic halo region has been investigated recently [25]. Wormholes have been studied in the modified gravity context [26; 27; 28; 29; 30; 31; 32] and also in the relativistic context with exotic matter [33; 34; 35; 36; 37; 38; 39; 40]. However, recently a successful traversable wormhole model has been proposed in the relativistic context [41] where the wormhole is supported by a Maxwell and two Dirac fields in the absence of any exotic coupling and exotic matter and the wormhole is asymmetric about the throat, such that there is no \(Z_{2}\) or mirror symmetry.
As we have already discussed, in order to obtain an emergent universe solution with a flat spacetime, we have to either modify the Lagrangian describing the gravitational action or consider a generalized EoS for the matter source, within the relativistic context. Such an EoS is well known to accommodate exotic matter and also accounts for the late-time acceleration of the universe. So, it could be natural that an emergent universe described by such an EoS accommodates traversable wormholes as well, due to such a composition of the universe. Lorentzian wormholes are the ones which exist on a Lorentzian spacetime manifold. Since experimental physics seems to favour a Lorentzian signature, here we will investigate the possibility of existence of static, traversable Lorentzian wormholes in the background of an emergent universe. In our previous papers, we had successfully constructed static and traversable Lorentzian wormholes involving higher dimensional braneworld gravity [26] or by introduction an additional tachyon field as the matter source [33]. Time evolving Euclidean wormholes have been found to be present in an EU in the massive gravity context [42]. So, it would be worth an attempt to investigate the possibility of static and traversable Lorentzian wormholes being present in an EU without considering any additional matter field or modifying the Einstein-Hilbert action leading to standard relativistic GR.
In the following sections, we obtain the solutions for the different physical parameters of the wormhole under the framework of the EU, investigate the different features of the wormhole that may be analyzed from the solution, and consider its stability. A detailed analysis should throw light on whether a stable, traversable wormhole can exist automatically in an emergent universe setup.
## 2 Mathematical model of the wormhole
In this section we shall make use of the Einstein field equations (EFE's) to obtain the shape function for the wormhole and check whether it satisfies the criteria of being a well-defined function to describe the wormhole. We shall also check the validity of the null energy condition. The traversability condition can be verified by computing the tidal acceleration. The essential model parameters can by estimated from the Darmois-Israel junction conditions, from which we shall also compute the surface density and surface pressure of the wormhole. We shall also perform a linearized stability check in order to ensure that the wormhole is stable.
### The field equations
For a static, spherically symmetric matter distribution, the spacetime is characterized by the line element
\[ds^{2}=-e^{\nu(r)}dt^{2}+e^{\lambda(r)}dr^{2}+r^{2}(d\theta^{2}+sin^{2}\theta d \phi^{2}). \tag{1}\]
For such a wormhole, the metric potential is \(e^{\lambda(r)}=\frac{1}{1-\frac{b(r)}{r}}\), where \(b(r)\) represents the shape function of the wormhole. The Einstein Field Equations (EFEs) for such a wormhole in the relativistic context may be written down as (taking \(G=c=1\))
\[\frac{b^{\prime}}{r^{2}}=8\pi\rho, \tag{2}\] \[\left(1-\frac{b}{r}\right)\left(\frac{\nu^{\prime}}{r}+\frac{1}{r^ {2}}\right)-\frac{1}{r^{2}}=8\pi p,\] (3) \[\left(1-\frac{b}{r}\right)\left(\nu^{\prime\prime}+{\nu^{\prime}} ^{2}+\frac{\nu^{\prime}}{r}\right)-\frac{b^{\prime}-b}{2r}\left(\nu^{\prime}+ \frac{1}{r}\right)=8\pi p, \tag{4}\]
As we are performing our analysis in the background of an Emergent Universe, we shall consider the static Lorentzian wormhole to be composed of dark energy, exotic matter and dust, represented by the generalized EoS [17].
\[p\left(r\right)=A\rho\left(r\right)-B\sqrt{\rho\left(r\right)}. \tag{5}\]
Here the parameters \(A\) and \(B\) characterize the Emergent Universe. For simplicity of our analysis we take the redshift function \(\frac{\nu}{2}\) to be a constant which implies \(\nu^{\prime}=0\). This simplification can be found in case of other wormhole constructions with exotic matter in the relativistic context [33; 43].
### Solution for the shape function
Using the EoS in the matter conservation equation with a constant redshift function, the energy density of the wormhole is obtained as
\[\rho\left(r\right)=\frac{B^{2}}{4A^{2}}. \tag{6}\]
We note that the energy density is expressed in terms of the EU parameters \(A\) and \(B\) as expected.
The shape function can be obtained by plugging in the energy density into Eq. (2) and turns out to be
\[b(r)=\frac{2\pi B^{2}r^{3}}{3A^{2}}+C_{1}. \tag{7}\]
We plot the shape function against \(r\) in Figure 1. We have assumed that the throat radius of the wormhole is at \(r=r_{0}=0.1\). As we can see from the Figure 1, the shape function satisfies the necessary criteria to describe a traversable wormhole as per the Morris Thorne prescription [22]. Firstly at the throat radius \(r=r_{0}\), as we can see the shape function \(b(r_{0})=r_{0}\). Secondly, as \(r\) becomes greater than \(r_{0}\), at
Figure 1: Variation of shape function with respect to \(r\).
every point the shape function \(b(r)<r\) for that point. So, the obtained shape function is satisfactory for describing a wormhole.
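As a rough numerical cross-check (not part of the original analysis), the shape function of Eq. (7) can be evaluated with the parameter values estimated later in Section 2.4; the throat and boundary radii are the ones assumed in the text.

```python
import numpy as np

# EU parameters and integration constant estimated from the junction conditions
# (Sec. 2.4); lengths in km, G = c = 1.
A, B, C1 = -0.054, 0.003784713921, 0.09998799979

def b(r):
    """Shape function of Eq. (7)."""
    return 2 * np.pi * B**2 * r**3 / (3 * A**2) + C1

r0, R = 0.1, 5.0                       # assumed throat radius and wormhole boundary
print(b(r0))                           # ~0.1, i.e. b(r0) = r0 at the throat
r = np.linspace(r0, R, 200)[1:]        # radii strictly outside the throat
print(bool(np.all(b(r) < r)))          # True: b(r) < r for all r > r0 up to the boundary
```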
### Null Energy condition
The null energy condition (NEC) in GR can be written in a simplified form as \(\rho+p\geq 0\). According to the Morris-Thorne prescription for the construction of a traversable wormhole [22], one of the most essential conditions for ensuring traversability is the violation of the NEC at the throat, so that the throat does not pinch off due to the gravitational attraction. In order to ensure this within the relativistic context, the matter constructing the wormhole must violate the NEC. So, for our model we must check whether the composite matter described by the generalized EoS in Eq. (5) is capable of violating the NEC.
By solving the EFE in Eq. (4), we obtain the pressure to be given as
\[p(r)=-\frac{2\,B^{2}\pi r^{3}+3C_{1}A^{2}}{24\pi A^{2}r^{3}}. \tag{8}\]
As expected, the pressure has a dependence on the EU parameters \(A\) and \(B\).
Summing up the above obtained pressure and energy density, we have
\[p+\rho=\frac{4B^{2}\pi r^{3}-3A^{2}C_{1}}{24\pi A^{2}r^{3}}. \tag{9}\]
We plot \(p+\rho\) as a function of \(r\) in Figure 2. The NEC is violated at the throat, where \(p+\rho\) is negative, and as \(r\) increases it remains negative with a vanishingly small magnitude. Although the figure may suggest that \(p+\rho\) vanishes beyond a value of \(r\) close to 1.5, on enlarging the plot it can be seen to stay negative with a vanishingly small value. This ensures that the flare-out condition is satisfied, i.e., the first derivative of the shape function with respect to \(r\) at the throat is less than unity. Hence, traversability is ensured.
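A quick numerical check of the NEC at the throat only (the essential requirement), again using the parameter values quoted in Section 2.4 as assumptions:

```python
import numpy as np

A, B, C1 = -0.054, 0.003784713921, 0.09998799979   # values estimated in Sec. 2.4
r0 = 0.1                                            # assumed throat radius (km)

# p + rho from Eq. (9); a negative value at the throat signals NEC violation.
nec_at_throat = (4 * np.pi * B**2 * r0**3 - 3 * A**2 * C1) / (24 * np.pi * A**2 * r0**3)
print(nec_at_throat < 0)   # True: the null energy condition is violated at r = r0
```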
### Darmois Israel Junction Condition
**Outside the wormhole we consider a vacuum spacetime. So, the exterior to which the wormhole spacetime is to be matched can either be described by a Schwarzschild or deSitter metric depending on whether the vacuum has vanishing energy density and pressure (\(\rho_{vac}=p_{vac}=0\)) or constant energy density and pressure given by \(p_{vac}=-\rho_{vac}=-\frac{\Lambda}{8\pi}\), respectively.**
Figure 2: Variation of the NEC with respect to \(r\).
However, for our analysis there is no need for considering a \(\Lambda\)-term (cosmological constant) as the composite matter described by the EoS can describe the late time acceleration effectively even in the case of \(\Lambda=0\). So, we consider a Schwarzschild exterior. Thus, the spacetime exterior to the surface of the wormhole is described by the well known Schwarzschild metric given as
\[ds^{2}=-\left(1-\frac{2M}{r}\right)dt^{2}+\left(1-\frac{2M}{r}\right)^{-1}dr^{2 }+r^{2}(d\theta^{2}+\sin^{2}\theta d\phi^{2}). \tag{10}\]
As a consequence of matter being present on the wormhole surface, there is an extrinsic discontinuity which produces an intrinsic surface energy density and surface pressure. The wormhole surface acts as the boundary between the interior and the exterior, as a result of which the wormhole structure is a geodesically complete manifold characterized by the EoS used in our model, which describes a composition of dark energy, exotic matter and cosmological dust. Being a geodesically complete manifold, the continuity of the metric coefficient at the wormhole surface yields a matching condition. We shall now obtain the components for the intrinsic stress-energy \(S_{ij}\) using the Darmois-Israel prescription [44, 45].
From the Lanczos equation [46, 47, 48, 49] we have
\[S_{j}^{i}=-\frac{1}{8\pi}(\kappa_{j}^{i}-\delta_{j}^{i}\kappa_{k}^{k}). \tag{11}\]
The discontinuity in extrinsic curvature as a result of presence of matter on the wormhole surface is expressed as
\[\kappa_{ij}=\kappa_{ij}^{+}-\kappa_{ij}^{-}, \tag{12}\]
and the extrinsic curvature is defined as
\[\kappa_{ij}^{\pm}=-n_{\nu}^{\pm}\left[\frac{\partial^{2}X_{\nu}}{\partial \xi^{i}\partial\xi^{j}}+\Gamma_{\alpha\beta}^{\nu}\frac{\partial X^{\alpha}}{ \partial\xi^{i}}\frac{\partial X^{\beta}}{\partial\xi^{j}}\right]|_{S}, \tag{13}\]
where \(n_{\nu}^{\pm}\) is the unit normal vector which may be expressed in the form
\[n_{\nu}^{\pm}=\pm\left|g^{\alpha\beta}\frac{\partial f}{\partial X^{\alpha}} \frac{\partial f}{\partial X^{\beta}}\right|^{-\frac{1}{2}}\frac{\partial f}{ \partial X^{\nu}}, \tag{14}\]
such that \(n^{\nu}n_{\nu}=1\). \(\xi^{i}\) denotes the intrinsic coordinates of the wormhole surface, which is described by the parametric equation \(f(x^{\alpha}(\xi^{i}))=0\). Here \(+\) and \(-\) denote the spacetime exterior and interior to the wormhole surface, respectively.
The surface stress energy tensor for the spherically symmetric line element has the components expressed as \(S_{i}^{j}=diag(-\Sigma,\mathcal{P})\). Here \(\Sigma\) and \(\mathcal{P}\) denotes the surface energy density and surface pressure respectively, at the wormhole surface \(r=R\) which may be obtained as
\[\Sigma=-\frac{1}{4\pi R}\bigg{[}\sqrt{e^{\lambda}}\bigg{]}_{-}^{+} \tag{15}\] \[=\frac{1}{4\pi R}\left[\sqrt{1-\frac{2M}{R}}-\sqrt{1-\frac{b(R)}{R}}\right]=\frac{1}{4\pi R}\left[\sqrt{1-\frac{2M}{R}}-\sqrt{1-\frac{2\pi B^{2}R^{2}}{3A^{2}}-\frac{C_{1}}{R}}\right], \tag{16}\] \[\mathcal{P}=\frac{1}{16\pi R}\bigg{[}\bigg{(}\frac{2f+f^{\prime}R}{\sqrt{f}}\bigg{)}\bigg{]}_{-}^{+} \tag{17}\] \[=\frac{1}{4\pi R}\left(\sqrt{1-\frac{2M}{R}}-\sqrt{1-\frac{C_{1}}{R}-\frac{2B^{2}\pi R^{2}}{3A^{2}}}\right)+\frac{M}{4R^{3}\pi\sqrt{1-\frac{2M}{R}}}+\frac{\left(\frac{B^{2}}{6A^{2}}-\frac{C_{1}}{8R^{3}\pi}\right)}{\sqrt{1-\frac{C_{1}}{R}-\frac{2B^{2}\pi R^{2}}{3A^{2}}}}\]
As we are considering the construction of a static wormhole, which does not evolve with time and is a local object in the EU, the surface energy density and surface pressure must vanish at the wormhole surface (\(\Sigma={\cal P}=0\)) [26; 40]. \(\Sigma=0\) **leads to the condition**
\[b(r)|_{r=R}=2M, \tag{18}\]
which yields
\[-\left(-\frac{2\pi B^{2}R^{2}}{3A^{2}}-\frac{C_{1}}{R}\right)R=2M. \tag{19}\]
In addition, there is a further boundary condition which involves the continuity of the metric potential \(g_{rr}\) and its derivative \(\frac{\partial g_{rr}}{\partial r}\) at the wormhole surface \(r=R\). **It is to be noted that the same boundary condition obtained in Eq. (18) and Eq. (19) can also be arrived at by making use of the matching condition involving the continuity of the metric potential \(g_{rr}\), which ensures smooth matching between the wormhole and exterior Schwarzschild spacetimes. Another matching condition, involving the continuity of the radial derivative of the metric potential at the wormhole surface \(r=R\), again required for smooth matching of the two spacetimes, yields the second boundary condition, which can be written as**
\[\frac{\partial g_{rr}}{\partial r}|_{int}=\frac{\partial g_{rr}}{ \partial r}|_{ext}\] \[\Rightarrow -\frac{4\pi B^{2}R}{3A^{2}}+\frac{C_{1}}{R^{2}}=\frac{2M}{R^{2}} \tag{20}\]
**As there are three unknown model parameters (the EU parameters \(A\) and \(B\) and the integration constant \(C_{1}\)), we shall require an additional boundary condition. The third and final boundary condition comes from the vanishing of the surface pressure at the wormhole boundary, which is a consequence of the wormhole being static as already discussed. So, we can put Eq. (17) as zero which yields the third boundary condition. Now for the three unknown model parameters \(A\), \(B\) and \(C_{1}\) we have three equations in the form of the three boundary conditions and we have evaluated the parameters by solving these equations using MAPLE.**
These boundary conditions allow us to evaluate the unknown model parameters: the EU parameters \(A\) and \(B\), together with the integration constant \(C_{1}\). We had previously assumed the throat radius to be \(0.1\) (in \(km\)). As the wormhole contains exotic matter as one of its ingredients, it is appropriate to minimize the mass of the wormhole in order to minimize the amount of exotic matter used. However, it may be noted here that we are not introducing any exotic matter by hand; the EoS which supports an EU within the relativistic context is itself generalized to describe exotic matter as a constituent. We assume the mass of the wormhole to be \(0.8M_{\odot}\), which appears to be a realistic choice. The surface or boundary of the wormhole is taken at \(R=5\) (in \(km\)).
Matching the junction conditions at the wormhole surface and using the values of the physical model parameters mentioned above, we evaluate the unknown model parameters to be \(A=-0.054\), \(B=0.003784713921\) and \(C_{1}=0.09998799979\). These obtained values have been used for the plots, and we shall justify their physical significance in the concluding section.
### Tidal acceleration
For a traversable wormhole model, we must ensure that the tangential and radial components of the tidal acceleration for an observer passing through the throat of the wormhole must be lesser than the acceleration due to gravity on earth, so that the observer does not get ripped apart while traversing the throat due to large tidal gravitational forces.
The condition that has to be satisfied by the radial component of the tidal acceleration \(|R_{rtrt}|\) in order to ensure traversability at the throat is
\[|R_{rtrt}|=\left|\left(1-\frac{b}{r}\right)\left[\frac{\nu^{\prime\prime}}{2}+\frac{{\nu^{\prime}}^{2}}{4}-\frac{b^{\prime}r-b}{2r(r-b)}\,\frac{\nu^{\prime}}{2}\right]\right|\leq g_{earth}. \tag{21}\]
For our model, it has been considered that \(\nu^{\prime}(r)=0\). A constant redshift function is a convenient choice in case of many wormhole models with exotic matter in the context of standard GR [33; 43]. As a result, the radial component of the tidal acceleration vanishes and the condition of traversability is automatically satisfied for this component.
However, for the tangential component, the condition has to be evaluated at the throat as the tangential component is non-trivial and a constraint on the velocity of the traveller traversing the throat can be obtained on evaluating this condition. The condition for traversability is expressed by the inequality
\[\gamma^{2}|R_{\theta t\theta t}|+\gamma^{2}v^{2}|R_{\theta r\theta r}|=|\frac {\gamma^{2}}{2r^{2}}[v^{2}(b^{\prime}-\frac{b}{r})+(r-b)\nu^{\prime}]|\leq g _{earth}, \tag{22}\]
where the left hand side represents the tangential tidal acceleration.
Plugging in the expression for \(b(r)\) and using \(\nu^{\prime}(r)=0\) we get
\[\frac{\gamma^{2}}{2}\left|\frac{v^{2}}{r}\left(-\frac{4}{3}\frac{\pi B^{2}r}{ A^{2}}+\frac{C_{1}}{r^{2}}\right)\right|\leq g_{earth}. \tag{23}\]
Putting the values of the model parameters \(B=0.003784713920\), \(A=-0.054\), \(M=0.8M_{\odot}\), \(C_{1}=0.0999879990\) obtained from the matching conditions and taking \(\gamma\approx 1\) (as the velocity of the traveller \(v<<1\) so \(\gamma=\frac{1}{\sqrt{1-v^{2}}}\approx 1\)), the condition for traversability reduces to
\[\frac{1}{2}\left|\frac{v^{2}}{r}\left(-0.02400019199r+0.09998799990r^{-2} \right)\right|\leq g_{earth}. \tag{24}\]
At the throat of the wormhole, the above inequality reduces to
\[v\leq 0.1414468192\sqrt{g_{earth}}. \tag{25}\]
Since the upper limit on the velocity of the traveller traversing the throat of the wormhole turns out to be a realistic one for our wormhole, we claim that the tidal forces at the throat are not too strong to disrupt the traveller and hence traversability is ensured.
### Linearized Stability Analysis
In order to perform a qualitative stability analysis of our obtained wormhole, it is assumed that the throat radius of the wormhole is a function of proper time. The throat radius is represented by \(r_{0}=x(\tau)\). The energy density has the form
\[\sigma=-\frac{1}{2\pi x}\sqrt{f(x)+\dot{x}^{2}}, \tag{26}\]
and the pressure may be expressed as
\[p=\frac{1}{8\pi}\frac{f^{\prime}(x)}{\sqrt{f(x)}}-\frac{\sigma}{2}, \tag{27}\]
such that \(f(x)=1-\frac{2M}{x}\). Here \(M\) denotes the wormhole mass.
Making use of the conservation equation, an equation of motion
\[\dot{x}^{2}+V(x)=0, \tag{28}\]
can be obtained, such that the potential \(V(x)\) can be expressed as
\[V(x)=f(x)-[2\pi x\sigma(x)]^{2}. \tag{29}\]
The objective now is to perform a stability analysis by considering a linearization around \(x_{0}\), an assumed static solution of Eq. (28).
Taylor expanding the potential around the assumed static solution \(x_{0}\) yields
\[V(x)=V(x_{0})+V^{\prime}(x_{0})(x-x_{0})+\frac{1}{2}V^{\prime\prime}(x_{0})(x-x_{0})^{2}+O[(x-x_{0})^{3}], \tag{30}\]
'prime' indicating derivative with respect to x.
Since the wormhole spacetime considered by us is static, we have \(V(x_{0})=0\) and \(V^{\prime}(x_{0})=0\). Thus to ensure stability of the wormhole, it is necessary that \(V^{\prime\prime}(x_{0})>0\). A parameter \(\beta\) is introduced, physically representing the sound speed which is defined as
\[\beta=\frac{\delta p}{\delta\sigma}. \tag{31}\]
The second derivative of the potential can be expressed in terms of the newly defined physical parameter \(\beta\) as
\[V^{\prime\prime}(x)=f^{\prime\prime}(x)-8\pi^{2}\left[(\sigma+2p)^{2}+\sigma(\sigma+p)(1+2\beta)\right]. \tag{32}\]
Expressing the stability condition for the wormhole \(V^{\prime\prime}(x_{0})>0\) in terms of \(\beta\), we have
\[\beta<\frac{\frac{f^{\prime\prime}(x_{0})}{8\pi^{2}}-(\sigma+2p)^{2}-2\sigma(\sigma+p)}{4\sigma(\sigma+p)}. \tag{33}\]
Plugging in the expressions for \(\sigma\) and \(p\), the stability criterion for the wormhole in terms of \(\beta\) takes the final form
\[\beta<\frac{x_{0}^{2}(f_{0}^{\prime})^{2}-2x_{0}^{2}f_{0}^{\prime\prime}f_{0}}{4f_{0}(x_{0}f_{0}^{\prime}-2f_{0})}-\frac{1}{2}. \tag{34}\]
The parameter \(\beta\) can be calculated for our wormhole model as
\[\beta=\frac{-2{x_{0}}^{6}+(-12\pi+9)\,m{x_{0}}^{5}+20m^{2}\left(\pi-\frac{1}{2}\right){x_{0}}^{4}}{8{x_{0}}^{5}\left(-x_{0}+2m\right)\pi\left(-x_{0}+3m\right)}. \tag{35}\]
We plot \(\beta\) for different values of \(x_{0}\) in Figure 3. The regions denoted as 1, 2 and 3 satisfy the stability criterion. Thus, we can claim that the wormhole model obtained by us in the background of an EU is stable.
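For readers who wish to reproduce the qualitative behaviour of Figure 3, Eq. (35) can be evaluated directly; the conversion of the wormhole mass to geometrized units and the sampled range of \(x_{0}\) are assumptions of this sketch.

```python
import numpy as np

def beta(x0, m):
    """Sound-speed parameter of Eq. (35); x0 and m in geometrized units (G = c = 1)."""
    num = (-2 * x0**6 + (9 - 12 * np.pi) * m * x0**5
           + 20 * m**2 * (np.pi - 0.5) * x0**4)
    den = 8 * x0**5 * np.pi * (2 * m - x0) * (3 * m - x0)
    return num / den

m = 0.8 * 1.4766                          # 0.8 solar masses expressed in km (assumed conversion)
x0 = np.linspace(2.05 * m, 10 * m, 400)   # throat locations outside the horizon radius 2m
print(beta(x0[:5], m))                    # sample values; compare with the regions in Fig. 3
```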
## 3 Discussion and Conclusion
In this paper, we explore the possibility of constructing static, traversable and spherically symmetric Lorentzian wormholes in the background of an Emergent Universe (EU). As we have discussed in the introductory section, EU cosmological models have been widely explored by cosmologists due to the absence of the initial singularity within the relativistic context. It is the most well established non-singular cosmological picture which is consistent even without modifying General Relativity. However, in order to accommodate EU in the relativistic context, a generalized EoS is introduced which describes an universe composed of exotic matter, dark energy and cosmological dust for suitable values of the model parameters. Again, it is known that in order to have an EU, it is necessary that the NEC must be violated [50]. Also, this is a necessary condition for the successful construction of a traversable wormhole [22]. In the relativistic context, exotic matter also becomes effective in constructing traversable wormholes. Therefore, the common grounds of requirement for exotic matter and violation of the NEC make it worth an attempt to explore the possibility of existence of traversable wormholes in an Emergent Universe. Also, the existence of time evolving Euclidean wormholes have been previously found in EU in the context of massive gravity [42]. This provides an additional motivation for the present investigation.
We have considered a constant redshift function to simplify our analysis. Using the generalized EoS used to describe an EU, we obtain a solution for the shape function and on plotting the shape function, it turns out that all the essential criteria that are required for a shape function to describe a traversable wormhole have been obeyed. At the throat radius \(r_{0}\), the shape function takes the value of the throat radius itself. Moreover, for all values of \(r>r_{0}\), the ratio remains \(\frac{b(r)}{r}<1\) (from Fig. 1).
The energy density and pressure inside the wormhole are obtained making use of the field equations and the conservation equation. They are found to depend on the EU parameters. Plotting the variation of \(p+\rho\) along \(r\), we find that the NEC is violated at the throat of the wormhole. This ensures that the derivative of the shape function \(b(r)\) with respect to \(r\) must be less than unity at the throat, which is known as the flaring-out condition. Satisfying this condition physically means that the throat of the wormhole does not get pinched off due to the gravitational attraction, thus preventing the wormhole structure from collapsing at the throat. This can be justified by the presence of exotic matter and dark energy components in the EU, described by the generalized EoS.
The discontinuity in the extrinsic curvature resulting from the presence of matter at the surface of the wormhole in turn gives rise to a surface stress-energy term containing surface energy density and surface pressure as its components. These components have been obtained and are also found to depend on the EU parameters. Since we have performed our analysis for a static wormhole metric, we consider the surface density and pressure to vanish at the junction. This yields the first boundary condition. The other boundary conditions can be obtained from the continuity of the metric potential and its derivative at the junction.
These matching conditions at the surface of the wormhole enable us to obtain an estimate of the unknown model parameters, namely the EU parameters \(A\) and \(B\) and also the unknown integration constant \(C_{1}\), by choosing realistic values of the parameters associated with the wormhole. The throat radius and the boundary of the wormhole are considered to be at \(0.1km\) and \(5km\), respectively. These are not unique choices but the values we have considered are reasonable. Since, exotic matter is one of the components of an EU, the wormhole in consideration will also contain some exotic matter as one of its constituents. So, it is justified to minimize the mass of the wormhole although dust matter is also a constituent. We make a reasonable assumption that the mass of the wormhole is \(0.8M_{\odot}\). Using the chosen values of the above mentioned wormhole parameters, we get the integration constant \(C_{1}=0.09998799979\). This is of relatively less physical significance.
However, it is interesting to check the values we obtain for the EU parameters, especially the parameter \(B\), as this parameter has been constrained observationally in a cosmological context [19]. If the value we
Figure 3: Variation of \(\beta\) with respect to \(x_{0}\).
obtain for the parameter \(B\) from our wormhole model happens to lie within, or close to, the constrained observational range, it would provide strong support in favor of the existence of static traversable Lorentzian wormholes in an EU. Also, for an EU, the parameter \(A\) must be negative and \(B\) must be slightly positive [17]. However, when we applied the EoS containing these two parameters to the field equations to obtain the wormhole solution, we did not put any constraint on the sign of these parameters by hand. So the obtained negativity and positivity of \(A\) and \(B\), respectively, is an indication in favor of the possibility of the existence of such wormholes in an EU. The observationally constrained value for the parameter \(B\) for an EU constituted of exotic matter, dark energy and matter is \(0.003<B<0.5996\)[19]. From our model, the obtained value from the matching condition is \(B=0.003784713921\), which is within the constrained range. This, along with the small negative value obtained for the parameter \(A\), essential to construct an EU, provides strong support in favor of the existence of wormholes in an EU.
It is also essential to check that the tidal force at the wormhole throat is not strong enough to rip apart an observer attempting to traverse the throat. The radial component of the tidal force becomes insignificant with the vanishing of the first derivative of the redshift function. However, the tangential component has a contributing term even for constant \(\nu\). Considering the upper limit for the tidal acceleration to be equal to the acceleration due to gravity on the earth, we obtain a reasonable upper limit on the velocity with which the traveller can attempt to traverse the wormhole throat safely, without getting ripped apart due to excess tidal forces. Finally, we perform a linearized stability analysis to check whether our obtained wormhole solution in an EU is stable. Plotting the parameter \(\beta\) with respect to \(x_{0}\), we show three regions of stability, which justifies that the wormhole we have obtained is a stable one. The stable regions are obtained by imposing a constraint on the sound speed in terms of the wormhole solution parameters.
There have been some recent investigations on the possibilities of detecting wormholes [51; 52; 53; 54]. As we know, wormholes are asymptotically flat tubular structures connecting two different spacetimes in which the individual fluxes might not be conserved, as a result of which there will be mutual observable effects on objects present within close distances of both wormhole mouths [51]. It may be possible to detect such effects on the orbits of stars in close proximity to the black hole present at the centre of our galaxy. Wormholes may also produce microlensing effects that might appear identical to gamma-ray bursts, and these effects can lead to an upper limit on the mass density of wormholes using the BATSE data [52]. The scattering properties near a traversable, rotating wormhole can be explored by considering the quasinormal ringing of black holes. Asymmetric wormholes would be characterized by superradiance, while symmetric wormholes differ from black holes through non-identical simultaneous ringing at a number of dominant multipoles [53]. If the redshift function of the wormhole is variable, the radial tidal forces do not vanish, resulting in the possible detection of long-lived quasinormal modes dubbed 'quasi-resonances' in the background of the wormhole [54].
We may conclude that not only can time-evolving Euclidean wormholes exist in the EU scenario in the massive gravity context, but static, traversable Lorentzian wormholes can also be accommodated in an EU within a relativistic context. The EoS describing an EU in the relativistic context is capable of producing stable, static, traversable wormhole solutions violating the NEC. Moreover, the sign of the EU parameter \(A\) is obtained as desired for an EU, and the numerical value of the EU parameter \(B\) required for a consistent wormhole model is estimated to lie within the range of values constrained observationally in a cosmological context. Therefore, it can be concluded that Lorentzian wormholes are naturally present in an Emergent Universe without taking into account either any additional matter field (like our previous wormhole model supported by a tachyonic field [33]) or any modification to standard GR (like our previous wormhole model in the braneworld gravity context [26]).
## 4 Acknowledgement
MK and BCP are thankful to the Inter-University Centre for Astronomy and Astrophysics (IUCAA), Pune, India for providing the Visiting Associateship under which a part of this work was carried out. RS is thankful to the Govt. of West Bengal for financial support through the SVMCM scheme. SG is thankful to the Directorate of Legal Metrology under the Department of Consumer Affairs, West Bengal for their support.
|
2301.04492 | Fast conformational clustering of extensive molecular dynamics
simulation data | We present an unsupervised data processing workflow that is specifically
designed to obtain a fast conformational clustering of long molecular dynamics
simulation trajectories. In this approach we combine two dimensionality
reduction algorithms (cc\_analysis and encodermap) with a density-based spatial
clustering algorithm (HDBSCAN). The proposed scheme benefits from the strengths
of the three algorithms while avoiding most of the drawbacks of the individual
methods. Here the cc\_analysis algorithm is for the first time applied to
molecular simulation data. Encodermap complements cc\_analysis by providing an
efficient way to process and assign large amounts of data to clusters. The main
goal of the procedure is to maximize the number of assigned frames of a given
trajectory, while keeping a clear conformational identity of the clusters that
are found. In practice we achieve this by using an iterative clustering
approach and a tunable root-mean-square-deviation-based criterion in the final
cluster assignment. This allows to find clusters of different densities as well
as different degrees of structural identity. With the help of four test systems
we illustrate the capability and performance of this clustering workflow:
wild-type and thermostable mutant of the Trp-cage protein (TC5b and TC10b),
NTL9 and Protein B. Each of these systems poses individual challenges to the
scheme, which in total give a nice overview of the advantages, as well as
potential difficulties that can arise when using the proposed method. | Simon Hunkler, Kay Diederichs, Oleksandra Kukharenko, Christine Peter | 2023-01-11T14:36:43Z | http://arxiv.org/abs/2301.04492v1 | # Fast conformational clustering of extensive molecular dynamics simulation data
###### Abstract
We present an unsupervised data processing workflow that is specifically designed to obtain a fast conformational clustering of long molecular dynamics simulation trajectories. In this approach we combine two dimensionality reduction algorithms (cc_analysis and encodermap) with a density-based spatial clustering algorithm (HDBSCAN). The proposed scheme benefits from the strengths of the three algorithms while avoiding most of the drawbacks of the individual methods. Here the cc_analysis algorithm is for the first time applied to molecular simulation data. Encodermap complements cc_analysis by providing an efficient way to process and assign large amounts of data to clusters. The main goal of the procedure is to maximize the number of assigned frames of a given trajectory, while keeping a clear conformational identity of the clusters that are found. In practice we achieve this by using an iterative clustering approach and a tunable root-mean-square-deviation-based criterion in the final cluster assignment. This allows to find clusters of different densities as well as different degrees of structural identity. With the help of four test systems we illustrate the capability and performance of this clustering workflow: wild-type and thermostable mutant of the Trp-cage protein (TC5b and TC10b), NTL9 and Protein B. Each of these systems poses individual challenges to the scheme, which in total give a nice overview of the advantages, as well as potential difficulties that can arise when using the proposed method.
+
Footnote †: preprint: APS/123-QED
## I Introduction
With the ever-growing power of computers over the last decades, researchers in the field of molecular dynamics (MD) have gotten access to increasingly long trajectories and thereby to increasingly large amounts of data. The introduction of supercomputers which are specifically designed to generate MD trajectories (Anton [1] and Anton 2 [2]) is only the latest high point in this development. Furthermore, new sampling methods [3; 4] as well as distributed computing projects, such as Folding@home [5], have contributed to a massive increase in generated simulation trajectories. With this increasing amount of data it is essential to have powerful analysis tools to process and understand underlying systems and processes.
There is a rapid increase in application of unsupervised machine learning methods to analyze molecular simulation data. Two of the most used families of analysis techniques are clustering and dimensionality reduction (DR) algorithms. They help to find low-dimensional subspaces in which important aspects of the original data are preserved and to group the data based on a given similarity measure/metric and thereby gain a better overview and understanding. In practice, most of the times clustering and DR methods are used in combination. The DR algorithms can be roughly divided into: linear methods (the most known are principal component analysis (PCA) [6; 7] and time-lagged independent component analysis (TICA) [8; 9]), nonlinear methods (kernel and nonlinear PCA, multidimensional scaling (MDS) [10; 11] and MDS-based methods like sketch-map [12], isomap [13], diffusion maps [14; 15] or UMAP [16], etc.) and autoencoder-based approaches like (encodermap [17; 18], time-autoencoder [19], variational autoencoders [20] and Gaussian mixture variational autoencoders [21]). In terms of clustering algorithms, there are again a wide range of different methods: K-Means [22; 23], spectral-clustering [24], DBSCAN [25], density-peak clustering [26], CNN-clustering [27], root-mean-square deviation (RMSD) based clustering [28], neural-networks-based VAMPnets [29], etc. For a comprehensive overview of unsupervised ML methods commonly used to analyse MD simulation data we refer to Ref. [30].
Even from this incomplete list of available methods it should become obvious that there are many different clustering as well as DR methods. All these methods have their individual strengths and weaknesses. But there are still open challenges in the successful usage of the listed methods for processing simulation data with high spatial and temporal resolution. This is connected either to the proper choice of hyper-parameters (such as the number of dimensions for DR methods, the number of expected states for some clustering algorithms, neural-network architectures, different cut-offs, correlation times, etc.), expensive optimisation steps, or the amount of data that can be processed simultaneously. In this work we present a data processing scheme which combines three different algorithms in one workflow to create a powerful clustering machinery. It tackles a number of the mentioned challenges as it has a way to define an appropriate lower dimensionality of the data, does not require a priori information about the expected number
of states and it is fast in processing extensive MD trajectories with both a very high dimensionality and a large number of observations. It is specifically designed to find conformational clusters in long molecular simulation data (Fig. 1).
We use two different DR algorithms, namely an algorithm called "cc_analysis" and the encodermap algorithm. The cc_analysis method belongs to the family of MDS-based techniques and was first introduced for the analysis of crystallographic data [31; 32]. Here it is used for the first time for projecting data of protein conformations. The dimensionality of the cc_analysis space which is typically required is more than two (10 to 40 for the systems shown in this work), and the amount of data that can be efficiently projected simultaneously is limited by the available memory (about 50,000 frames for a 72 GB workstation). To process much longer trajectories and to obtain a two-dimensional representation we use the second DR algorithm, encodermap [33]. Its loss function, however, consists of two parts: the autoencoder loss and an MDS-like distance loss, which introduces interpretability to the resulting 2D representation. Moreover, once the encodermap network is trained, the encoder function can be used to project data to the 2D map in an extremely efficient way. We use encodermap to project data into 2D and for a fast assignment of additional members to the clusters defined in the cc_analysis space. Finally, we use the HDBSCAN algorithm [34] to cluster the data in the cc_analysis space and then visualize the resulting clusters in the 2D encodermap space. HDBSCAN is a combination of density-based and hierarchical clustering that can work efficiently with clusters of varying density, ignores sparse regions, and requires only a minimal number of hyper-parameters. We apply it in a non-classical iterative way with varying RMSD cutoffs to extract protein conformations of different degrees of similarity.
The combination of these three algorithms allows us to leverage their different strengths, while avoiding the drawbacks of the individual methods. Subsequently we will show how the scheme performs on long MD trajectories of wild-type and mutated Trp-cage with native and misfolded meta-stable states (208 \(\mu\)s and 3.2 \(\mu\)s long simulations); really extensive simulations of NTL9 (1877 \(\mu\)s); and Protein B, where only a small percent of the simulation data (5%) is in the folded state (104 \(\mu\)s).
## II Methods
### cc_analysis
For dimensionality reduction, we use the cc_analysis algorithm introduced in Ref. [31; 32]. This algorithm was originally developed to analyse crystallographic data, where the presence of noise and missing observations pose a challenge to data processing in certain experimental situations. The method separates the inter-data-set influences of random error from those arising from systematic differences, and reveals the relations between high-dimensional input features by representing them as vectors in a low-dimensional space. Due to this property we expected it to be highly applicable to protein simulation data, where one seeks to ignore the differences arising from random fluctuations and to separate the conformations based on systematic differences. In the course of the manuscript we show that this assumption proved to be correct.
Figure 1: Data processing routine presented in this article.
The cc_analysis algorithm belongs to the family of MDS methods [10]. Its main distinction is that it minimizes the sum of squared differences between Pearson correlation coefficients of pairs of high-dimensional descriptors and the scalar product of the low-dimensional vectors representing them (see Eq. (1)). The procedure places the vectors into a unit sphere within a low-dimensional space. Systematic differences between the high-dimensional features lead to differences in the angular directions of the vectors representing them, and purely random differences of data points lead to different vector lengths at the same angular direction. The algorithm minimizes, e.g. iteratively using L-BFGS [35], the expression
\[\Phi(\mathbf{x})=\sum_{i=1}^{N-1}\sum_{j=i+1}^{N}\left(r_{ij}-x_{i}\cdot x_{j}\right)^{2} \tag{1}\]
as a function of \(\mathbf{x}\), the column vector of the N low-dimensional vectors \(\{x_{k}\}\). Here \(r_{ij}\) is the correlation coefficient between descriptors \(i\) and \(j\) in the high-dimensional space and \(x_{i}\cdot x_{j}\) denotes the dot product of the unit vectors \(x_{i}\) and \(x_{j}\) representing the data in the low-dimensional space; \(N\) is the number of observations, e.g. protein conformations. The output of cc_analysis is the N low-dimensional vectors \(\{x_{k}\}\), and the eigenvalues of the \(\mathbf{xx}^{T}\) matrix.
To understand why this is a sensible approach, one can think about obtaining the least squares solution of Eq. (1) algebraically by eigenanalysis of the matrix \(\mathbf{r}=\{r_{ij}\}\). In that case one would have to solve
\[\mathbf{xx}^{T}=\mathbf{r}\]
where \(\mathbf{r}\) is the matrix of the correlation coefficients \(r_{ij}\). The \(n\) strongest eigenvalue/eigenvector pairs (eigenvectors corresponding to the largest eigenvalues) could then be used to reconstruct the \(N\) vectors \(x_{i}\), which are located in a \(n\)-dimensional unit sphere. The systematic differences between the input data are thereby shown by the different angular directions in this low-dimensional sphere. This approximation is meaningful because in general the Pearson correlation coefficient can be written as a dot product between two vectors (after subtraction of the mean and dividing by the standard deviation to scale the vectors to unit length) and is equal to the cosine of the angle between them. Hence, in an ideal scenario, \(\sum_{i,j}^{N}x_{i}\cdot x_{j}\) can exactly reproduce the high-dimensional correlation coefficient matrix and \(\Phi(\mathbf{x})\) in Eq. (1) would be equal to zero.
The length of the vectors is less important than the angle between them. The latter has an inherent meaning: two high-dimensional feature vectors with a correlation coefficient of zero between them would be projected to unit vectors at \(90^{\circ}\) angles with respect to the origin, and two feature vectors with a correlation coefficient of one would have a corresponding angle of zero degrees.
Despite the generality of the cc_analysis approach, it has so far only been applied to crystallographic data [36; 37] and protein sequence grouping [38]. Here we present a first application of cc_analysis to the analysis of protein simulation data.
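To make the above procedure concrete, the following minimal sketch shows how low-dimensional vectors can be fitted to the correlation matrix of Eq. (1) with an off-the-shelf L-BFGS routine. It is written for illustration only and is not the authors' reference implementation; the function name, the random initialisation, and the 1/2 prefactor (which does not change the minimiser) are our own choices.

```python
# Illustrative cc_analysis-style projection (not the authors' reference code).
import numpy as np
from scipy.optimize import minimize


def cc_analysis_sketch(features, n_dim=20, seed=0):
    """features: (N, D) high-dimensional descriptors, e.g. pairwise C-alpha distances."""
    N = features.shape[0]
    r = np.corrcoef(features)                # (N, N) Pearson correlations, the r_ij of Eq. (1)
    iu = np.triu_indices(N, k=1)             # each pair i < j counted once

    def loss_and_grad(flat_x):
        x = flat_x.reshape(N, n_dim)
        dots = x @ x.T                       # x_i . x_j for all pairs
        resid = np.zeros((N, N))
        resid[iu] = r[iu] - dots[iu]
        resid = resid + resid.T              # symmetric residual matrix, zero diagonal
        # The 1/2 prefactor only rescales Eq. (1) and leaves the minimiser unchanged.
        loss = 0.5 * np.sum((r[iu] - dots[iu]) ** 2)
        grad = -resid @ x                    # gradient with respect to every x_i
        return loss, grad.ravel()

    rng = np.random.default_rng(seed)
    x0 = 0.1 * rng.standard_normal((N, n_dim))
    res = minimize(loss_and_grad, x0.ravel(), jac=True, method="L-BFGS-B")
    return res.x.reshape(N, n_dim)           # the N low-dimensional vectors {x_k}
```

For the systems discussed below, `features` would hold the pairwise C\({}_{\alpha}\) distances of a randomly drawn subset of frames.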
### Encodermap
To accelerate the processing of large datasets, e.g. from extensive simulations, we make use of one more dimensionality reduction technique in addition to cc_analysis: encodermap. It was developed by Lemke and Peter [33] and is used here for the fast assignment of data points to clusters, as will be explained in detail in Sec. II.4. The method combines the advantages of a neural network autoencoder [17] with an MDS contribution, here the loss function from the sketch-map algorithm [12] (Fig. 2). This combination is exceptionally efficient for projecting large simulation data sets to two-dimensional representations: the sketch-map loss function allows one to concentrate only on relevant dissimilarities between conformations (ignoring thermal fluctuations and coping with the large dissimilarity values caused by the data's high dimensionality). Furthermore, the autoencoder approach avoids the complex minimisation steps of the sketch-map projection and makes it possible to process huge amounts of data in a very short time.
The encodermap loss function \(L_{encodermap}\) (Eq. (2)) is a weighted sum of the autoencoder loss \(L_{auto}\) (Eq. (3)) and the sketch-map loss function \(L_{sketch}\) (Eq. (4)), which emphasizes mid-range distances by transforming all distances _via_ a sigmoid function (Eq. (5)).
\[L_{encodermap}=k_{a}L_{auto}+k_{s}L_{sketch}+Reg, \tag{2}\]
\[L_{auto}=\frac{1}{N}\sum_{i=1}^{N}D(X_{i},\tilde{X}_{i}), \tag{3}\]
\[L_{sketch}=\frac{1}{N}\sum_{i\neq j}^{N}[SIG_{h}(D(X_{i},X_{j}))-SIG_{l}(D(x_ {i},x_{j}))]^{2}, \tag{4}\]
where \(k_{a}\), \(k_{s}\) are adjustable weights, \(Reg\) is a regularization used to prevent overfitting; \(N\) is a number of data points to be projected; \(D(\cdot,\cdot)\) is a distance between points, \(X\) is a high-dimensional input, \(x\) is a low-dimensional projection (the bottleneck layer); \(SIG_{h}\) and \(SIG_{l}\) are sigmoid functions of the form shown in Eq. (5).
\[\text{SIG}_{\sigma,a,b}(D)=1-\left(1+(2^{\frac{a}{b}}-1)\left(\frac{D}{\sigma}\right)^{a}\right)^{-\frac{b}{a}}, \tag{5}\]
where \(a\), \(b\) and \(\sigma\) are parameters defining which distances to preserve.
Figure 2: Schematic description of encodermap. It has an architecture of the classic autoencoder consisting of two neural networks (encoder and decoder) with the same number of layers and neurons in each layer connected through the bottleneck layer with two neurons. In addition to autoencoder loss \(L_{a}(X,\tilde{X})\) encodermap loss has a term with the sketch-map loss function \(L_{s}(X,x)\), which improves the quality of two-dimensional projection obtained in the bottle-neck layer (see Eq. (2)).
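As an illustration of how the terms of Eqs. (2)-(5) fit together, the NumPy sketch below evaluates the combined loss for a small batch of conformations. It is not the encodermap package itself, which implements these terms as a trainable neural network; the distance metric used for the autoencoder term and the sigmoid parameter values are placeholder assumptions.

```python
# NumPy sketch of the encodermap loss terms in Eqs. (2)-(5); illustration only.
import numpy as np


def sig(d, sigma, a, b):
    """Sketch-map sigmoid of Eq. (5)."""
    return 1.0 - (1.0 + (2.0 ** (a / b) - 1.0) * (d / sigma) ** a) ** (-b / a)


def encodermap_loss(X, x, X_rec, k_a=1.0, k_s=1.0,
                    high=(4.5, 12.0, 6.0), low=(1.0, 2.0, 6.0)):
    """X: (N, D) inputs, x: (N, 2) latent points, X_rec: (N, D) decoder output.
    `high`/`low` are (sigma, a, b) sigmoid parameters; the values here are placeholders."""
    N = X.shape[0]
    # Autoencoder reconstruction term, Eq. (3), here with a squared Euclidean D(.,.)
    l_auto = np.mean(np.sum((X - X_rec) ** 2, axis=1))
    # Pairwise distances in the input and in the 2D latent space (fine for a small batch)
    D_hi = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    D_lo = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    mask = ~np.eye(N, dtype=bool)            # i != j
    # Sketch-map distance term, Eq. (4)
    l_sketch = np.sum((sig(D_hi[mask], *high) - sig(D_lo[mask], *low)) ** 2) / N
    return k_a * l_auto + k_s * l_sketch     # Eq. (2), regularisation omitted
```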
### Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN)
The HDBSCAN [34; 39] can be approached from two different sides: it can be described as a hierarchical implementation of a new formulation of the original DBSCAN [25] algorithm called DBSCAN* by J. G. B. Campello _et al._[34] or it can be formulated as a robust version of single-linkage clustering with a sophisticated method to obtain a flat clustering result, as done by McInnes _et al._[39]. Here we describe it through the second approach.
In the first step the algorithm introduces the so-called mutual reachability distance (MRD) (Eq. (6)), which transforms the space to make sparse points even sparser but does not significantly change the distance between already dense points.
\[D_{mreach-k}(x_{i},x_{j})=\max\{core_{k}(x_{i}),core_{k}(x_{j}),D(x_{i},x_{j})\}, \tag{6}\]
where \(x_{i}\), \(x_{j}\) are points being clustered, \(k\) is a constant which determines the number of nearest neighbouring points, \(core_{k}(x)\) is a function that returns the maximum distance between a point \(x\) and its \(k\) nearest neighbours, and \(D(\cdot,\cdot)\) is the distance between two points. The maximum of these three distances is selected as the MRD (Fig. 3 i)).
In the next step, the minimum spanning tree based on the MRDs is built via Prim's algorithm [40] (see Fig. 3 ii)). This is done by starting with the lowest MRD in the data set and connecting the two points by an edge. In each following step, the nearest data point that is not yet connected to the existing tree is added to it.
Once the minimum spanning tree is generated, the cluster hierarchy can be built. This is done by first sorting the edges of the tree by distance. The algorithm then iterates over the edges, always merging the clusters connected by the smallest MRD. The result of this procedure can be seen in Fig. 3 iii).
In order to extract a flat clustering from this hierarchy, a final step is needed. In this step the cluster hierarchy is condensed down by defining a minimum cluster size and checking at each splitting point whether the newly forming cluster has at least as many members as the minimum cluster size. If that is the case, the new cluster is accepted; if not, the data points splitting off are considered noise. The condensed tree of a toy system can be seen in Fig. 3 iv).
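The mutual reachability transformation of Eq. (6) and a typical flat-clustering call can be sketched as follows. In practice the space transformation, minimum spanning tree and condensed tree are handled internally by the `hdbscan` Python package; the parameter values and the random stand-in data below are only illustrative.

```python
# Eq. (6) and a typical HDBSCAN call; data and parameters are illustrative only.
import numpy as np
import hdbscan  # https://github.com/scikit-learn-contrib/hdbscan


def mutual_reachability(D, k=5):
    """Mutual reachability distances of Eq. (6) from a square distance matrix D."""
    # core_k(x): distance to the k-th nearest neighbour (column 0 of each sorted row
    # is the point itself, with distance zero)
    core = np.sort(D, axis=1)[:, k]
    return np.maximum(np.maximum(core[:, None], core[None, :]), D)


# The hdbscan package performs the whole hierarchy construction internally when
# clustering, e.g., the cc_analysis coordinates:
rng = np.random.default_rng(0)
coords = rng.standard_normal((5000, 20))     # stand-in for a cc_analysis projection
labels = hdbscan.HDBSCAN(min_cluster_size=100, min_samples=10).fit_predict(coords)
# labels == -1 marks points that HDBSCAN treats as noise
```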
### Introduction of a new clustering workflow
In this article we present a data processing routine which we found to be extremely efficient for large molecular dynamics simulation trajectories. It relies on the three algorithms introduced above. A schematic description is given in Fig. 1. In this workflow a given data set is clustered iteratively until either a specified amount of data points are assigned to clusters or a maximum number of iterations have been reached.
Fig. 1 illustrates the sequence of data processing steps along the clustering workflow. In the first step a high-dimensional collective variable (CV) is chosen. For all systems that are shown in this article all pairwise distances between the C\({}_{\alpha}\) atoms were selected. After a CV has been chosen, for trajectories containing more than 25,000 frames, encodermap is trained on all data. Thereby we obtain a function which can be used to project data very efficiently to the newly generated 2D space. In parallel, a random subset from the entire data set is generated. The reason to use such a subset is a limitation that comes with the cc_analysis dimensionality reduction. As mentioned in Sec. II.1 the cc_analysis algorithm works with the correlation matrix. This means that the Pearson correlation coefficients of the selected CV (here the pairwise c-alpha distances) are calculated for all unique pairs of frames, and used as input to cc_analysis. However the larger a data set is, the larger the correlation coefficient matrix will be, until it is no longer efficient to work with that matrix due to very long computation times as well as memory issues. Therefore a subset is created, by randomly selecting up to 25,000 data points from the entire data set. This subset is then used in the cc_analysis dimensionality reduction to project the high dimensional CVs (between 190 and 1081 dimensions for the systems in this article) to a
lower dimensional subspace (20 to 30 dimensions for the systems in this article). The choice of the appropriate amount of reduced dimensions is done by searching for a spectral gap among the cc_analysis eigenvalues. Once the cc_analysis space has been identified, a clustering is generated by applying the HDBSCAN algorithm to that lower dimensional data. A detailed description on how to choose the dimensionality for cc_analysis and the parameters for HDBSCAN is given in the supporting information (SI), Sec. S-I.
Figure 3: Application of HDBSCAN on a toy data set with three clusters. i) Example for the computation of the MRD for two points (red and blue). The red and blue circles indicate the farthest distance to the 5 nearest neighbours for both points. One can see that the distance between the red and blue points (green line) is larger than both the radii of the blue and the red circle. Therefore in this case the green line distance is chosen as MRD. ii) The minimum spanning tree based on the MRDs. iii) The cluster hierarchy. iv) The condensed clustering.
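A simple heuristic for the spectral-gap criterion mentioned above could look like the sketch below; the exact procedure used by the authors is described in their SI, so the search range and gap definition here are assumptions.

```python
# One possible spectral-gap heuristic for choosing the cc_analysis dimensionality.
import numpy as np


def dims_from_spectral_gap(eigvals, search_range=(5, 40)):
    """eigvals: eigenvalues returned by cc_analysis; returns a candidate dimensionality."""
    ev = np.sort(np.asarray(eigvals, dtype=float))[::-1]   # descending order
    lo, hi = search_range
    hi = min(hi, ev.size - 1)
    gaps = ev[lo - 1:hi] - ev[lo:hi + 1]    # drop after keeping d = lo, ..., hi dimensions
    return int(lo + np.argmax(gaps))        # dimensionality with the largest spectral gap
```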
We use two different DR algorithms in the workflow due to the following reasons. For once, the cc_analysis algorithm is used to project the smaller subsets to a still comparably high-dimensional subspace, which holds more information compared to the 2D projection of encodermap. This higher dimensional subspace is therefore very well suited to be the clustering space. Once the data subset is clustered in the cc_analysis space, the 2D encodermap space is used to assign the points that were not a part of the subset to the found clusters. The 2D projection is very well suited to do a fast assignment of additional points from the data set, as well as to serve for visualization purposes. Additionally, encodermap is able to project huge data sets very time-efficiently. Hence, the generated 2D projection of a given data set can be used to avoid the main disadvantage of the cc_analysis algorithm in the way we use the algorithm here, which is having to use subsets of the data due to memory issues. In order to circumvent this disadvantage, we build a convex hull in the 2D space for each cluster that was found in the cc_analysis space. If an unassigned point lies inside a convex hull, the RMSD to the central conformation of that cluster is computed. In case the RMSD is inside a given cutoff, the data point is considered to be part of that cluster, else it is not assigned to the cluster. This RMSD cutoff is chosen by taking the weighted mean of all average internal cluster RMSDs 1 of the first clustering iteration. We found that this procedure generates structurally quite well defined clusters with a low internal cluster RMSD since the RMSD criterion is based on well defined conformational states that emerged from cc_analysis combined with HDBSCAN. However there is also the possibility to identify more fuzzy clusters that only share a general structural motif by using a larger RMSD cutoff for the assignment. An example of the identification of such fuzzy clusters is described in Sec. III.2.
Footnote 1: By the average internal cluster RMSD we mean the average RMSD of all conformations to the cluster centroid.
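The assignment of additional conformations via the convex hull and RMSD criterion could be sketched as follows. All function and argument names are hypothetical, and the RMSD computation is delegated to a user-supplied callable (e.g. wrapping MDTraj or MDAnalysis after superposition), since the text does not prescribe a specific implementation.

```python
# Hypothetical helper for the convex-hull + RMSD assignment step; names are assumptions.
import numpy as np
from scipy.spatial import Delaunay


def assign_to_clusters(emap_2d, cluster_points_2d, centroid_idx, rmsd_to, cutoff):
    """emap_2d:           (N, 2) encodermap projection of the not-yet-assigned frames
    cluster_points_2d:    dict {cluster_id: (M, 2) projections of that cluster's members}
    centroid_idx:         dict {cluster_id: index of the cluster's central conformation}
    rmsd_to:              callable(frame_indices, ref_index) -> array of RMSDs
    cutoff:               RMSD acceptance cutoff (same units as rmsd_to)"""
    labels = np.full(len(emap_2d), -1, dtype=int)        # -1 = left unassigned
    for cid, pts in cluster_points_2d.items():           # needs >= 3 non-collinear points
        hull = Delaunay(pts)                             # convex-hull membership test
        inside = np.flatnonzero(hull.find_simplex(emap_2d) >= 0)
        if inside.size == 0:
            continue
        rmsds = rmsd_to(inside, centroid_idx[cid])       # only computed for points in the hull
        accept = inside[(rmsds <= cutoff) & (labels[inside] == -1)]
        labels[accept] = cid
    return labels
```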
By introducing a RMSD criterion in the last step, we force the clustering to only include structurally very similar conformations in the respective clusters. There are of course various other clustering algorithms, which group MD data sets based on their RMSD values, e.g. an implementation [28] in the GROMACS software package [41]. Such RMSD-based clustering algorithms have been used in the MD community for at least 20 years now and they are a very obvious choice for conformational clusterings of MD trajectories. They directly compare the positions of specified atoms in various conformations of a molecule and then group the individual conformations along the trajectory using a cutoff value. However these methods generally rely on the full RMSD matrix of a given data set. For larger trajectories it becomes almost infeasible to compute these matrices due to extremely long computation times as well as memory issues that arise when working with such large matrices. In our workflow we can circumvent these issues by only having to compute the RMSD between the coordinates of C\({}_{\alpha}\) atoms of the central conformations of each cluster and the data points that lie inside the convex hull of the respective clusters in the 2D space.
In case a given system has fewer than about 50,000 frames, it is in principle also possible to omit the encodermap part, since the cc_analysis algorithm is able to handle the entire data set. If this approach is chosen, the user can either skip the RMSD criterion entirely, or the members of clusters that are found in the cc_analysis space can still be accepted/rejected based on an RMSD cutoff. An advantage of using the cc_analysis and encodermap algorithms together is the possibility to check the dimensionality reduction steps on the fly. Since the clustering is done in one DR space but visualized in the other, narrow and well-defined clusters in the 2D space indicate that the 2D map separates the different conformational clusters nicely and that the chosen encodermap parameters were therefore well selected.
Our clustering scheme is not very dependent on the quality of the encodermap projection, as the projection is used only to assign additional structures to already identified clusters: the clustering itself is done in the higher-dimensional cc_analysis space, and the final cluster assignment uses an RMSD cutoff. The only requirement that the scheme poses on the 2D map is that similar conformations are located close to each other in the map. This is achieved by the MDS-like distance loss part of the overall encodermap loss function.
Remaining points which were not assigned to any cluster after one clustering iteration are then used as a new pool of data, from which the new random subset is built. This whole cycle is repeated until a certain fraction of the data points is assigned to clusters or until a certain number of clustering iterations have been performed. To decide on a stopping point for the iterative procedure we rely on two possible convergence criteria: either the percentage of assigned conformations or the average cluster size found in an iteration. If we observe a plateau in the percentage of unassigned data points during several successive iterations, the clustering procedure is stopped. Due to the design of our workflow, the average cluster size of newly added clusters will decrease with each iteration. Therefore, the average size of newly added clusters or the convergence of this property during successive iterations can also be used as a stopping criterion. Examples are shown in SI, Sec. S-II, Fig. S2.
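Putting the pieces together, one possible organisation of the iterative cycle of Fig. 1 is sketched below. It reuses the illustrative helpers defined earlier (`cc_analysis_sketch`, `assign_to_clusters`) and assumes a trained encodermap encoder `encode2d` and an RMSD routine `rmsd_to`; the stopping thresholds, HDBSCAN parameters and the use of the first member as a stand-in for the central conformation are simplifications, not the authors' settings.

```python
# Sketch of one possible driver for the iterative workflow of Fig. 1 (illustration only).
import numpy as np
import hdbscan


def run_workflow(cvs, encode2d, rmsd_to, cutoff, subset_size=25_000,
                 n_dim=20, max_iter=10, target_assigned=0.8, seed=0):
    rng = np.random.default_rng(seed)
    emap = encode2d(cvs)                               # (N, 2) projection of all frames
    labels = np.full(len(cvs), -1, dtype=int)
    next_id = 0
    for _ in range(max_iter):
        pool = np.flatnonzero(labels == -1)
        if pool.size == 0:
            break
        subset = rng.choice(pool, size=min(subset_size, pool.size), replace=False)
        low = cc_analysis_sketch(cvs[subset], n_dim=n_dim)
        sub = hdbscan.HDBSCAN(min_cluster_size=100, min_samples=10).fit_predict(low)
        hulls, centroids = {}, {}
        for c in np.unique(sub[sub >= 0]):
            members = subset[sub == c]
            hulls[next_id] = emap[members]             # 2D footprint used for the convex hull
            centroids[next_id] = members[0]            # stand-in for the central conformation
            labels[members] = next_id
            next_id += 1
        unassigned = np.flatnonzero(labels == -1)
        new = assign_to_clusters(emap[unassigned], hulls, centroids,
                                 lambda idx, ref: rmsd_to(unassigned[idx], ref), cutoff)
        labels[unassigned] = new
        if np.mean(labels >= 0) >= target_assigned:    # simple convergence criterion
            break
    return labels
```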
## III Results and Discussion
### Description of the proteins' trajectories used for the analysis
In order to illustrate the capability and performance of the proposed scheme, we chose four test systems: 40 temperature replica exchange (RE) trajectories of the Trp-cage protein (TC5b) analysed in the original encodermap paper [33]; the other three systems are long trajectories of Trp-cage (TC10b), NTL9 and Protein B simulated by the Shaw group on the Anton supercomputer [42] and generously provided by them. The four systems are listed in Table 1. For all the systems we chose distances between C\({}_{\alpha}\) atoms as the input collective variables.
The first protein we analyse in this work is the Trp-cage system (TC5b) (Trp-cage RE). It is a comparatively small protein (20 residues) which has a very stable native state when simulated at room temperature. The combination of 40 temperature replica exchange trajectories (temperature range from 300 to 570 K, 3.2 \(\mu\)s of simulation time, 1,577,520 frames) gives a very diverse mixture of structures, including trajectories where the system is very stable and barely moves away from the native state, as well as highly disordered trajectories where high-energy conformations are visited. This combination of conformations makes the data set extremely diverse and complicated to analyse due to the high number of expected clusters with extremely varying size and density.
Secondly we consider the K8A mutant of the thermostable Trp-cage variant TC10b (Trp-cage Anton) simulated by Lindorff-Larsen _et al._ [42] (208 \(\mu\)s; 1,044,000 frames). This simulation was run at 290 K and produced a much more disordered trajectory compared to the low-temperature replica simulations of the TC5b system. Despite the fact that TC5b and the K8A mutant of TC10b have slightly different amino acid sequences, we use the same trained encodermap to project both systems onto the same 2D map (see Fig. 4 and Fig. 5), since both systems have the same number of residues and therefore the same dimensionality of CVs. This offers the opportunity to demonstrate that different systems can be compared to each other very nicely when projected to the same 2D space.
Next we probed our clustering scheme with extremely long (1877 us 2; 9,389,654 frames) simulations [42] of the larger (39 amino acids) N-terminal fragment of ribosomal protein L9 (NTL9) which has an incredibly stable native state. Besides the possibility to show how the algorithm deals with this extremely large data set, the system has also been studied by several other researchers [29; 44]. This allows us to compare our results to their findings. Schwantes and Pande [44] reported on very low populated states which involve register-shifts between the residues that are involved in the formation of the beta sheet structures of NTL9. This opens the opportunity to show whether our clustering workflow is able to identify both very large states, as well as extremely lowly populated states in the same data set.
Footnote 2: We used the trajectories 0, 2 and 3 according to the nomenclature of Ref. [42]. We have not used trajectory 1 because the topology file for this specific trajectory differs slightly from the other three in terms of the order and the numbering of the atoms. This issue has also been reported by other researchers [43].
Lastly we chose to analyse the Protein B simulations (104 \(\mu\)s; 520,250 frames) [42]. Compared to the aforementioned proteins, Protein B does not have a single very stable state; instead, the three helices can move quite easily against each other. This leads to a broad conformational space, where the energy barriers between the individual states are very small. Therefore the individual conformational states are not easily separable and rather fade/transition into each other. Taking into account the long simulation time, this system is very hard to cluster conformationally.
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline \hline
 & Trp-cage RE & Trp-cage Anton & NTL9 & Protein B \\ \hline
Trajectory length [\(\mu\)s] & 3.2 & 208 & 1877 & 104 \\ \hline
Number of frames & 1,577,520 & 1,044,000 & 9,389,654 & 520,250 \\ \hline
Input CVs dimensionality & 190 & 190 & 703 & 1081 \\ \hline
Number of cc\_analysis dimensions & 20 & 20 & 20 & 30 \\ \hline
Average iteration time on our local workstation (see SI, Sec. S-V) [min] & 15 & 18 & 55 & 12 \\ \hline
Average iteration time over all used CPU threads [min] & 24 x 15 = 360 & 24 x 18 = 432 & 24 x 55 = 1320 & 24 x 12 = 288 \\ \hline
Conformations assigned to clusters after 10 iterations & 60\% & 33.1\% & 80.9\% & 20\% \\ \hline
Total CPU time [min] & 3600 & 4320 & 13200 & 2880 \\ \hline
\end{tabular}
\end{table}
Table 1: Proteins analysed in this study and performance overview of the clustering scheme.
To demonstrate how our clustering scheme works we chose to apply it to these four systems that pose very diverse challenges (e.g. an extremely large data set, both highly and very lowly populated states in the same data, differences in the amount of folded/unfolded conformations along the trajectories). For each of the systems we initially conducted the same amount of clustering iterations (10) and then evaluated the resulting clustering and decided whether for a given system additional iterations were needed.
### Trp-cage
_TC5b._ For the RE simulations of the Trp-cage protein, the clustering scheme was run over 10 iterations and assigned 60.5% of all conformations to clusters. Fig. 4 shows an encodermap projection of all 40 replicas with some of the most populated clusters found after 10 iterations and representative conformations of these clusters. Similar conformations are grouped together and rare structures are spread out across the map. For example, the native conformation of Trp-cage RE (33.4%
of all conformations) is shown in the bottom right of the 2D map in Fig. 4. On the bottom left conformations with one turn near the middle of the backbone are located. The two parts of the backbone chain of these conformations lie right next to each other and partially form beta-sheet structures.
Figure 4: Trp-Cage TC5b (40 temperature RE trajectories): Exemplary conformations of the most populated clusters found in each of the areas indicated by coloured circles and their populations in percentages. The cluster representatives show the average secondary structure over the entire cluster. The clusters are coloured randomly, the colours repeat. Therefore clusters that have the same colour but are separated in the 2D space contain different conformations. The depicted clusters hold 36.5% of all conformations. Most of the remaining 24% of conformations that have been assigned to clusters are slight variations of the native structure and are not shown here due to visibility reasons. The cluster that is referred to by an arrow is one of the fuzzy clusters that were generated by increasing the RMSD cutoff. Top right: a histogram of the 2D encodermap space.
Using a larger cutoff distance in the RMSD-based assignment of structures to the clusters (the other clusters were generated by applying a 1.8 A RMSD cutoff to the central conformation) we obtained larger and quite diffuse clusters of extended conformations (one of these clusters is shown in the left part of the projection in Fig. 4 where it is referred to by an arrow). An appropriate size of this RMSD cutoff was defined for each fuzzy cluster individually by computing the mean value of the largest 20% of the RMSD values between the centroid and cluster members of the cluster identified in the current iteration (it is equal to 5.5 A for the cluster shown here). Before we identify fuzzy clusters, we first continuously assign structures based on a fixed RMSD cutoff (1.8 A for TC5b) until one of the stopping points defined in Sec. II.4 is reached (average cluster size for TC5b). Once this stopping point is reached, the RMSD cutoff is adjusted in the way explained above and fuzzy clusters are obtained. Thereby one ensures that all conformations that can be assigned to well-defined clusters are removed from consideration before identifying fuzzy clusters. The usage of such a varying cutoff can be very helpful in order to identify diffuse clusters, where the members share a certain structural motif but do not converge to a very defined conformation, just like the cluster shown here.
From the clustering results shown in Fig. 4 one can see that the proposed clustering workflow manages to efficiently identify structurally very well defined clusters for the TC5b system. Over 10 clustering iterations it assigned 60.5% of all conformations to 260 clusters. Besides the highly populated native state (33.4%), the algorithm also finds very "rare" states, which contain only a very small amount of conformations (\(\leq\)0.1%) but show nevertheless a very defined structural identity.
_TC10b._ Fig. 5 shows the same analysis applied to the trajectory of the K8A mutant of TC10b Trp-cage.
Figure 5: The most populated clusters and respective conformations of Trp-Cage TC10b [42] projected to the same 2D encodermap space as TC5b (Fig. 4). Top right: a histogram of the 2D projection.
We used the encodermap which we trained on TC5b to project the trajectories to the same 2D space. The identification of clusters however is of course entirely independent and unique for both cases, since the clustering is done in the higher dimensional cc_analysis space.
Notably, the backbone conformation of the native state of this mutant is extremely similar to that of the TC5b system. However, this biggest cluster contains only 12% of all conformations along the trajectory, compared to the 33.4% in the case of the TC5b system. If all clusters whose central conformations are within a 2 A RMSD of the native conformation are combined, we get a native-state population of 16.9%. This is in excellent agreement with the native cluster sizes reported by Deng _et al._ [45] and Ghorbani _et al._ [46], who analysed the same Trp-cage trajectories provided by Lindorff-Larsen _et al._ [42]. Furthermore, our 33.4% of assigned conformations coincides very well with the report of Sidky _et al._ [47]. They found a total of 31% of conformations distributed over eight metastable macrostates and the remaining 69% as one big "molten globule" state.
The TC10b trajectory is more disordered; this can be seen from the more homogeneous projection in the 2D space (upper right plot in Fig. 5) and from the RMSD values to the native conformation in SI, Sec. S-III, Fig. S3. This is also the reason why the clustering scheme assigned only 33.4% of all conformations to clusters after 10 iterations. If more frames should be assigned to clusters, more clustering iterations can be performed, the RMSD cutoff can be increased, or both can be done simultaneously (for the Protein B system we show the results of this approach later in the article).
However the clusters in the very center of the map (dark blue circle) are much more compact and collapsed compared to the clusters that were found in the similar area of Trp-cage RE's 2D projection. Also some of the clusters that were found in the very bottom of the left hand side of the map in case of the replica trajectories (light blue circle) were not found at all in the TC10b trajectory. The very large and diffuse cluster on the left side of the map is present in both systems as well.
_Clustering directly in 2D space of TC5b._ The clustering discussed above was done in a 20-dimensional space obtained with the cc_analysis algorithm and only displayed in a 2D projection generated with encodermap. In order to demonstrate the advantages of our approach we also directly clustered the 2D encodermap space using HDBSCAN. The encodermap space that we used for this clustering is the same space that we used to visualize the cc_analysis clustering in Fig. 4 and Fig. 5. The results of this clustering and a few chosen clusters can be seen in Fig. 6. In total this clustering assigned 13.5% of all conformations to 362 clusters. The biggest cluster that was found is the native cluster; however, it contains only 0.8% of all conformations compared to the 33.4% that were found by clustering the cc_analysis space. The clustering in the 2D space identifies some structurally very well defined clusters, such as clusters 0, 1 and 3, but also a lot of very diffuse and inhomogeneous clusters. To quantify this inhomogeneity we computed the average of the internal cluster RMSDs. For the TC5b system our clustering workflow resulted in an average cluster RMSD of 1.34 A and a weighted average RMSD of 1.03 A, where the weights are defined as the fraction of each cluster relative to all clustered data. The average RMSD for the direct clustering in the 2D space is 2.25 A and the weighted average RMSD is 2.73 A. This clearly shows that the internal cluster RMSD variance is on average much larger when clustering directly in the 2D space. Furthermore, the clustering in the 2D space itself naturally depends strongly on the quality of the 2D map.
Besides the much clearer conformational identity of the individual clusters (shown via the internal cluster RMSDs), our clustering scheme also manages to assign 60.5% of all conformations to clusters. In comparison, the clustering in the 2D projection only assigned 9-14% of all conformations, depending on the choice of clustering parameters.
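The two comparison metrics used here, the plain and the population-weighted average internal cluster RMSD, can be computed along the lines of the following sketch; the RMSD-to-centroid routine is assumed to be provided by an MD analysis library.

```python
# Sketch of the cluster-quality metrics used for the comparison (illustration only).
import numpy as np


def cluster_rmsd_summary(labels, rmsd_to_centroid):
    """labels: per-frame cluster ids (-1 = unassigned);
    rmsd_to_centroid: callable(cluster_id) -> RMSDs of that cluster's members to its centroid."""
    ids = np.unique(labels[labels >= 0])
    means = np.array([rmsd_to_centroid(c).mean() for c in ids])     # internal cluster RMSDs
    sizes = np.array([(labels == c).sum() for c in ids])
    weighted = np.average(means, weights=sizes)                     # weights = cluster populations
    return means.mean(), weighted
```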
_Comparison to other clustering approaches._ For a further assessment of our clustering scheme we have also applied a frequently used clustering routine to the TC5b data. In SI, Sec. S-IV and Figs. S4 and S5, the results of applying the k-means algorithm to an 11-dimensional PCA projection of the same CVs (pairwise C\({}_{\alpha}\) distances of TC5b) are shown.
In summary, the scheme identified both structurally well-defined and quite diffuse clusters in the considered systems. Even though the combination of the 40 RE trajectories produces a very diverse data set, the clustering scheme manages to assign a large fraction of the conformations to clusters (60%). Our clustering results for TC10b are in very good agreement with the findings of other researchers [45, 46, 47]. Furthermore, the comparison to a clustering in the 2D space clearly shows the advantage of clustering in the higher-dimensional cc_analysis space with HDBSCAN over relying on a low-dimensional representation alone.
Figure 6: 2D encodermap space of TC5b clustered with HDBSCAN. Representations of chosen clusters that have the same location in the 2D map as clusters found with the clustering scheme in Fig. 4 are shown.
### NTL9
Next we examined the very long (1877 \(\mu\)s) simulations of NTL9 [42]. With 9.38 million frames to cluster, this system is an ideal candidate to demonstrate how the proposed algorithm copes with large amounts of data. After 10 iterations, 81% of all conformations were assigned to clusters. Fig. 7 shows a 2D projection made with encodermap, where points are colored according to the clusters found after ten iterations of the scheme, with a histogram of the 2D space in the upper right corner. In total we found 157 clusters, which together contain the aforementioned 81% of all conformations.
A comparison of the time series of the RMSD values to the folded state with the respective data of the Trp-cage Anton simulations (SI, Sec. S-III, Fig. S3) reveals that the two systems exhibit very different dynamics. While in the Trp-cage case the RMSDs show the disordered nature of the system, in the case of the NTL9 trajectories the RMSDs are predominantly quite low and only spike up to larger values for rather short time periods. This suggests that the NTL9 system resides in a native-like state for the majority of the simulated time. This is confirmed during the very first iteration of the clustering scheme, where we found two clusters which together make up 75.8% of all conformations.
This example also nicely illustrates how the iterative clustering approach can be efficient in identifying clusters of very different size and density (highly populated native states and low populated clusters). After finding
and removing the first two clusters (75.8% of the data) the clustering algorithm becomes much more sensitive towards the less dense areas in the CV-space in the following clustering iterations.
Figure 7: The 2D encodermap projection of NTL9. The projection can be approximately divided into three parts: the upper part with the most dense areas (where the native-like states are located); the lower left and right planes divided by an unpopulated vertical gap. The left side includes various conformations with a singular beta sheet formed mostly between the beginning and the end of the protein. In contrast on the right side lie mostly extended conformations with multiple helices along the backbone. Exemplary conformations of some of the most populated clusters found in each of the marked areas in the map and their populations are shown. All clusters in the yellow circle are extremely similar to the native cluster and can be summed up to a total of 76% of all conformations. The structures that are shown here make up 78.4% of all conformations. Top right: Histogram of the 2D encodermap space.
We compared our clustering results with other publications analyzing the NTL9 trajectories from Ref. [42]. Mardt _et al._ [29] applied VAMPnets to trajectory 0 and found in total 89.1% folded, native-like conformations. If we take the clusters we found by analysing trajectories 0, 2 and 3 and evaluate only the conformations stemming from trajectory 0 (trajectory 0 resides in the native-like state for a larger fraction of the simulated time; see the RMSD plots in SI, Sec. S-III, Fig. S3), the amount of folded, native-like conformations we find is in very good agreement with [29]. Furthermore, Schwantes and Pande [44] reported the finding of three "register-shifted" states, which are very low populated and therefore very hard to find. "Register-shifted" refers to the identity of the specific residues involved in forming the beta sheet structure in the native-like states (residues 1-6, 16-21 and 35-39). With our method we identified six different register-shifted states in the NTL9 trajectories 0, 2 and 3 (see Fig. 8).
States 0, 1 and 2 are the ones which were also found in [44]. To our knowledge, states 3, 4 and 5 have not been reported yet. In state 0 the central of the three beta-sheet strands is shifted downwards, whereas in state 2 the rightmost strand is shifted downwards. In state 1 both the middle and the rightmost strands are dislocated compared to the native state. State 3 is similar to state 1 in that both the middle and the rightmost strands are shifted; however, in state 3 the rightmost strand is shifted upwards and not downwards as in state 1. Among these six states, state 4 is unique since there the rightmost strand is turned by 180 degrees. Finally, state 5 differs from the other states in having an extra helix along the chain between the leftmost and the middle strand. Because of this additional helix the leftmost strand is extremely shifted compared to the native state.
The identification of these register-shifted states highlights one asset of the proposed workflow. It is able to find both very large states (native, 74.5%) as well as very low populated clusters (\(<\)0.001%) in the same data set.
### Protein B
The last system we analysed is Protein B. This system does not have a very stable native state, instead the three helices can move against each other relatively freely. This can be seen in the timeseries of the RMSD to the closest experimental homologue (1PRB) shown in SI, Sec. S-III, Fig. S3. There are no extended periods where the values are stable over some time, meaning there are no large free-energy barriers separating the various accessible conformations and thus the system constantly transitions into different conformations. This has also been found in [42], where authors stated that they were unable to identify a free-energy barrier between folded and unfolded states for Protein B (tested over many different reaction coordinates).
Such a highly dynamic system is very challenging for a conformational clustering. Here we want to show where our algorithm has its limitations and what can be done to get a satisfactory clustering result. Fig. 9 gives an overview of some of the clusters found after ten iterations of the scheme. These clusters include only 20% of the Protein B trajectory and thus 80% of all conformations are still unclustered.
In order to have more data assigned to clusters two parameters can be adjusted. First, the RMSD cutoff value can be increased and thereby more conformations can be assigned to the found clusters. In this specific case this adjustment is justified, since due to the low free-energy barriers between different states, the individual clusters are not as sharply defined in terms of their conformations. In the 10 clustering iterations which are shown in Fig. 9 we used a RMSD cutoff of 3.0 A. In a second run we increased it to 3.5 A. This resulted in an assignment of 31% of all conformations to generally more loosely defined clusters.
A second approach is to increase the amount of clustering iterations. For the first ten clustering iterations of previously analysed systems, we tuned the clustering parameters manually. This includes the choice of the number of cc_analysis dimensions, as well as the min_samples and min_cluster_size parameters of HDBSCAN. However such a manual adjustment of the parameters is of course not feasible for automating the script in order to perform many more clustering iterations. Since the amount of cc_analysis dimensions needs to be very rarely changed once a suitable amount has been identified in the first clustering iteration, the automation of the script only relies on the choice of the HDBSCAN parameters. Once the amount of clusters found in a single iteration falls below a certain threshold (10 clusters in this case), the numerical
values of the min_samples and min_cluster_size parameters of HDBSCAN are slightly decreased. This leads to the detection of smaller clusters that have not been identified before. By applying this automation approach after the first 10 iterations to Protein B and using a RMSD cutoff of 3.5 A, we could assign 44.3% of all conformations to clusters over 100 iterations, which took roughly 15 hours on our workstation.
Figure 8: Register-shifted states found in the NTL9 trajectories 0, 2 and 3. The residues which form the beta sheets in the native state are colored based on their residue ID.
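The automated loosening of the HDBSCAN parameters described above could be implemented along the following lines; the decrement factors and floors are placeholders, while the threshold of 10 newly found clusters follows the text.

```python
# Sketch of the parameter adaptation used for the automated iterations (illustration only).
def adapt_hdbscan_params(n_new_clusters, min_cluster_size, min_samples,
                         min_new_clusters=10, size_floor=20, samples_floor=3):
    # If the last iteration produced too few clusters, slightly loosen both parameters,
    # never going below the chosen floors.
    if n_new_clusters < min_new_clusters:
        min_cluster_size = max(size_floor, int(0.9 * min_cluster_size))
        min_samples = max(samples_floor, min_samples - 1)
    return min_cluster_size, min_samples
```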
## IV Discussion
The Trp-cage system (TC5b) is a relatively small protein which has a quite stable native conformation. The combination of 40 temperature RE trajectories, however, gives a very diverse data set including (under standard conditions) very improbable high-energy conformations. Over ten iterations the algorithm managed to assign 60.5% of all conformations to clusters, which took on average 360 min per iteration over all CPU threads (15 min per iteration on a standard office machine with 24 CPU threads). Table 1 shows the clustering performance for the four systems discussed here. By switching from the generally static RMSD cutoff to a varying cutoff we could show that the algorithm can generate conformationally very well defined clusters as well as quite diffuse ones. The conformations assigned to such loose clusters share a general structural motif. The ability to identify both of these cluster types is one of the advantages of the proposed algorithm. Furthermore we demonstrate that the clustering workflow is able to directly compare different systems (even if they differ slightly structurally) by projecting them onto the same 2D map using the encodermap algorithm. This enables a direct and visual comparison of the sampled phase spaces of different trajectories and their respective identified states. By comparing the clustering result where the clustering is done in a 20-dimensional cc_analysis space and then projected to a two-dimensional space with a clustering where the clusters are found purely in a 2D encodermap space, we demonstrate the advantage of using more dimensions and of combining cc_analysis with encodermap. The scheme created clusters with a much clearer structural identity (lower RMSD variance), while being much less dependent on the quality of the 2D map.
Figure 9: Protein B: Exemplary conformations of some of the most populated clusters found for the Protein B system after 10 clustering iterations and their populations; Top right: Histogram of the 2D encodermap space.
We analysed long (9.38 million frames) trajectories of NTL9 to show how the proposed scheme copes with very large amounts of data. On average the algorithm needed 1320 min of computation time over all CPU threads per iteration (55 min per iteration on our office machine). Since this system also has one hugely populated native-state, it is also a nice example to demonstrate an advantage of the iterative clustering. After the clusters with the native states are removed from consideration, the algorithm becomes much more sensitive towards less populated areas in the following iterations. Applying this approach we could identify three very low populated register-shifted states, which have been reported before [44], and three not yet seen register-shifted states.
Lastly, we looked at Protein B, which is a highly dynamic system. To analyse this 1.04 million frame trajectory it took on average 288 min of computation time per iteration (12 min per iteration on our office machine). This system has no large free-energy barriers separating the various conformations, which makes it very difficult to cluster. This was confirmed by the fact that after ten clustering iterations only 20% of all conformations could be assigned to clusters. However, by increasing the RMSD cutoff from 3.0 A to 3.5 A we could already increase the amount of assigned conformations to 31%, which of course resulted in slightly less structurally defined clusters. It is also possible to automate the clustering and run it until a certain fraction of conformations is assigned to clusters or until a given number of iterations is reached. In this specific case we ran the scheme for 100 automated iterations (\(\approx\)15 hours), during which 44.3% of the conformations were assigned to clusters.
For all considered systems the proposed workflow was able to identify defined clusters at the cost of leaving some amount of the trajectories unassigned. As we have shown here, the remaining structures do not belong to any specific cluster and can be considered unfolded or transition states. We intentionally do not propose any additional steps to assign or classify those conformations, as this is highly dependent on the intended application of the data. For example, in case the data is used to build subsequent kinetic models, the rest of the points can be assigned to the nearest (e.g. in simulation time) cluster using methods such as PCCA+ analysis [48], or defined as a metastable transition state as in Ref. [47]. It can also be defined as noise and used as discussed in Ref. [49].
All performance data is shown in Table 1 and was obtained by running the clustering scheme script on the office workstation described in SI, Sec. S-V. The proposed workflow is, however, highly parallelizable, since the computationally most expensive step is the assignment of additional data points to the initially identified clusters in the small subset based on the convex hull and the RMSD criterion. If a large number of CPU cores is available, the 2D encodermap projection array can be split by the number of cores and the assignment can thereby be run in parallel, which leads to a significant speed-up.
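A minimal sketch of this parallelization strategy is given below; `assign_chunk` is a stand-in for the convex-hull/RMSD acceptance test and not the actual implementation.

```python
# Split the 2D encodermap projection into one chunk per CPU core and run the
# assignment of additional conformations in parallel (sketch, assumed API).
import numpy as np
from multiprocessing import Pool, cpu_count

def assign_chunk(args):
    chunk_indices, projection_chunk = args
    # placeholder: return (frame_index, cluster_id) pairs; -1 means unassigned
    return [(int(i), -1) for i in chunk_indices]

def parallel_assignment(projection_2d, n_workers=None):
    n_workers = n_workers or cpu_count()
    index_chunks = np.array_split(np.arange(len(projection_2d)), n_workers)
    jobs = [(idx, projection_2d[idx]) for idx in index_chunks]
    with Pool(n_workers) as pool:
        results = pool.map(assign_chunk, jobs)
    return [pair for chunk in results for pair in chunk]

if __name__ == "__main__":
    assignments = parallel_assignment(np.random.rand(10_000, 2))
    print(len(assignments), "frames processed")
```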
The convex hull around the clusters identified in the small subset is used to reduce the number of RMSD computations that have to be performed when assigning additional conformations in each clustering iteration. This, however, might in principle lead to the exclusion of data points that might otherwise have been assigned to some of the clusters. In order to get an idea of the magnitude of this "loss" of potential cluster members, we computed the RMSD of all data which was labeled as noise (623,000 conformations; 39.5%) to each of the cluster centers of TC5b (260 clusters). This computationally very expensive task took an additional 5 hours on our working machine. We found that 42,000 conformations (2.7%) were not assigned to the identified clusters due to the convex hull criterion. Keeping in mind that the entire 10-iteration clustering process took 2.5 hours, the "loss" of 2.7% of unclustered data can be considered a worthy trade-off.
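The following sketch illustrates the convex-hull pre-filter in the 2D map with synthetic data; in the actual workflow the surviving candidates would then be checked against the cluster center with the RMSD criterion.

```python
# Only frames whose 2D encodermap projection lies inside the convex hull of a
# cluster are forwarded to the expensive RMSD test (illustrative sketch).
import numpy as np
from scipy.spatial import Delaunay

def hull_candidates(cluster_points_2d, query_points_2d):
    """Indices of query points inside the convex hull of the cluster's
    2D projections (find_simplex returns -1 for points outside)."""
    hull = Delaunay(cluster_points_2d)
    return np.flatnonzero(hull.find_simplex(query_points_2d) >= 0)

rng = np.random.default_rng(0)
cluster = rng.normal(0.0, 1.0, size=(200, 2))    # projections of one cluster
queries = rng.normal(0.0, 2.0, size=(1000, 2))   # so far unassigned frames
candidates = hull_candidates(cluster, queries)
print(f"{len(candidates)} of {len(queries)} frames need an RMSD check")
```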
Another point to consider is that due to the convex hull criterion clusters can be split. If data points that would be assigned to a certain cluster based on the RMSD criterion lie outside of the convex hull, they could be identified as another cluster in one of the following clustering iterations. In such cases it can make sense to merge these clusters afterwards, due to their very similar structural identity. In order to showcase such a merge, we again analysed TC5b. We computed the RMSDs between all of the 260 central cluster conformations and merged all clusters that had an RMSD of \(\leq\) 1 A. This resulted in a reduction to 201 clusters with only a very marginal influence on the average internal cluster RMSDs.
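Such an a-posteriori merge amounts to taking connected components of the graph in which two cluster centers are linked whenever their RMSD is below the cutoff; a small sketch (with a toy RMSD matrix, not actual TC5b data) is given below.

```python
# Merge clusters whose central conformations are within a given RMSD cutoff by
# computing connected components of the "RMSD <= cutoff" graph (sketch).
import numpy as np
from scipy.sparse.csgraph import connected_components

def merge_clusters(pairwise_rmsd, cutoff=1.0):
    adjacency = pairwise_rmsd <= cutoff
    n_merged, labels = connected_components(adjacency, directed=False)
    return n_merged, labels          # labels[i] = merged cluster of cluster i

# Toy example: centers 0/1 and 2/3 are mutually close (values in Angstrom).
rmsd = np.array([[0.0, 0.8, 3.0, 3.1],
                 [0.8, 0.0, 2.9, 3.2],
                 [3.0, 2.9, 0.0, 0.6],
                 [3.1, 3.2, 0.6, 0.0]])
print(merge_clusters(rmsd))          # -> (2, array([0, 0, 1, 1]))
```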
The code for the encodermap algorithm is available on the following github page [https://github.com/AG-Peter/encodermap](https://github.com/AG-Peter/encodermap). The cc_analysis code can be found under [https://strucbio.biologie.uni-konstanz.de/xdswiki/index.php/Cc_analysis](https://strucbio.biologie.uni-konstanz.de/xdswiki/index.php/Cc_analysis).
## V Conclusion
We developed a clustering scheme which combines two different dimensionality reduction algorithms (cc_analysis and encodermap) and HDBSCAN in an iterative approach to perform fast and accurate clustering of molecular dynamics simulation trajectories. Here, the cc_analysis dimensionality reduction method was applied to protein simulation data for the first time. The method projects collective variables to a usually relatively high-dimensional (\(\sim\)10-40 dim) unit sphere, separating noise and fluctuations from important structural information. The data can then be efficiently clustered by density-based clustering methods, such as HDBSCAN. The iterative application of HDBSCAN makes it possible to account for the inhomogeneity in population and density of the projected points, which is very typical for protein simulation data. As cc_analysis relies on the calculation of correlation matrices between all frames, the amount of data that can be projected simultaneously is drastically limited. To allow processing of long simulation trajectories we included encodermap in the scheme. In addition to the obvious advantage of the two-dimensional visualisation, it is used - in combination with an RMSD-based acceptance criterion - for a fast structure-based assignment of additional points to the clusters initially identified in the higher-dimensional projection obtained with cc_analysis. To demonstrate the accuracy and performance of the proposed scheme we applied it to four test systems: replica exchange simulations of Trp-cage and three long trajectories of a Trp-cage mutant, NTL9 and Protein B generated on the Anton supercomputer. By applying the scheme to these four test systems we could show that the algorithm can efficiently handle very large amounts of data, that it can be used to compare the clusters of structurally different systems in one 2D map, and that it can also be applied to cluster systems which do not have very stable native states and are therefore intrinsically very difficult to cluster conformationally. Furthermore, the algorithm is able to find clusters independent of their size. By varying an RMSD cutoff, both conformationally very well defined clusters and fuzzy clusters, whose members only share an overall structural motif, can be identified.
## VI Supporting Information
Supporting Information (PDF) includes:
(S-I): Methods to chose parameters for cc_analysis and HDBSCAN.
(S-II): Stopping criteria for the clustering workflow.
(S-III): RMSD plots of trajectories for Trp-cage, Protein B and NTL9.
(S-IV): Comparison of the proposed clustering workflow to PCA and k-means clustering for Trp-cage (TC5b).
(S-V): Workstation specifications.
## VII Acknowledgements
This work was supported by the DFG through CRC 969. We also greatly appreciate the computing time on bwHPC clusters which was used to produce the Trp-cage TC5b trajectories. Furthermore we would like to thank the D.E. Shaw research group for providing the Trp-cage, NTL9 and Protein B trajectories.
|
2303.12489 | Few-shot Multimodal Multitask Multilingual Learning | While few-shot learning as a transfer learning paradigm has gained
significant traction for scenarios with limited data, it has primarily been
explored in the context of building unimodal and unilingual models.
Furthermore, a significant part of the existing literature in the domain of
few-shot multitask learning perform in-context learning which requires manually
generated prompts as the input, yielding varying outcomes depending on the
level of manual prompt-engineering. In addition, in-context learning suffers
from substantial computational, memory, and storage costs which eventually
leads to high inference latency because it involves running all of the prompt's
examples through the model every time a prediction is made. In contrast,
methods based on the transfer learning via the fine-tuning paradigm avoid the
aforementioned issues at a one-time cost of fine-tuning weights on a per-task
basis. However, such methods lack exposure to few-shot multimodal multitask
learning. In this paper, we propose few-shot learning for a multimodal
multitask multilingual (FM3) setting by adapting pre-trained vision and
language models using task-specific hypernetworks and contrastively fine-tuning
them to enable few-shot learning. FM3's architecture combines the best of both
worlds of in-context and fine-tuning based learning and consists of three major
components: (i) multimodal contrastive fine-tuning to enable few-shot learning,
(ii) hypernetwork task adaptation to perform multitask learning, and (iii)
task-specific output heads to cater to a plethora of diverse tasks. FM3 learns
the most prominent tasks in the vision and language domains along with their
intersections, namely visual entailment (VE), visual question answering (VQA),
and natural language understanding (NLU) tasks such as neural entity
recognition (NER) and the GLUE benchmark including QNLI, MNLI, QQP, and SST-2. | Aman Chadha, Vinija Jain | 2023-02-19T03:48:46Z | http://arxiv.org/abs/2303.12489v1 | # Few-shot Multimodal Multitask Multilingual Learning
## 1 Abstract
While few-shot learning as a transfer learning paradigm has gained significant traction for scenarios with limited data, it has primarily been explored in the context of building unimodal and unilingual models. Furthermore, a significant part of the existing literature in the domain of few-shot multitask learning perform in-context learning which requires manually generated prompts as the input, yielding varying outcomes depending on the level of manual prompt-engineering. In addition, in-context learning suffers from substantial computational, memory, and storage costs which eventually leads to high inference latency because it involves running all of the prompt's examples through the model every time a prediction is made. In contrast, methods based on the transfer learning via the fine-tuning paradigm avoid the aforementioned issues at a one-time cost of fine-tuning weights on a per-task basis. However, such methods lack exposure to few-shot multimodal multitask learning. In this paper, we propose **f**ew-shot learning for a **m**uiltimodal **m**ultitask **m**ultilingual (FM3) setting by adapting pre-trained vision and language models using task-specific hypernetworks and contrastively fine-tuning them to enable few-shot learning. FM3's architecture combines the best of both worlds of in-context and fine-tuning based learning and consists of three major components: (i) multimodal contrastive fine-tuning to enable few-shot learning, (ii) hypernetwork task adaptation to perform multitask learning, and (iii) task-specific output heads to cater to a plethora of diverse tasks. FM3 learns the most prominent tasks in the vision and language domains along with their intersections, namely visual entailment (VE) [1], visual question answering (VQA) [2], and natural language understanding (NLU) tasks such as neural entity recognition (NER) and the GLUE benchmark [3] including QNLI [4], MNLI [5], QQP [6], and SST-2 [7].
## 2 Introduction
Self-supervised pretraining has propelled the adoption of deep learning on tasks with limited labeled data. With their task-agnostic features and improved data efficiency, self-supervised pre-trained models have drastically reduced the opportunity cost of tackling tasks that earlier required a significant amount of data and thus proved intractable using supervised learning. As a result of the advancements in self-supervised pretraining, semi-supervised approaches that combine self-supervision with supervised learning on a task-specific dataset tackling a related task have emerged as a new paradigm that has enabled transfer learning.
One of the biggest open challenges for machine learning research is building models that can be rapidly adapted to novel tasks using only a handful of annotated examples. The domain of few-shot learning (FSL), which is a specific variant of transfer learning, has emerged as an attractive solution to label-scarce scenarios where data annotation can be time-consuming and costly. These methods are designed to work with a small number of labeled training examples, and typically involve adapting pre-trained models for specific downstream tasks. Several flavors of FSL methods exist, each with its pros and cons.
One such large-scale self-supervised approach, popularized by the arrival of the generative pre-trained transformer (GPT) series [8; 9] of NLP models, is transfer learning via in-context learning (ICL) which emerges from training at scale. ICL teaches a model to perform a downstream task by feeding in a prompt with a nominal set of supervised examples as input to the model along with a single unlabeled example for which a prediction is desired. In effect, few-shot prompting using a small collection of input-target
pairs offers a walk-through to the model on how to transform the input into the output. Notably, since ICL requires no parameter updates, i.e., no gradient-based training is required, a single model can effectively act as a swiss-army knife by being able to immediately perform a wide variety of tasks. ICL, therefore, solely relies on the capabilities that a model learned during pretraining. The ease of use and quick adaptability to target tasks are characteristic features that have caused widespread adoption of ICL [10, 11, 12, 13, 14].
While ICL offers a multitude of benefits, it also suffers from several major drawbacks. First, processing all the prompted input-target pairs every time the model makes a prediction incurs significant compute, memory, and latency costs. These costs stack up as the number of inferences increases; in a situation where the goal is to perform inference over a batch of test examples rather than one-off predictions, ICL can prove to be impractical from a resource standpoint. Second, owing to a limited-length context window, the number of support examples \(k\) that the model can utilize is restricted to small values. This is because we must fit all \(k\) examples into the model's context window, which is limited to a specific number of tokens (1024 in case of GPT-2 and 2048 in case of GPT-3). Third, ICL typically produces inferior performance compared to fine-tuning [15, 8, 16]. Finally, the model's performance is a strong function of semantic and structural aspects of the prompt, which can have a significant yet unpredictable impact [17], far beyond the inter-run variation of fine-tuning. In particular, semantic changes such as the phrasing or choice of words in the prompt and syntactic changes such as the exact formatting of the prompt (including the wording [18] and ordering of examples [19]) can cause a significant, unintended, and difficult-to-estimate impact on the model's performance. Furthermore, recent work has also demonstrated that ICL can perform well even when provided with incorrect labels, raising concerns as to how much learning is taking place at all [13].
Another common semi-supervised learning paradigm is transfer learning via fine-tuning (FT), which follows a two-staged process: (i) utilize the parameters of a pre-trained large-scale self-supervised model for weight initialization, and (ii) perform gradient-based fine-tuning using data associated with the downstream task of interest. With the advent of representation-learning approaches such as BERT [20], the domain of NLP underwent a radical transformation from supervised to semi-supervised approaches for tasks such as sentiment analysis, neural entity recognition, question answering, summarization, conversational response generation, etc. Representation-learning approaches have now taken center stage in NLP, with the learned contextualized representations from these pre-trained models serving as initial task-agnostic features that, in turn, offer a starting point for learning task-specific features. While problems with limited labeled data have benefited significantly owing to the reduced data appetite of semi-supervised approaches, tasks with abundant labeled data have also seen improved performance.
While FT has produced many state-of-the-art (SoTA) results [21] on a range of classification tasks, it results in a model that is specialized for a single task with an entirely new set of parameter values, which can become impractical when fine-tuning a model on many downstream tasks. In other words, such models typically perform one task at a time, and cannot learn new concepts or adapt to new tasks in a few shots. FM3 seeks to address this gap and enable multimodal FSL, much like SetFit, which contrastively fine-tunes pre-trained Sentence Transformer models [22], dispenses with prompts altogether, and does not require large-scale pre-trained LMs to achieve high accuracy. With only 8 labeled examples from the Customer Reviews (CR) sentiment dataset, SetFit is competitive with RoBERTa fine-tuned on the full training set [23], despite the fine-tuned model being three times larger.
Both ICL and fine-tuning have been explored in a multimodal context. A slew of methods, notably _Flamingo_[15] and _Frozen_[24], perform ICL with the final objective of having the model rapidly adapt to a variety of multimodal tasks. While _Flamingo_ achieves competitive performance with FSL, in some cases outperforming models fine-tuned on thousands of times more task-specific data, _Frozen_ offers relatively lower performance in return for the flexibility of using an off-the-shelf pre-trained LM and keeping its weights frozen. On the other hand, Oscar [25] and OmniNet [26] are multimodal multitask models that do not perform ICL. While Oscar is pre-trained with aligned data on task-agnostic cross-modal objectives (a masked token loss over words and visual tags, and a contrastive loss between visual tags and others) and then fine-tuned for specific tasks, OmniNet is trained simultaneously on its target tasks and undergoes no fine-tuning. In the zero-/few-shot learning context, multimodal pretraining has recently been shown to enable strong generalization in the discriminative setting using large-scale contrastive learning [27, 28].
An additional paradigm for enabling a model to perform a new task with minimal updates is parameter efficient fine-tuning (PEFT), where a pre-trained model is fine-tuned by only updating a small number of added or selected parameters. Recent methods have matched the performance of fine-tuning the full
model while only updating or adding a small fraction (e.g. 0.01%) of the full model's parameters [29; 30]. Furthermore, certain PEFT methods allow mixed-task batches where different examples in a batch are processed differently [30], making both PEFT and ICL viable for multitask models. While the benefits of PEFT address some shortcomings of fine-tuning (when compared to ICL), there has been relatively little focus on whether PEFT methods work well when very little labeled data is available. [16] closes this gap by proposing T-Few, a model that learns using PEFT and a fixed set of hyperparameters, attaining strong performance on novel, unseen tasks while only updating a tiny fraction of the model's parameters.
FM3 combines the best of both worlds of ICL- and FT-based transfer learning and offers an efficient and prompt-free framework that offers strong generalization to new multimodal vision-language tasks in a few-shot setting. Despite the flexibility offered by ICL, its limitations leave much to be desired, especially in situations where compute, latency, memory, batch inference, performance determinism, etc. are important. On the other hand, FT offers performance invariance since it does not require prompts, offers better performance than ICL-based methods [15], and is resource-efficient in terms of compute, latency, memory, etc. While zero-/few-shot generalization is a desirable by-product of ICL, the only significant downside to FT is that generalization to new tasks with limited data is challenging. FM3 is architected keeping the aforementioned drawbacks of ICL in mind and thus follows the FT approach but overcomes its limitations as follows: (i) multimodal contrastive fine-tuning to enable FSL, (ii) using hypernetworks with a limited parameter count to perform task adaptation for multitask learning, and (iii) task-specific output heads to cater to a plethora of diverse tasks.
FM3 achieves high accuracy with little labeled data - for instance, with only 16 labeled examples per class on the complex task of SNLI-VE [1], FM3 surpasses the current SoTA fine-tuned on the full training set of 430K examples! Compared to other FSL methods, FM3 has several unique features:
* **No prompts or verbalisers:** Current techniques for few-shot fine-tuning require handcrafted prompts or verbalisers to convert examples into a format that's suitable for the underlying language model. FM3 dispenses with prompts altogether by generating rich embeddings directly from text examples. This obliterates the need for manual prompt engineering, which in turn, results in performance determinism.
* **Resource efficiency:** Optimal use of compute, latency, memory, etc. compared to our baselines _Flamingo_, _Frozen_, and especially ICL-based methods.
* **Frozen pre-trained models:** FM3 uses pre-trained vision and language encoders without fine-tuning them. This implies that FM3 architecture enables drop-in plug-and-play replacement for modality encoders. Only small hypernetwork models need to be fine-tuned when experimenting with different encoders.
* **Fast to train:** FM3 doesn't require large models like _Flamingo_ (80B) or _Frozen_ (7B+) to achieve high accuracy. As a result, it is significantly faster to train and run inference with.
* **Multilingual support:** FM3 enables multilingual processing, and can be paired up with any multilingual text encoder such as multilingual Sentence Transformer [22] variants of MPNet [31], RoBERTa [32], ALBERT [33], LASER [34], etc., which enables multilingual learning in 50+ languages by simply fine-tuning a multilingual model checkpoint.
While proposals that address a subset of the areas of few-shot multimodal multitask multilingual learning exist, to our knowledge, FM3 is the first to explore the intersection of the domains of multimodal multitask multilingual learning in a FSL setting.
## 3 Related Work
### Few-shot learning using pre-trained models
In the domain of NLP, SetFit proposed by Tunstall et al. [23] is an efficient and prompt-free framework for few-shot fine-tuning of Sentence Transformers (ST). SetFit works by fine-tuning a pre-trained ST on a small number of text pairs in a contrastive Siamese manner. The resulting model is then used to generate rich text embeddings, which are used to train a classification head. This simple framework requires no prompts, and achieves high accuracy with orders of magnitude less parameters than existing techniques. SetFit obtains comparable results with parameter-efficient fine-tuning (PEFT) and parameter efficient tuning (PET) techniques, while being an order of magnitude faster to train. SetFit achieves high accuracy with little labeled data - for instance, with only 8 labeled examples per class on the Customer Reviews
sentiment dataset [35], SetFit is competitive with fine-tuning RoBERTa Large on the full training set of 3K examples. Owing to its practical utility in enabling FSL, we adopt the idea of contrastive fine-tuning from SetFit and generalize it to a multimodal multitask multilingual setting as part of FM3.
### Multitask fine-tuning using PEFT
In [36], Houlsby et al. propose a parameter-efficient fine-tuning method which introduces adapter modules between the layers of a pre-trained language model. Adapter modules yield a compact and extensible model; they add only a few trainable parameters per task, and new tasks can be added without revisiting previous ones. The parameters of the original network remain fixed, yielding a high degree of parameter sharing. They achieve SoTA performance on GLUE [3] whilst adding only a few parameters per task. However, the downside of this approach is that they are trained separately for each task and thus do not enable sharing information across tasks.
To circumvent the aforementioned issue of knowledge sharing across tasks, Mahabadi et al. [37] learn adapter parameters for all layers and tasks using shared hypernetworks, which condition on task, adapter position, and layer ID in a transformer model. This parameter-efficient multitask learning framework achieves the best of both worlds by sharing knowledge across tasks via hypernetworks while enabling the model to adapt to each individual task through task-specific adapters. Experiments on the GLUE benchmark show improved performance in multitask learning while adding only 0.29% parameters per task. Given the fact that hypernetworks enable easy multitask fine-tuning of pre-trained models without having to actually fine-tune the model's weights (i.e., they remain frozen in this process), we adopt multitask finetuning in our proposed architecture.
### Multitask multimodal learning
Hu and Singh propose UniT [38], a Unified Transformer encoder-decoder model that learns 7 tasks jointly across 8 datasets spread over different vision and language domains, ranging from object detection to natural language understanding and multimodal reasoning. UniT achieves strong performance with significantly fewer parameters, in some cases outperforming separately trained single-task models. While the architecture offers joint end-to-end training of each task, it requires a substantial amount of data across all tasks for the model to generalize. Our approach, on the other hand, utilizes FSL to efficiently learn a task with a small fraction of data.
In [26], Pramanik et al. propose OmniNet, a single model to support tasks with multiple input modalities as well as asynchronous multitask learning. OmniNet is powered by a spatio-temporal cache that enables learning the spatial dimensions of the input in addition to the hidden states corresponding to the temporal input sequence. Even though OmniNet is \(3\times\) more parameter-efficient, there is a significant performance gap on most of the tasks it was trained on compared to the individual model counterparts.
### Multitask multilingual multimodal learning
\(M^{3}P\), proposed in [39], is a multitask multilingual multimodal pre-trained model that combines multilingual pre-training and multimodal pre-training into a unified framework via multitask pre-training. \(M^{3}P\) learns universal representations that can map objects occurring in different modalities or texts expressed in different languages into a common semantic space. In addition, to alleviate the issue of a lack of sufficient labeled data for non-English multimodal tasks, they propose multimodal code-switched training (MCT) [40], which replaces each word in the caption with a translated word with a probability of \(\beta\). If a word has multiple translations, a random one is chosen. Experiments on the multilingual image retrieval task across MS COCO [41] and Multi30K [42] show competitive results for English and establish new SoTA results for non-English languages. While \(M^{3}P\) tackles a similar problem to FM3, it does not assume any restrictions on the annotation budget - in other words, it does not consider the few-shot setting for learning its tasks.
### Few-shot multimodal multitask learning
In [15], Alayrac et al. introduce Flamingo, a family of Visual Language Models (VLM) trained on large-scale multimodal web corpora with an ability to rapidly adapt to a variety of image and video tasks. _Flamingo_ proposes the following key architectural innovations: (i) bridge powerful pre-trained vision-only and language-only models, (ii) handle sequences of arbitrarily interleaved visual and textual data, and
(iii) seamlessly ingest images or videos as inputs. The end result is a single _Flamingo_ model that can achieve a new SoTA with FSL, simply by prompting the model with task-specific examples. On numerous benchmarks, _Flamingo_ outperforms models fine-tuned on thousands of times more task-specific data. These include open-ended tasks such as visual question-answering, where the model is prompted with a question which it has to answer; captioning tasks, which evaluate the ability to describe a scene or an event; and close-ended tasks such as multiple-choice visual question-answering.
In [24], Tsimpoukelli et al. present _Frozen_, a simple-yet-effective approach for transferring the FSL abilities inherent in large auto-regressive language models to a multimodal setting (vision and language). _Frozen_ is a multimodal few-shot learner, with the surprising ability to learn a variety of new tasks when conditioned on examples, represented as a sequence of multiple interleaved image and text embeddings. _Frozen_ can rapidly learn words for new objects and novel visual categories and do visual question-answering with only a handful of examples. While this work serves as an important baseline for FM3, a key limitation is that it achieves far from SoTA performance on the specific tasks that it learns in a few shot setting. _Frozen_ shows that training a visual encoder through a pre-trained and frozen language model results in a system capable of strong out-of-distribution (zero-shot) generalization. Furthermore, _Frozen_ confirms that the ability to rapidly adapt to new tasks given appropriate prompts is inherited from the pre-trained language model and transfers directly to multimodal tasks.
While _Flamingo_ and _Frozen_ are both ICL-based FSL methods, the differentiating factors are: (i) the scale of data used to train these models, and (ii) architectural variations. _Flamingo_ is trained on large-scale multimodal web corpora while _Frozen_ is trained on the Conceptual Captions dataset [43]. The architectural design choices differ between the two in using pre-trained modality encoders vs. training them from scratch. Similar to FM3, _Flamingo_ uses off-the-shelf pre-trained encoders and only generates adapter components (in the form of Perceiver Resampler blocks) while _Frozen_ utilizes a pre-trained LM but trains its own vision encoder that feeds the LM. Inspired by this observation, FM3 borrows the idea of using separate text and vision adapters in the form of hypernetworks so as to offer the model additional degrees of freedom, which, in turn, helps achieve better performance.
## 4 Fm3
### Methods
Figure 1 offers a visual summary of the architectural stages of FM3.
Figure 1: Architectural overview of FM3. FM3 consists of three stages: (i) contrastive pair mining for fine-tuning, which generates positive and negative pairs, (ii) task-based fine-tuning involves adapting the pre-trained text and image encoder models for down-stream tasks using hypernetworks, and (iii) training task-specific classification heads.
#### 4.1.1 Task and batch sampling
At each iteration during training, we randomly select a task with a sampling probability that can be manually specified based on the dataset size. Once the task list has been sampled, for tasks with multiple datasets, we randomly sample a dataset corresponding to that task to fill a batch of samples.
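A minimal sketch of this sampling step is shown below; the task probabilities and dataset lists are illustrative placeholders, not the values used for FM3.

```python
# Draw a task according to manually specified probabilities, then draw one of
# its datasets uniformly to fill the next batch (illustrative sketch).
import random

TASK_PROBS = {"VQA": 0.3, "VE": 0.2, "GLUE": 0.4, "NER": 0.1}     # assumed
TASK_DATASETS = {
    "VQA": ["VQAv2", "OK-VQA"],
    "VE": ["SNLI-VE"],
    "GLUE": ["QNLI", "MNLI", "QQP", "SST-2"],
    "NER": ["CoNLL-2003"],
}

def sample_batch_source(rng=random):
    task = rng.choices(list(TASK_PROBS), weights=list(TASK_PROBS.values()), k=1)[0]
    return task, rng.choice(TASK_DATASETS[task])

print(sample_batch_source())
```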
#### 4.1.2 Contrastive fine-tuning for few-shot learning
Similar to [23], we utilize a contrastive learning approach to FSL. Contrastive learning effectively enlarges the size of training data which is critical in few-shot scenarios and thus fosters effective learning for tasks with limited annotated data. Assuming a small number (\(k\)) of labeled examples for a classification task, the potential size of the fine-tuning set \(T\) derived from the number of unique positive and negative contrastive pairs that can be generated would be \(\frac{k(k-1)}{2}\), which is significantly larger than just \(k\)[23]. In this stage, we sample \(R\) positive and \(R\) negative triplet pairs, where \(R\) is a hyperparameter (set to 20, following [23]). We utilize multiple negatives ranking loss [44] for contrastive fine-tuning owing to its superior performance [44] and its ability to randomly sample negative pairs from each batch in an automated fashion.
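The pair-mining step can be sketched as follows; the sampling of \(R\) positive and \(R\) negative pairs is shown in a simplified form and the helper below is illustrative rather than the exact FM3 procedure.

```python
# Generate R positive (same label) and R negative (different label) pairs from
# a handful of labeled examples for contrastive fine-tuning (sketch).
import random
from itertools import combinations

def mine_pairs(labels, R=20, seed=0):
    rng = random.Random(seed)
    all_pairs = list(combinations(range(len(labels)), 2))
    pos = [(i, j) for i, j in all_pairs if labels[i] == labels[j]]
    neg = [(i, j) for i, j in all_pairs if labels[i] != labels[j]]
    return (rng.sample(pos, min(R, len(pos))),
            rng.sample(neg, min(R, len(neg))))

positive_pairs, negative_pairs = mine_pairs([0, 0, 0, 1, 1, 1], R=3)
print(positive_pairs, negative_pairs)
```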
#### 4.1.3 Task-based fine-tuning using hypernetworks
To our knowledge, FM3 is the first to utilize hypernetworks in a multimodal setting. Using frozen modality encoders has the distinct advantage of preventing catastrophic forgetting (compared to fine-tuning the encoders themselves) [15]. As such, we utilize an independent hypernetwork for each modality.
In this step, we perform task-specific fine-tuning of SoTA pre-trained text and vision models, namely a pre-trained multilingual MPNet [31] Sentence Transformer [22] from Huggingface [45] as our text backbone and CoCa-Base [46] as our vision backbone. We adopt the idea of hypernetworks from [37], which is a parameter-efficient method for multitask fine-tuning. We train shared hypernetworks to generate task-specific adapters conditioned on the task, layer ID, and adapter position embeddings. These shared hypernetworks capture knowledge across tasks, enabling positive transfer to low-resource and related tasks, while task-specific layers allow the model to adapt to each individual task. We optimize a distance function based on cosine similarity, minimizing it for positive pairs and maximizing it for negative pairs.
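A minimal PyTorch sketch of such a shared hypernetwork is given below; the embedding sizes, bottleneck width, and the way the generated adapter weights would be consumed are assumptions for illustration and do not reflect the exact FM3 implementation.

```python
# Shared hypernetwork that generates adapter weights conditioned on task,
# layer and adapter-position embeddings (sketch in the spirit of [37]).
import torch
import torch.nn as nn

class AdapterHypernetwork(nn.Module):
    def __init__(self, n_tasks, n_layers, n_positions,
                 emb_dim=64, hidden_dim=768, bottleneck=32):
        super().__init__()
        self.task_emb = nn.Embedding(n_tasks, emb_dim)
        self.layer_emb = nn.Embedding(n_layers, emb_dim)
        self.pos_emb = nn.Embedding(n_positions, emb_dim)
        self.generator = nn.Linear(3 * emb_dim, 2 * hidden_dim * bottleneck)
        self.hidden_dim, self.bottleneck = hidden_dim, bottleneck

    def forward(self, task_id, layer_id, pos_id):
        cond = torch.cat([self.task_emb(task_id),
                          self.layer_emb(layer_id),
                          self.pos_emb(pos_id)], dim=-1)
        flat = self.generator(cond)
        down, up = flat.split(self.hidden_dim * self.bottleneck, dim=-1)
        # an adapter would apply h + W_up(nonlinearity(W_down(h))) with these
        return (down.view(self.bottleneck, self.hidden_dim),
                up.view(self.hidden_dim, self.bottleneck))

hyper = AdapterHypernetwork(n_tasks=7, n_layers=12, n_positions=2)
W_down, W_up = hyper(torch.tensor(0), torch.tensor(3), torch.tensor(1))
print(W_down.shape, W_up.shape)   # torch.Size([32, 768]) torch.Size([768, 32])
```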
#### 4.1.4 Task-specific classification head training
Lastly, we train task-specific classification heads on the fine-tuned model obtained from the above step. The generated embeddings corresponding to the data samples for each task, along with their labels, constitute the training set for the respective classification head. We use logistic regression for binary classification tasks such as SST-2, QQP, QNLI, etc. and softmax for multiclass classification tasks such as VQA, SNLI-VE, GLUE, NER, etc.
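The head-training stage can be sketched with scikit-learn on synthetic embeddings as below; the multinomial (softmax) behaviour of `LogisticRegression` is used to stand in for the multiclass heads, and the data is random rather than real task embeddings.

```python
# Train task-specific classification heads on top of the generated embeddings
# (sketch with synthetic data; logistic regression for binary tasks, the same
# estimator with a multinomial/softmax objective for multiclass tasks).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(64, 768))        # one embedding per example
binary_labels = rng.integers(0, 2, size=64)    # e.g. SST-2
multi_labels = rng.integers(0, 3, size=64)     # e.g. SNLI-VE (3 classes)

binary_head = LogisticRegression(max_iter=1000).fit(embeddings, binary_labels)
multi_head = LogisticRegression(max_iter=1000).fit(embeddings, multi_labels)
print(binary_head.predict(embeddings[:4]), multi_head.predict(embeddings[:4]))
```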
### Tasks and Datasets
Table 1 delineates the domains, tasks, and datasets for training and evaluating FM3.
## 5 Experiments
### Experimental setup
We finetuned FM3 on the Conceptual Captions dataset (which _Frozen_ [24] is trained on) for vis-a-vis comparisons. We use the AdamW optimizer with global norm clipping of 1, no weight decay for the hypernetworks, and weight decay of 0.1 for the other trainable parameters. We anneal the learning rate, increasing it linearly from 0 to \(10^{-3}\) over the first 5000 steps, holding it constant for most of training, and then decaying it exponentially, as sketched at the end of this subsection. Unless specified otherwise we train our models for 500K steps.
\begin{table}
\begin{tabular}{c c c} \hline \hline Domain & Task & Dataset \\ \hline \multirow{2}{*}{Language understanding} & Neural entity recognition (NER) & CoNLL-2003 [47] \\ & GLUE benchmark [3] & QNLI [4], MNLI [5], QQP [6], and SST-2 [7] \\ \hline \multirow{2}{*}{Vision-and-language reasoning} & Visual entailment & SNLI-VE [1] \\ & Visual question answering (VQA) & VQA2 dataset [2] (with questions from Visual Genome [48] as additional data), OK-VQA [49] \\ \hline \hline \end{tabular}
\end{table}
Table 1: Datasets for training and evaluation
All datasets were trained with the same weights. Since the performance of models trained with a contrastive objective is sensitive to the batch size, we use a relatively large batch size of 32.
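The optimizer and learning-rate schedule described above can be sketched in PyTorch as follows; the decay start step and decay rate are illustrative assumptions, and the per-parameter-group weight decay (none for the hypernetworks, 0.1 elsewhere) is omitted for brevity.

```python
# AdamW with global gradient-norm clipping at 1 and a warmup/constant/decay
# learning-rate schedule (sketch; a single linear layer stands in for FM3).
import torch

def lr_factor(step, warmup=5_000, constant_until=400_000, decay=0.999995):
    if step < warmup:
        return step / warmup                 # linear warmup from 0 to 1e-3
    if step < constant_until:
        return 1.0                           # constant phase
    return decay ** (step - constant_until)  # exponential decay

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.1)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_factor)

for step in range(3):                        # dummy training steps
    loss = model(torch.randn(32, 10)).pow(2).mean()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
    optimizer.zero_grad()
    scheduler.step()
```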
### Baselines
We utilize _Flamingo_[15] and _Frozen_[24] as our primary baselines since they deal with multimodal multitask learning in the context of FSL. We include UniT [38] as an additional baseline since it deals with multimodal multitask learning of tasks that _Flamingo_[15] and _Frozen_ haven't been evaluated on. For each task, we compare FM3 with both task-specific zero-/few-shot and pre-trained/fine-tuned SoTA. Since none of the above baselines support multilingual tasks, we utilize [39] as a baseline to qualify the performance of FM3 on non-English tasks.
### Results
Table 2 performs a comparative analysis of FM3 with _Flamingo_, _Frozen_, and the respective zero-/few-shot and fine-tuned SoTA on each task with number of support examples/shots as \(k\in\{0,4,16,64\}\). While Flickr30K Image-to-Text uses Recall@1, Multi30K en-de uses BLEU, CoNLL-2003 and QQP use F1, all other tasks utilize accuracy as their performance metric. Note that since _Frozen_ is an auto-regressive model/decoder which undergoes prompt-based fine-tuning, \(k\) indicates the number of support examples as part of the prompt/prefix passed as input to the model, while FM3 being an encoder-based architecture, \(k\) indicates the number of examples we contrastively fine-tune on.
**Few-shot results.** FM3 outperforms zero-/few-shot baselines on 7 out of the 10 benchmarks considered. This is achieved with as few as 16 examples per task, demonstrating superior adaptation to these tasks. More importantly, FM3 is often competitive with SoTA methods fine-tuned on up to hundreds of thousands of annotated examples. On 4 out of 10 tasks, FM3 even outperforms the fine-tuned SoTA despite using a single set of model weights and only 64 task-specific examples.
**Scaling with respect to parameters and shots.** As Table 2 indicates, the larger the number of shots, the better the few-shot performance, similar to GPT-3 [8]. The performance improvement shows diminishing returns as the number of shots increases.
### Inference runtime analysis
Table 3 summarizes our inference runtime analysis. We measure the time taken to run FM3 and our primary baselines on the test sets of VQAv2 [2] and OKVQA [49] and average it over the total number of samples. These measurements are from a platform with an NVIDIA A100 with 32GB VRAM. **Bold** numbers indicate best performance. Underlined numbers indicate the next best baseline on which the % improvements for FM3 are based.
## 6 Ablation analysis
Table 4 summarizes the results of FM3's ablation experiments. **Bold** numbers indicate best performance. Underlined numbers indicate the baseline on which the % numbers are based. We analyze the impact of the following design decisions on FM3's performance:
* **Direct encoder fine-tuning with no hypernetworks.** While frozen modality encoders prevent catastrophic forgetting [15], in this ablation we adopt a hypernetwork-free architecture and fine-tune the encoders themselves with data from our target tasks.
* **Hypernetwork size selection.** We perform comparisons with varied parameter size allocations for hypernetworks to quantify the effect of hypernetwork size. The parameter count for hypernetworks is expressed as a percentage of the baseline FM3 model parameter count.
* **Compute/memory vs. performance trade-offs.** We vary the choice of text and vision encoders (which, in turn, varies the number of parameters and time complexity of the model). For our text encoder, we choose multilingual MiniLM [63] with 57% fewer parameters compared to our default choice of the paraphrase-multilingual-mpnet-base-v2 variant of multilingual MPNet [45]. For our vision encoder, we choose the vit-base-patch16-224 variant of ViT [64] with 78% fewer parameters compared to our default choice of CoCa-Base [46].
## 7 Future Work
While FM3 establishes a new SoTA on several tasks, there are significant opportunities for improvement centered around three major aspects: (i) data, (ii) model architecture, and (iii) loss function.
\begin{table}
\begin{tabular}{l c c} & **VQAv2** (acc.) & **OKVQA** (acc.) \\ \hline
**FM3 (default)** & **71.2** & **58.9** \\ \hline No hypernetworks & 64.3 (90.0\% \(\downarrow\)) & 52.1 (88.4\% \(\downarrow\)) \\ Hypernetworks with 5\% parameters & 69.5 (97.6\% \(\downarrow\)) & 56.7 (88.4\% \(\downarrow\)) \\ Text encoder: MiniLM & 68.1 (95.6\% \(\downarrow\)) & 55.4 (94.0\% \(\downarrow\)) \\ Vision encoder: ViT & 66.2 (92.9\% \(\downarrow\)) & 51.2 (86.9\% \(\downarrow\)) \\ Text/vision encoders: MiniLM/ViT & 65.5 (91.9\% \(\downarrow\)) & 52.3 (88.7\% \(\downarrow\)) \\ \hline \end{tabular}
\end{table}
Table 4: Ablation analysis of FM3 on VQAv2 and OKVQA with number of shots as 64. Default FM3 uses hypernetworks with 10% parameters, and MPNet/CoCa as text/vision encoders. Measurement units are % accuracy so higher is better.
\begin{table}
\begin{tabular}{l c c} & **VQAv2** (sec.) & **OKVQA** (sec.) \\ \hline
**FM3** & **0.187** (58\% \(\uparrow\)) & **0.214** (46\% \(\uparrow\)) \\ \hline _Flamingo_ & 0.353 & 0.371 \\ \hline _Frozen_ & 0.295 & 0.312 \\ \hline \end{tabular}
\end{table}
Table 3: Inference runtime analysis of FM3 vs. _Flamingo_ and _Frozen_ on VQAv2 and OKVQA. Measurements are based on wall clock time (sec.) so lower is better.
First, _Flamingo_ [15] highlights the importance of a diverse dataset amalgamated from various disparate sources (_Flamingo_ uses \(>\)2B image-text pairs vs. the 3.3M that FM3 was trained on) in training the neural network. Using the publicly available massive LAION-400M dataset [65] would be a great starting point. Second, the model architecture can incorporate other techniques that offer reasonably high performance with a reduced parameter count, such as low-rank adaptation methods, e.g., LoRA [29]. Third, following [66, 25], we can formulate the ranking loss [44] as a binary classification problem, which has been reported to lead to an increase in performance [66, 25]. In other words, given an aligned image-text pair, we randomly select a different image or a different caption to form an unaligned pair. Similar to FM3's current framework, the final concatenated multimodal embedding can still be used as the input for classification to predict whether the given pair is aligned or not. Finally, FM3 is easily extendable to other languages, tasks, and modalities.
## 8 Conclusion
FM3 combines the best of both worlds of in-context learning and fine-tuning as a front-runner in the niche domain of few-shot multilingual multimodal multitask learning. It offers a scalable architecture that can span modalities, tasks, and languages, all while setting a new standard with SoTA performance on a plethora of tasks and competitive performance on others. FM3 outperforms zero-/few-shot baselines on 7 out of 10 benchmarks with as few as 16 examples per task. Moreover, FM3 is competitive with task-specific SoTA models fine-tuned on up to hundreds of thousands of annotated examples. On 4 out of 10 tasks, FM3 even outperforms the fine-tuned SoTA despite using a single set of model weights and only 64 task-specific examples. Lastly, FM3 also yields a \(\sim\)50% latency improvement compared to the next best FSL SoTA baseline on the VQA and OKVQA datasets.
|
2306.00653 | Ultradifferentiable classes of entire functions | We study classes of ultradifferentiable functions defined in terms of small
weight sequences violating standard growth and regularity requirements. First,
we show that such classes can be viewed as weighted spaces of entire functions
for which the crucial weight is given by the associated weight function of the
so-called conjugate weight sequence. Moreover, we generalize results from M.
Markin from the so-called small Gevrey-setting to arbitrary convenient families
of (small) sequences and show how the corresponding ultradifferentiable
function classes can be used to detect boundedness of normal linear operators
on Hilbert spaces (associated to an evolution equation problem). Finally, we
study the connection between small sequences and the recent notion of dual
sequences introduced in the PhD-thesis of J. Jim\'{e}nez-Garrido. | David Nicolas Nenning, Gerhard Schindl | 2023-06-01T13:20:55Z | http://arxiv.org/abs/2306.00653v2 | # Ultradifferentiable classes of entire functions
###### Abstract.
We study classes of ultradifferentiable functions defined in terms of small weight sequences violating standard growth and regularity requirements. First, we show that such classes can be viewed as weighted spaces of entire functions for which the crucial weight is given by the associated weight function of the so-called conjugate weight sequence. Moreover, we generalize results from M. Markin from the so-called small Gevrey-setting to arbitrary convenient families of (small) sequences and show how the corresponding ultradifferentiable function classes can be used to detect boundedness of normal linear operators on Hilbert spaces (associated to an evolution equation problem). Finally, we study the connection between small sequences and the recent notion of dual sequences introduced in the PhD-thesis of J. Jimenez-Garrido.
Key words and phrases:Weight sequences, associated weight functions, growth and regularity properties for sequences, weighted spaces of entire functions, boundedness of linear operators 2020 Mathematics Subject Classification: 26A12, 30D15, 34G10, 46A13, 46E10, 47B02 D. N. Nenning and G. Schindl are supported by FWF-Project P33417-N
## 1. Introduction
This paper is devoted to classes of ultradifferentiable functions defined in terms of small weight sequences which violate standard growth and regularity requirements; as we show below, such classes consist of entire functions and can be identified with weighted spaces of entire functions. A second motivation comes from an abstract evolution equation problem: one considers
\[y^{\prime}(t)=Ay(t),\qquad t\geq 0, \tag{1}\]
where \(A\) is a normal linear operator (in general unbounded) in a complex Hilbert space,
and one asks the following question: Is a priori known smoothness of all (weak) solutions of this equation sufficient to conclude that the operator \(A\) is bounded? Markin has studied this problem within the small Gevrey setting, i.e. it has been shown that if each weak solution of this evolution equation belongs to some _small Gevrey class,_ then the operator \(A\) is bounded. To proceed, Markin considers (small) Gevrey classes with values in a Hilbert space. Based on this knowledge one can then study whether Markin's results also apply to different small classes and whether one can generalize resp. strengthen his approach.
The paper is structured as follows: In Section 2 we introduce the notion of the so-called conjugate sequence \(M^{*}\) (see (5)), we collect and compare all relevant (non-)standard growth and regularity assumptions on \(M\) and \(M^{*}\) and define the corresponding function classes.
In Section 3 we treat question \((i)\) and show that classes defined by small sequences \(M\) are isomorphic (as locally convex vector spaces) to weighted spaces of entire functions, see the main result Theorem 3.4. Thus we are generalizing the auxiliary result [15, Lemma 3.1] from the small Gevrey setting, see Section 3.2 for the comparison. The crucial weight in the weighted entire setting is given in terms of the so-called associated weight \(\omega_{M^{*}}\) (see Section 2.7) and so expressed in terms of the conjugate sequence \(M^{*}\).
As an application of this statement, concerning problem \((iii)\) above, we characterize for such small classes the inclusion relations in terms of the defining (small) sequences, see Theorem 3.9. This is possible by combining Theorem 3.4 with recent results for the weighted entire setting obtained by the second author in [20].
Section 4 is dedicated to problem \((ii)\) and the study resp. the generalization of Markin's results. We introduce more general families of appropriate small sequences and extend the sufficiency testing criterion for the boundedness of the operator \(A\) to these sets.
Finally, in the Appendix A we focus on \((iv)\) and show that dual sequences are serving as examples for non-standard sequences and hence this framework is establishing a close relation between known examples for weight sequences in the literature and small sequences for which the main results in this work can be applied (see Theorem A.6 and Corollary A.7).
## 2. Definitions and notations
### Basic notation
We write \(\mathbb{N}:=\{0,1,2,\dots\}\) and \(\mathbb{N}_{>0}:=\{1,2,\dots\}\). With \(\mathcal{E}\) we denote the class of all smooth functions and with \(\mathcal{H}(\mathbb{C})\) the class of entire functions.
### Weight sequences
Let \(M=(M_{p})_{p}\in\mathbb{R}_{>0}^{\mathbb{N}}\), we introduce also \(m=(m_{p})_{p}\) defined by \(m_{p}:=\frac{M_{p}}{p!}\) and \(\mu=(\mu_{p})_{p}\) by \(\mu_{p}:=\frac{M_{p}}{M_{p-1}}\), \(p\geq 1\), \(\mu_{0}:=1\). \(M\) is called _normalized_ if \(1=M_{0}\leq M_{1}\) holds true. If \(M_{0}=1\), then \(M_{p}=\prod_{i=1}^{p}\mu_{i}\) for all \(p\in\mathbb{N}\).
\(M\) is called _log-convex_, denoted by (lc) and abbreviated by \((M.1)\) in [10], if
\[\forall\;p\in\mathbb{N}_{>0}:\;M_{p}^{2}\leq M_{p-1}M_{p+1}.\]
This is equivalent to the fact that \(\mu\) is non-decreasing. If \(M\) is log-convex and normalized, then both \(M\) and \(p\mapsto(M_{p})^{1/p}\) are non-decreasing. In this case we get
\(M_{p}\geq 1\) for all \(p\geq 0\) and
\[\forall\ p\in\mathbb{N}_{>0}:\ \ (M_{p})^{1/p}\leq\mu_{p}. \tag{2}\]
Moreover, \(M_{p}M_{q}\leq M_{p+q}\) for all \(p,q\in\mathbb{N}\).
In addition, for \(M=(M_{p})_{p}\in\mathbb{R}_{>0}^{\mathbb{N}}\) it is known that
\[\liminf_{p\to+\infty}\mu_{p}\leq\liminf_{p\to+\infty}(M_{p})^{1/p}\leq\limsup _{p\to+\infty}(M_{p})^{1/p}\leq\limsup_{p\to+\infty}\mu_{p}. \tag{3}\]
For convenience we introduce the following set of sequences:
\[\mathcal{LC}:=\{M\in\mathbb{R}_{>0}^{\mathbb{N}}:\ M\text{ is normalized, log-convex, }\lim_{p\to+\infty}(M_{p})^{1/p}=+\infty\}.\]
We see that \(M\in\mathcal{LC}\) if and only if \(1=\mu_{0}\leq\mu_{1}\leq\dots\) with \(\lim_{p\to+\infty}\mu_{p}=+\infty\) (see e.g. [17, p. 104]) and there is a one-to-one correspondence between \(M\) and \(\mu=(\mu_{p})_{p}\) by taking \(M_{p}:=\prod_{i=0}^{p}\mu_{i}\).
\(M\) has _moderate growth_, denoted by (mg), if
\[\exists\ C\geq 1\ \forall\ p,q\in\mathbb{N}:\ M_{p+q}\leq C^{p+q+1}M_{p}M_{q}.\]
A weaker condition is _derivation closedness_, denoted by (dc), if
\[\exists\ A\geq 1\ \forall\ p\in\mathbb{N}:\ M_{p+1}\leq A^{p+1}M_{p}\Leftrightarrow \mu_{p+1}\leq A^{p+1}.\]
It is immediate that both conditions are preserved under the transformation \((M_{p})_{p}\mapsto(M_{p}p!^{s})_{p}\), \(s\in\mathbb{R}\) arbitrary. In the literature (mg) is also known under _stability of ultradifferential operators_ or \((M.2)\) and (dc) under \((M.2)^{\prime}\), see [10].
\(M\) has \((\beta_{1})\) (named after [16]) if
\[\exists\ Q\in\mathbb{N}_{>0}:\ \liminf_{p\to+\infty}\frac{\mu_{Qp}}{\mu_{p}}>Q,\]
and \((\gamma_{1})\) if
\[\sup_{p\in\mathbb{N}_{>0}}\frac{\mu_{p}}{p}\sum_{k\geq p}\frac{1}{\mu_{k}}<+\infty.\]
In [16, Proposition 1.1] it has been shown that for \(M\in\mathcal{LC}\) both conditions are equivalent and in the literature \((\gamma_{1})\) is also called "strong non-quasianalyticity condition". In [10] this is denoted by \((M.3)\). (In fact, there \(\frac{\mu_{p}}{p}\) is replaced by \(\frac{\mu_{p}}{p-1}\) for \(p\geq 2\) but which is equivalent to having \((\gamma_{1})\).)
A weaker condition on \(M\) is \((\beta_{3})\) (named after [22], see also [2]) which reads as follows:
\[\exists\ Q\in\mathbb{N}_{>0}:\ \liminf_{p\to+\infty}\frac{\mu_{Qp}}{\mu_{p}}>1.\]
For two weight sequences \(M=(M_{p})_{p\in\mathbb{N}}\) and \(N=(N_{p})_{p\in\mathbb{N}}\) we write \(M\leq N\) if \(M_{p}\leq N_{p}\) for all \(p\in\mathbb{N}\) and \(M\preccurlyeq N\) if
\[\sup_{p\in\mathbb{N}_{>0}}\left(\frac{M_{p}}{N_{p}}\right)^{1/p}<+\infty.\]
\(M\) and \(N\) are called equivalent, denoted by \(M\approx N\), if
\[M\preccurlyeq N\text{ and }N\preccurlyeq M.\]
Finally, we write \(M\lhd N\), if
\[\lim_{p\to+\infty}\left(\frac{M_{p}}{N_{p}}\right)^{1/p}=0.\]
In the relations above one can replace \(M\) and \(N\) simultaneously by \(m\) and \(n\) because \(M\preccurlyeq N\Leftrightarrow m\preccurlyeq n\) and \(M\lhd N\Leftrightarrow m\lhd n\).
For any \(\alpha\geq 0\) we set
\[G^{\alpha}:=(p!^{\alpha})_{p\in\mathbb{N}}.\]
So for \(\alpha>0\) this denotes the classical _Gevrey sequence_ of index/order \(\alpha\).
### Classes of ultradifferentiable functions
Let \(M\in\mathbb{R}^{\mathbb{N}}_{>0}\), \(U\subseteq\mathbb{R}^{d}\) be non-empty open and for \(K\subseteq\mathbb{R}^{d}\) compact we write \(K\subset\subset U\) if \(\overline{K}\subseteq U\), i.e., \(K\) is in \(U\) relatively compact. We introduce now the following spaces of ultradifferentiable function classes. First, we define the (local) classes of _Roumieu-type_ by
\[\mathcal{E}_{\{M\}}(U):=\{f\in\mathcal{E}(U):\;\forall\;K\subset\subset U\; \exists\;h>0:\;\|f\|_{M,K,h}<+\infty\},\]
and the classes of _Beurling-type_ by
\[\mathcal{E}_{(M)}(U):=\{f\in\mathcal{E}(U):\;\forall\;K\subset\subset U\; \forall\;h>0:\;\|f\|_{M,K,h}<+\infty\},\]
where we denote
\[\|f\|_{M,K,h}:=\sup_{\alpha\in\mathbb{N}^{d},x\in K}\frac{|f^{(\alpha)}(x)|}{ h^{|\alpha|}M_{|\alpha|}}.\]
For a sufficiently regular compact set \(K\) (e.g. with smooth boundary and such that \(\overline{K^{\circ}}=K\))
\[\mathcal{E}_{M,h}(K):=\{f\in\mathcal{E}(K):\|f\|_{M,K,h}<+\infty\}\]
is a Banach space and so we have the following topological vector spaces
\[\mathcal{E}_{\{M\}}(K):=\varinjlim_{h>0}\mathcal{E}_{M,h}(K),\]
and
\[\mathcal{E}_{\{M\}}(U)=\varprojlim_{K\subset\subset U}\varinjlim_{h>0} \mathcal{E}_{M,h}(K)=\varprojlim_{K\subset\subset U}\mathcal{E}_{\{M\}}(K).\]
Similarly, we get
\[\mathcal{E}_{(M)}(K):=\varprojlim_{h>0}\mathcal{E}_{M,h}(K),\]
and
\[\mathcal{E}_{(M)}(U)=\varprojlim_{K\subset\subset U}\varprojlim_{h>0}\mathcal{E}_{M,h}(K)=\varprojlim_{K\subset\subset U}\mathcal{E}_{(M)}(K).\]
We write \(\mathcal{E}_{[M]}\) if we mean either \(\mathcal{E}_{\{M\}}\) or \(\mathcal{E}_{(M)}\) but not mixing the cases. We omit writing the open set \(U\) if we do not want to specify the set where the functions are defined and formulate statements on the level of classes.
Usually one only considers real or complex valued functions, but we can analogously also define classes with values in Hilbert or even Banach spaces (for simplicity we assume in this case that the domain \(U\) is contained in \(\mathbb{R}\)) by simply using
\[\|f\|_{M,K,h}:=\sup_{p\in\mathbb{N},x\in K}\frac{\|f^{(p)}(x)\|}{h^{p}M_{p}},\]
in the respective definition, i.e. only the absolute value of \(f^{(p)}(x)\) is replaced by the norm in the Banach space. Observe that the (complex) derivative of a function with values in a Banach space is defined in complete analogy to the complex valued case. If we want to emphasize that the codomain is a Hilbert (or Banach) space \(H\), we write \(\mathcal{E}_{[M]}(U,H)\). In analogy to that also \(\mathcal{E}(U,H)\) shall denote the \(H\)-valued smooth functions on \(U\).
**Remark 2.1**.: Let \(M,N\in\mathbb{R}^{\mathbb{N}}_{>0}\), the following is well-known, see e.g. [17, Prop. 2.12]:
1. The relation \(M\lhd N\) implies \(\mathcal{E}_{\{M\}}\subseteq\mathcal{E}_{(N)}\) with continuous inclusion. Similarly, \(M\preccurlyeq N\) implies \(\mathcal{E}_{[M]}\subseteq\mathcal{E}_{[N]}\) with continuous inclusion.
2. If \(M\in\mathbb{R}^{\mathbb{N}}_{>0}\) is log-convex (and normalized) and \(\mathcal{E}_{\{M\}}(\mathbb{R})\subseteq\mathcal{E}_{(N)}(\mathbb{R})\) (as sets) then by the existence of so-called \(M\)-characteristic functions, see [17, Lemma 2.9], [25, Thm. 1] and the proof in [21, Prop. 3.1.2], we get \(M\lhd N\) as well.
### Ultradifferentiable classes of entire functions
We shall tacitly assume that a holomorphic function on (an open subset of) \(\mathbb{C}\) may have values in a Hilbert or even Banach space. The main theorems of one variable complex analysis (Cauchy integral formula, power series representation of holomorphic functions,...) hold mutatis mutandis, by virtue of the Hahn-Banach theorem, just as in the complex valued case.
First let us recall that for any open (and connected) set \(U\subseteq\mathbb{R}\) the space \(\mathcal{E}_{(G^{1})}(U,H)\) can be identified with \(\mathcal{H}(\mathbb{C},H)\), the class of entire functions and both spaces are isomorphic as Frechet spaces. The isomorphism \(\cong\) is given by
\[E:\mathcal{E}_{(G^{1})}(U,H)\to\mathcal{H}(\mathbb{C},H),\quad f\mapsto E(f):= \sum_{k=0}^{+\infty}\frac{f^{(k)}(x_{0})}{k!}z^{k}, \tag{4}\]
where \(x_{0}\) is any fixed point in \(U\). The inverse is given by restriction to \(U\), and its continuity follows easily from the Cauchy inequalities.
We apply the observation from Remark 2.1 to \(N\equiv G^{1}\).
**Lemma 2.2**.: _Let \(M\in\mathbb{R}^{\mathbb{N}}_{>0}\) be given._
1. _If_ \(\lim_{p\to+\infty}(m_{p})^{1/p}=0\)_, then_ \(\mathcal{E}_{\{M\}}\subseteq\mathcal{E}_{(G^{1})}(\cong\mathcal{H}(\mathbb{C}))\) _with continuous inclusion._
2. _Let_ \(M\) _be log-convex and normalized. Assume that_ \[\mathcal{E}_{\{M\}}(\mathbb{R})\subseteq\mathcal{E}_{(G^{1})}(\mathbb{R})( \cong\mathcal{H}(\mathbb{C}))\] _holds (as sets), then_ \(\lim_{p\to+\infty}(m_{p})^{1/p}=0\) _follows. In particular, this implication holds for any_ \(M\in\mathcal{LC}\)_._
Moreover, in the situation of Lemma 2.2 the inclusion always has to be strict. Thus spaces \(\mathcal{E}_{[M]}\) for sequences with \(m_{p}^{1/p}\to 0\) form classes of entire functions. Subsequently, we show that those spaces are weighted classes of entire functions and the weight is given by the _associated weight function_ of the _conjugate weight sequence_. We thoroughly define and investigate those terms in the following sections.
### Conjugate weight sequence
Let \(M\in\mathbb{R}^{\mathbb{N}}_{>0}\), then we define the _conjugate sequence_\(M^{*}=(M^{*}_{p})_{p\in\mathbb{N}}\) by
\[M^{*}_{p}:=\frac{p!}{M_{p}}=\frac{1}{m_{p}},\ \ p\in\mathbb{N}, \tag{5}\]
i.e. \(M^{*}:=m^{-1}\) for short. Hence, for \(p\geq 1\) the quotients \(\mu^{*}=(\mu^{*}_{p})_{p}\) are given by
\[\mu^{*}_{p}:=\frac{M^{*}_{p}}{M^{*}_{p-1}}=\frac{m_{p-1}}{m_{p}}=\frac{p!M_{p- 1}}{(p-1)!M_{p}}=\frac{p}{\mu_{p}}, \tag{6}\]
and we set \(\mu^{*}_{0}:=1\). By these formulas it is immediate that there is a one-to-one correspondence between \(M\) and \(M^{*}\).
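As a purely numerical illustration of this definition (and only as such), the following sketch computes the conjugate sequence and its quotients for the concrete choice \(M=G^{1/3}\) and checks the identities \(\mu^{*}_{p}=p/\mu_{p}\), \(M^{*}_{p}M_{p}=p!\) and \(M^{**}=M\) on a finite range of indices; both the chosen sequence and the truncation are assumptions of the sketch.

```python
# Minimal numerical sketch (illustration only, not part of the formal argument):
# the conjugate sequence M*_p = p!/M_p and its quotients for M = G^{1/3}.
import math

P = 12
M = [math.factorial(p) ** (1.0 / 3.0) for p in range(P)]       # M_p = p!^{1/3}
Mstar = [math.factorial(p) / M[p] for p in range(P)]           # M*_p = p!/M_p = p!^{2/3}

for p in range(1, P):
    mu_p = M[p] / M[p - 1]                                     # mu_p
    mustar_p = Mstar[p] / Mstar[p - 1]                         # mu*_p
    assert abs(mustar_p - p / mu_p) < 1e-9                     # identity (6)
    assert abs(Mstar[p] * M[p] - math.factorial(p)) < 1e-6 * math.factorial(p)   # M*_p M_p = p!
    assert abs(math.factorial(p) / Mstar[p] - M[p]) < 1e-9 * M[p]                # (M*)* = M

print("mu*_p for p = 1..%d:" % (P - 1),
      [round(Mstar[p] / Mstar[p - 1], 3) for p in range(1, P)])
```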
### Properties of conjugate weight sequences
We summarize some immediate consequences for \(M^{*}\). Let \(M,N\in\mathbb{R}^{\mathbb{N}}_{>0}\) be given.
1. First, we immediately have \[\forall\;p\in\mathbb{N}:\quad M^{**}_{p}=M_{p},\;\;\;\;\;M^{*}_{p}\cdot M_{p}=p!,\] i.e. \[M^{**}\equiv M,\;\;\;\;\;\;\;\;\;\;\;\;M^{*}\cdot M\equiv G^{1}.\] Moreover, (see also the subsequent Lemma 2.6) \[M^{*}{\preccurlyeq M}\Longleftrightarrow G^{1/2}{\preccurlyeq M},\;\;\;\;\;M{ \preccurlyeq M}^{*}\Longleftrightarrow M{\preccurlyeq G^{1/2}},\] and alternatively the relation \(\preccurlyeq\) can be replaced by \(\leq\). We also get \(M^{*}_{0}=M^{-1}_{0}\), i.e. \(M^{*}\) is normalized if and only if \(1=M_{0}\geq M_{1}\).
2. \(M{\preccurlyeq N}\) holds if and only if \(N^{*}{\preccurlyeq M}^{*}\) and so \(M{\approx}N\) if and only if \(M^{*}{\approx}N^{*}\).
3. We get the following: (a) \(\lim_{p\to+\infty}(M^{*}_{p})^{1/p}=+\infty\) holds if and only if \(\lim_{p\to+\infty}(m_{p})^{1/p}=0\) and this implies \(\mathcal{E}_{\{M\}}\subseteq\mathcal{E}_{(G^{1})}\) (with strict inclusion). If in addition \(M\) is log-convex (and normalized) then all three assertions are equivalent, see Lemma 2.2. (b) If \(\lim_{p\to+\infty}(M_{p})^{1/p}=+\infty\), then by \(\mu^{*}_{p}/p=\frac{1}{\mu_{p}}\), (3) and Stirling's formula we get both \(\mu^{*}_{p}/p\to 0\) and \((m^{*}_{p})^{1/p}\to 0\) as \(p\to+\infty\). (c) \(\lim_{p\to+\infty}(m^{*}_{p})^{1/p}=+\infty\) holds if and only if \(\lim_{p\to+\infty}(M_{p})^{1/p}=0\).
4. \(M^{*}\) is log-convex, i.e. \(\mu^{*}_{p+1}\geq\mu^{*}_{p}\) for all \(p\in\mathbb{N}_{>0}\), if and only if \(m\) is _log-concave_, i.e. (7) \[\forall\;p\in\mathbb{N}_{>0}:\quad m^{2}_{p}\geq m_{p-1}m_{p+1}\Longleftrightarrow \mu^{*}_{p+1}\geq\mu^{*}_{p},\] which in turn is equivalent to the map \(p\mapsto\frac{\mu_{p}}{p}\) being non-increasing. Analogously as in [21, Lemma 2.0.4] we get: If a sequence \(S\in\mathbb{R}^{\mathbb{N}}_{>0}\) is log-concave and satisfying \(S_{0}=1\), then the mapping \(p\mapsto(S_{p})^{1/p}\) is non-increasing. Consequently, if \(M^{*}\) is log-convex and if \(1=M^{*}_{0}=m_{0}=M_{0}\), then \(p\mapsto(m_{p})^{1/p}\) is non-increasing.
5. If \(M\) is log-convex (and having \(M_{0}=1\)), then \(M^{*}\) has (mg): In this case by [21, Lemma 2.0.6] for all \(p,q\in\mathbb{N}\) we get \(M_{p}M_{q}\leq M_{p+q}\Leftrightarrow m_{p}m_{q}\leq\frac{(p+q)!}{p!q!}m_{p+q}\) and so \(m_{p}m_{q}\leq 2^{p+q}m_{p+q}\). Hence \(M^{*}_{p+q}\leq 2^{p+q}M^{*}_{p}M^{*}_{q}\) holds true.
6. \(M^{*}\) has (dc) if and only if \(\mu^{*}_{p}\leq A^{p}\Leftrightarrow\frac{p}{\mu_{p}}\leq A^{p}\), so if and only if (8) \[\exists\;A\geq 1\;\forall\;p\in\mathbb{N}:\quad\mu_{p}\geq\frac{p}{A^{p}},\] which can be considered as "dual derivation closedness". Note that this property is preserved under the mapping \((M_{p})_{p}\mapsto(M_{p}p!^{s})_{p}\), \(s\in\mathbb{R}\) arbitrary, and it is mild: \(\liminf_{p\to+\infty}\mu_{p}/p>0\) is sufficient to conclude.
7. \(M^{*}\) has \((\beta_{1})\), i.e. \(\liminf_{p\to+\infty}\frac{\mu^{*}_{Qp}}{\mu^{*}_{p}}>Q\) for some \(Q\in\mathbb{N}_{\geq 2}\), if and only if \(\liminf_{p\to+\infty}\frac{\mu_{p}}{\mu_{Qp}}>1\); similarly \(M^{*}\) has \((\beta_{3})\) if and only if \(\liminf_{p\to+\infty}\frac{\mu_{p}}{\mu_{Qp}}>\frac{1}{Q}\).
Using those insights, we may conclude the following.
**Lemma 2.3**.: _Let \(M\in\mathbb{R}^{\mathbb{N}}_{>0}\) be given with \(1=M_{0}\geq M_{1}\) and let \(M^{*}\) be the conjugate sequence defined via (5). Then:_
1. \(M^{*}\in\mathcal{LC}\) _if and only if_ \(m\) _is log-concave and_ \(\lim_{p\to+\infty}(m_{p})^{1/p}=0\)_._
2. \(M^{*}\in\mathcal{LC}\) _implies_ \(\mathcal{E}_{\{M\}}\subseteq\mathcal{E}_{(G^{1})}\) _with strict inclusion._
3. _If in addition_ \(M\) _is log-convex with_ \(1=M_{0}=M_{1}\)_, then the inclusion_ \(\mathcal{E}_{\{M\}}(\mathbb{R})\subseteq\mathcal{E}_{(G^{1})}(\mathbb{R})\) _gives_ \(\lim_{p\to+\infty}(M^{*}_{p})^{1/p}=+\infty\)_. Moreover,_ \(M^{*}\) _has moderate growth._
**Remark 2.4**.: Let \(M\in\mathbb{R}^{\mathbb{N}}_{>0}\) be given and we comment on the log-concavity and related conditions (for the sequence \(m\)):
(a) If \(m\) is not log-concave but satisfies \[\exists\;H\geq 1\;\forall\;1\leq p\leq q:\;\;\;\frac{\mu_{q}}{q}\leq H\frac{\mu_{p}}{p},\] i.e. the sequence \((\mu_{p}/p)_{p\in\mathbb{N}_{>0}}\) is _almost decreasing_, then the sequence \(L\) defined in terms of the corresponding quotient sequence \(\lambda=(\lambda_{p})_{p\in\mathbb{N}}\) given by (9) \[\lambda_{p}:=H^{-1}p\sup_{q\geq p}\frac{\mu_{q}}{q},\;\;\;p\geq 1,\qquad\lambda_{0}:=1,\] satisfies (10) \[\forall\;p\geq 1:\;\;\;H^{-1}\frac{\mu_{p}}{p}\leq\frac{\lambda_{p}}{p}\leq\frac{\mu_{p}}{p}.\] Then we get the following: (i) \(L\) and \(M\) are equivalent and so \(L^{*}\) is equivalent to \(M^{*}\), too. (ii) \(p\mapsto\frac{\lambda_{p}}{p}\) is non-increasing, i.e. \(l\) is log-concave, and so \(L^{*}\) is log-convex. (iii) If \(1=M_{0}\geq M_{1}\), i.e. if \(\mu_{1}\leq 1\), then \(1=L_{0}\geq L_{1}\) is valid since \(L_{1}=\lambda_{1}\leq\mu_{1}\leq 1\) holds true. Thus \(L^{*}\) is normalized. (iv) \(\lim_{p\to+\infty}(m_{p})^{1/p}=0\) if and only if \(\lim_{p\to+\infty}(l_{p})^{1/p}=0\) (with \(l_{p}:=L_{p}/p!\)). (v) Finally, if \(M\) is log-convex, then \(L\) shares this property: We have \(\lambda_{p}\leq\lambda_{p+1}\) if and only if \(p\sup_{q\geq p}\frac{\mu_{q}}{q}\leq(p+1)\sup_{q\geq p+1}\frac{\mu_{q}}{q}\) for all \(p\geq 1\). When \(p\geq 1\) is fixed, then clearly \(p\frac{\mu_{q}}{q}\leq(p+1)\frac{\mu_{q}}{q}\) for all \(q\geq p+1\). If \(q=p\), then \[p\frac{\mu_{q}}{q}=\mu_{p}\leq\mu_{p+1}=(p+1)\frac{\mu_{p+1}}{p+1}\leq(p+1)\sup_{q\geq p+1}\frac{\mu_{q}}{q},\] and so the desired inequality is verified. Summarizing, if \(M\in\mathbb{R}^{\mathbb{N}}_{>0}\) satisfies \(1=M_{0}\geq M_{1}\) and \(\lim_{p\to+\infty}(m_{p})^{1/p}=0\), then \(L^{*}\in\mathcal{LC}\), see \((a)\) in Lemma 2.3. If \(M\) is in addition log-convex, then \(L\) has this property too. The definition (9) is motivated by [19, Lemma 8] and [9, Prop. 4.15].
(b) If \(m\) is log-concave, then for any \(s\geq 0\) also the sequence \((m_{p}/p!^{s})_{p\in\mathbb{N}}\) is log-concave because the mapping \(p\mapsto\frac{\mu_{p}}{p^{s}}\) is still non-increasing (see (7)). However, for the sequence \((p!^{s}m_{p})_{p\in\mathbb{N}}\) this is not clear in general.
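The regularization (9) can be observed numerically. The following sketch (illustration only) uses an ad-hoc sequence whose quotients \(\mu_{p}/p\) are almost decreasing but not monotone, approximates \(H\) and the suprema over a finite index range, and verifies (10) as well as the monotonicity of \(p\mapsto\lambda_{p}/p\); the concrete sequence and the finite-range approximations are assumptions of the sketch.

```python
# Purely numerical illustration of the regularization (9).
import math

P = 200
mu = [math.sqrt(p) * (1.0 + 0.5 * (-1) ** p) for p in range(1, P + 1)]
ratio = [mu[p - 1] / p for p in range(1, P + 1)]               # mu_p / p

# finite-range estimate of H:  mu_q/q <= H * mu_p/p for all p <= q in the range
H = max(ratio[q] / ratio[p] for p in range(P) for q in range(p, P))

# lambda_p := H^{-1} * p * sup_{q >= p} mu_q/q   (sup over the finite range)
lam = [p / H * max(ratio[p - 1:]) for p in range(1, P + 1)]

for p in range(1, P + 1):
    # inequality (10):  H^{-1} mu_p/p <= lambda_p/p <= mu_p/p
    assert ratio[p - 1] / H - 1e-12 <= lam[p - 1] / p <= ratio[p - 1] + 1e-12
for p in range(1, P):
    # p -> lambda_p/p is non-increasing, i.e. l is log-concave
    assert lam[p] / (p + 1) <= lam[p - 1] / p + 1e-12

print("finite-range H:", round(H, 3))
```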
**Example 2.5**.: Let \(M\equiv G^{s}\) for some \(0\leq s<1\), see [12]. (In fact, in [12] instead of \(G^{s}\) the sequence \((p^{ps})_{p\in\mathbb{N}}\) is treated, but this sequence is equivalent to \(G^{s}\) by Stirling's formula.) Then \(m\equiv G^{s-1}\) with \(-1\leq s-1<0\) and so \(m\) corresponds to a Gevrey sequence with negative index. We get \(\lim_{p\to+\infty}(m_{p})^{1/p}=0\) and \(m\) is log-concave. Moreover \(M^{*}\equiv G^{1-s}\) and so clearly \(M^{*}\in\mathcal{LC}\).
In particular, if \(s=\frac{1}{2}\) then \((G^{\frac{1}{2}})^{*}=G^{\frac{1}{2}}\) and we prove the following statement which underlines the importance of \(G^{\frac{1}{2}}\) (up to equivalence of sequences) w.r.t. the action \(M\mapsto M^{*}\).
**Lemma 2.6**.: _Let \(M\in\mathbb{R}^{\mathbb{N}}_{>0}\) be given. Then the following are equivalent:_
1. _We have_ \(M{\preccurlyeq}M^{*}\)_._
2. _We have_ \[\exists\;C,h\geq 1\;\forall\;p\in\mathbb{N}:\quad M_{p}^{2}\leq Ch^{p}p!,\] _i.e._ \(M{\preccurlyeq}G^{1/2}\)_._
3. _We have_ \(G^{1/2}{\preccurlyeq}M^{*}\)_._
_The analogous equivalences are valid if \(M^{*}{\preccurlyeq}M\) resp. if relation \(\preccurlyeq\) is replaced by \(\leq\). Thus \(M{\approx}M^{*}\) if and only if \(M{\approx}G^{1/2}\) and \(M=M^{*}\) if and only if \(M=G^{1/2}=M^{*}\)._
_In particular, \(G^{1/2}=(G^{1/2})^{*}\) holds true._
Proof.: The equivalences follow immediately from the definition of \(M^{*}\) in (5).
### Associated weight function
Let \(M\in\mathbb{R}^{\mathbb{N}}_{>0}\) (with \(M_{0}=1\)), then the _associated function_\(\omega_{M}:\mathbb{R}_{\geq 0}\to\mathbb{R}\cup\{+\infty\}\) is defined by
\[\omega_{M}(t):=\sup_{p\in\mathbb{N}}\log\left(\frac{t^{p}}{M_{p}}\right)\quad \text{for}\;t>0,\qquad\quad\omega_{M}(0):=0. \tag{11}\]
For an abstract introduction of the associated function we refer to [11, Chapitre I], see also [10, Definition 3.1]. If \(\liminf_{p\to+\infty}(M_{p})^{1/p}>0\), then \(\omega_{M}(t)=0\) for sufficiently small \(t\), since \(\log\left(\frac{t^{p}}{M_{p}}\right)<0\Leftrightarrow t<(M_{p})^{1/p}\) holds for all \(p\in\mathbb{N}_{>0}\). Moreover, under this assumption \(t\mapsto\omega_{M}(t)\) is a continuous non-decreasing function, which is convex in the variable \(\log(t)\) and tends faster to infinity than any \(\log(t^{p})\), \(p\geq 1\), as \(t\to+\infty\). The condition \(\lim_{p\to+\infty}(M_{p})^{1/p}=+\infty\) implies that \(\omega_{M}(t)<+\infty\) for each \(t>0\); this shall be considered as a basic assumption for defining \(\omega_{M}\).
Given \(M\in\mathcal{LC}\), then by [11, 1.8 III] we get that \(\omega_{M}(t)=0\) on \([0,\mu_{1}]\).
Closely related to \(\omega_{M}\) is the following counting function
\[\Sigma_{M}(t):=|\{p\in\mathbb{N}_{>0}:\mu_{p}\leq t\}|,\ \ t\geq 0. \tag{12}\]
By definition it is obvious that \(\Sigma_{M}(t)=0\) on \([0,\mu_{1})\) and \(\Sigma_{M}(t)=p\) on \([\mu_{p},\mu_{p+1})\) provided that \(\mu_{p}<\mu_{p+1}\). Note that for \(M\in\mathcal{LC}\) we have \(\lim_{p\to+\infty}\mu_{p}=+\infty\), see e.g. [17, p. 104].
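For concreteness, the following sketch (illustration only) evaluates \(\omega_{M}\) and \(\Sigma_{M}\) for the Gevrey sequence \(M=G^{2}\); the supremum in (11) is replaced by a maximum over a finite index range, which is an assumption of the sketch.

```python
# Numerical sketch (illustration only): the associated function (11) and the
# counting function (12) for M = G^2, i.e. M_p = p!^2 and mu_p = p^2.
import math

P = 400
logM = [2.0 * math.lgamma(p + 1) for p in range(P)]            # log M_p = 2 log p!
mu = [p * p for p in range(1, P)]                              # mu_p = p^2

def omega_M(t):
    # omega_M(t) = sup_p log(t^p / M_p), omega_M(0) = 0
    return max(p * math.log(t) - logM[p] for p in range(P)) if t > 0 else 0.0

def Sigma_M(t):
    # Sigma_M(t) = #{p >= 1 : mu_p <= t}
    return sum(1 for m in mu if m <= t)

for t in [0.5, 1.0, 4.0, 25.0, 1000.0]:
    print(f"t = {t:7.1f}   omega_M(t) = {omega_M(t):9.3f}   Sigma_M(t) = {Sigma_M(t)}")

assert omega_M(0.5) == 0.0 and omega_M(1.0) == 0.0             # omega_M = 0 on [0, mu_1]
assert Sigma_M(3.9) == 1 and Sigma_M(4.0) == 2 and Sigma_M(9.1) == 3
```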
## 3. Ultradifferentiable classes as weighted spaces of entire functions
In Section 2.4, we saw that ultradifferentiable classes \(\mathcal{E}_{[M]}\) with \(m_{p}^{1/p}\to 0\) are classes of entire functions. Now we go further and identify those classes with weighted spaces of entire functions, where the weight is given by the associated weight function of the conjugate weight sequence \(M^{*}\). To this end, let us first recall some notation already introduced in [20] (to be precise, in [20] the weighted
spaces of entire functions have only been defined for the codomain \(\mathbb{C}\), but everything can be done completely analogously for \(H\) instead of \(\mathbb{C}\)): For a Hilbert space \(H\), and a weight \(v:[0,+\infty)\to(0,+\infty)\), i.e. \(v\) is continuous, non-increasing and rapidly decreasing, we set
\[\mathcal{H}_{v}^{\infty}(\mathbb{C},H):=\{f\in\mathcal{H}(\mathbb{C},H):\|f\|_ {v}:=\sup_{z\in\mathbb{C}}\|f(z)\|v(|z|)<+\infty\}.\]
We shall assume w.l.o.g. that \(v\) is _normalized_, i.e. \(v(t)=1\) for \(t\in[0,1]\) (if this is not the case one can always switch to another normalized weight \(w\) with \(\mathcal{H}_{v}^{\infty}(\mathbb{C},H)=\mathcal{H}_{w}^{\infty}(\mathbb{C},H)\)).
For a non-increasing sequence of weights \(\underline{\mathcal{V}}=(v_{n})_{n\in\mathbb{N}_{>0}}\) (see [20] for details), we define the (LB)-space
\[\mathcal{H}_{\underline{\mathcal{V}}}^{\infty}(\mathbb{C},H):=\varinjlim_{n\in \mathbb{N}_{>0}}\mathcal{H}_{v_{n}}^{\infty}(\mathbb{C},H),\]
and for a non-decreasing sequence of weights \(\overline{\mathcal{V}}=(v_{n})_{n\in\mathbb{N}_{>0}}\), we define the Frechet space
\[\mathcal{H}_{\overline{\mathcal{V}}}^{\infty}(\mathbb{C},H):=\varprojlim_{n\in\mathbb{N}_{>0}}\mathcal{H}_{v_{n}}^{\infty}(\mathbb{C},H).\]
**Remark 3.1**.: In [20], the spaces are denoted by \(H_{v}^{\infty}(\mathbb{C})\) instead of \(\mathcal{H}_{v}^{\infty}(\mathbb{C},\mathbb{C})\). We use \(\mathcal{H}\) in order to avoid any confusion with the Hilbert space \(H\). In addition, \(\mathcal{H}_{v}^{\infty}(\mathbb{C})\) shall denote \(\mathcal{H}_{v}^{\infty}(\mathbb{C},\mathbb{C})\).
The following Lemma can be used to infer statements for \(\mathcal{H}_{v}^{\infty}(\mathbb{C},H)\) from the respective statements for \(\mathcal{H}_{v}^{\infty}(\mathbb{C})\).
**Lemma 3.2**.: _Let \(H\) be a (complex) Hilbert space and \(v\) be a weight. Then_
\[f\in\mathcal{H}_{v}^{\infty}(\mathbb{C},H)\ \Leftrightarrow z\mapsto\langle f(z),y \rangle\in\mathcal{H}_{v}^{\infty}(\mathbb{C})\text{ for all }y\in H.\]
Proof.: For the non-trivial part, take some \(f\in\mathcal{H}(\mathbb{C},H)\) such that \(|\langle f(z),y\rangle|v(|z|)\leq C_{y}\) for every \(y\in H\). Then this just means that \(\{f(z)v(|z|):\ z\in\mathbb{C}\}\) is weakly bounded (in \(H\)), which implies boundedness, and this just means that \(f\in\mathcal{H}_{v}^{\infty}(\mathbb{C},H)\).
**Remark 3.3**.: Of course the same argument holds for a family of weights \(\overline{\mathcal{V}}\) or \(\underline{\mathcal{V}}\).
For a given weight \(v\) and \(c>0\), we shall write \(v_{c}(t):=v(ct)\) and \(v^{c}(t):=v(t)^{c}\), and set
\[\underline{\mathcal{V}}_{\mathfrak{c}}=(v_{c})_{c\in\mathbb{N}_{>0}},\text{ and }\overline{\mathcal{V}}_{\mathfrak{c}}=(v_{1/c})_{c\in\mathbb{N}_{>0}},\]
and
\[\underline{\mathcal{V}}^{\mathfrak{c}}=(v^{c})_{c\in\mathbb{N}_{>0}},\text{ and }\overline{\mathcal{V}}^{\mathfrak{c}}=(v^{1/c})_{c\in\mathbb{N}_{>0}},\]
in particular \(\underline{\mathcal{V}}_{\mathfrak{c}}\) and \(\underline{\mathcal{V}}^{\mathfrak{c}}\) are non-increasing, and \(\overline{\mathcal{V}}_{\mathfrak{c}}\) and \(\overline{\mathcal{V}}^{\mathfrak{c}}\) are non-decreasing sequences of weights.
Let \(M\in\mathbb{R}_{>0}^{\mathbb{N}}\) be given with \(M_{0}=1\), such that \(M\) is (lc) and satisfies \(\lim_{p\to+\infty}(M_{p})^{1/p}=+\infty\) (see [20, Def. 2.4, Rem. 2.6]). Then we denote by \(\underline{\mathcal{M}}_{\mathfrak{c}},\underline{\mathcal{M}}^{\mathfrak{c}},\overline{\mathcal{M}}_{\mathfrak{c}}\), and \(\overline{\mathcal{M}}^{\mathfrak{c}}\) the respective sequences of weights defined by choosing \(v(t):=v_{M}(t):=e^{-\omega_{M}(t)}\). If we write \(\underline{\mathcal{N}}_{\mathfrak{c}},\underline{\mathcal{N}}^{\mathfrak{c}},\overline{\mathcal{N}}_{\mathfrak{c}}\), and \(\overline{\mathcal{N}}^{\mathfrak{c}}\) we mean the respective definition for another weight sequence \(N\). Finally, we write \(\underline{\mathcal{M}^{*}}_{\mathfrak{c}},\underline{\mathcal{M}^{*}}^{\mathfrak{c}},\overline{\mathcal{M}^{*}}_{\mathfrak{c}}\), and \(\overline{\mathcal{M}^{*}}^{\mathfrak{c}}\) for the systems corresponding to the conjugate sequence \(M^{*}\).
**Theorem 3.4**.: _Let \(M\in\mathbb{R}^{\mathbb{N}}_{>0}\) with \(M_{0}=1\geq M_{1}\) be given such that \(\lim_{p\to+\infty}(m_{p})^{1/p}=0\) and \(m\) is log-concave. Let \(I\subseteq\mathbb{R}\) be an interval, then_
\[E:\mathcal{E}_{\{M\}}(I,H)\to\mathcal{H}^{\infty}_{\underline{\mathcal{M}^{*}}_{\mathfrak{c}}}(\mathbb{C},H),\quad f\mapsto E(f):=\sum_{k=0}^{+\infty}\frac{f^{(k)}(x_{0})}{k!}(z-x_{0})^{k}\]
_is an isomorphism (of locally convex spaces) for any fixed \(x_{0}\in I\). Moreover, with the same definition for \(E\), also_
\[E:\mathcal{E}_{(M)}(I,H)\to\mathcal{H}^{\infty}_{\overline{\mathcal{M}^{*}}_{\mathfrak{c}}}(\mathbb{C},H)\]
_is an isomorphism._
**Remark 3.5**.: Before proving this main statement we give the following observations:
* By Lemma 2.3, the assumptions on \(M\) imply \(M^{*}\in\mathcal{LC}\). It is easy to check that any _small Gevrey_ class, i.e. choosing \(M_{j}=j!^{\alpha}\) for some \(\alpha\in[0,1)\), satisfies the assumptions of Theorem 3.4.
* Note that assumptions \(M_{0}=1\geq M_{1}\) and log-concavity are not preserved under equivalence of weight sequences. However, equivalent sequences yield the same ultradifferentiable function classes, equivalent conjugate sequences (recall \((ii)\) in Section 2.6) and finally (by definition) also the same weighted entire function classes, see [20, Prop. 3.8]. Summarizing, both isomorphisms in Theorem 3.4 are preserved under equivalence of weight sequences.
Proof of Theorem 3.4.: We start with the Roumieu case and assume w.l.o.g. that \(x_{0}=0\). Let us take \(f\in\mathcal{E}_{M,h}(K,H)\) for some compact set \(K\subset\subset I\) and some \(h>0\), i.e. there is \(A(=\|f\|_{M,K,h})\) such that for all \(x\in K\) and all \(k\in\mathbb{N}\) we have
\[\|f^{(k)}(x)\|\leq Ah^{k}M_{k}.\]
Then we infer immediately that
\[\|E(f)(z)\|\leq A\sum_{k=0}^{+\infty}\frac{h^{k}M_{k}}{k!}|z|^{k}=A\sum_{k=0}^ {+\infty}\frac{h^{k}}{M_{k}^{*}}|z|^{k}\leq 2A\exp(\omega_{M^{*}}(2h|z|)).\]
Therefore \(E\) maps \(\mathcal{E}_{M,h}(K,H)\) continuously into \(\mathcal{H}^{\infty}_{v_{M^{*},2h}}(\mathbb{C},H)\) and this immediately implies continuity of \(E\) as a mapping defined on the inductive limit with respect to \(h\).
In the Beurling case a function \(f\in\mathcal{E}_{(M)}(I,H)\) lies in \(\mathcal{E}_{M,h}(K,H)\) for any \(h>0\), and thus the above reasoning immediately gives that \(E\) is continuous as a mapping into \(\mathcal{H}^{\infty}_{\overline{\mathcal{M}^{*}}_{\mathfrak{c}}}(\mathbb{C},H)\).
Let us now show continuity of the inverse mapping, which is clearly given by restricting an entire function to the interval \(I\). Take some \(F\in\mathcal{H}^{\infty}_{v_{M^{*},k}}(\mathbb{C},H)\), then
\[\|F(z)\|\leq Ae^{\omega_{M^{*}}(k|z|)}.\]
for \(A=\|F\|_{v_{M^{*},k}}>0\). Consider an arbitrary \(K\subset\subset I\) and let \(R\geq 1\) be such that \(K\subset[-R,R]\). Then take \(r\geq 2R\), which ensures that \(K+B(0,r)\subset B(0,2r)\) and where \(B(0,r)\) denotes the ball around \(0\) of radius \(r\). Then by the Cauchy estimates we infer for such \(r\) and all \(x\in K\) and \(n\in\mathbb{N}\)
\[\|F^{(n)}(x)\|\leq An!\frac{e^{\omega_{M^{*}}(2kr)}}{r^{n}}. \tag{13}\]
Since \(e^{\omega_{M^{*}}(r)}=\frac{r^{n}}{M_{n}^{*}}\) for \(r\in[\mu_{n}^{*},\mu_{n+1}^{*})\) (see e.g. [11, 1.8 III]), we may plug in some \(r\in[\mu_{n}^{*}/(2k),\mu_{n+1}^{*}/(2k))\) in (13); for all \(n\) large enough such that \(\mu_{n}^{*}/(2k)\geq 2R\) (thus depending on chosen compact \(K\)) and which is possible since \(M^{*}\in\mathcal{LC}\) and so \(\mu_{n}^{*}\to+\infty\). Hence we get
\[\|F^{(n)}(x)\|\leq An!\frac{(2kr)^{n}}{r^{n}M_{n}^{*}}=A(2k)^{n}M_{n}.\]
For the remaining (finitely, say \(n_{0}\)) many integers \(n\) with \(\mu_{n}^{*}/(2k)<2R\), we can estimate
\[\|F^{(n)}(x)\|\leq CA(2k)^{n}M_{n}\]
where, e.g. \(C=n_{0}!e^{\omega_{M^{*}}(2kR)}\). Altogether we have shown
\[\|F|_{I}\|_{M,K,2k}\leq C\|F\|_{v_{M^{*},k}}\]
which proves continuity of the inverse mapping in both the Roumieu and the Beurling case.
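As a numerical sanity check of the Roumieu estimate above (and only as such), consider the simplest case \(M=G^{0}\), so \(M^{*}=G^{1}\), and \(f=\sin\), which lies in \(\mathcal{E}_{M,1}(K,\mathbb{C})\) with \(A=h=1\); the estimate then reads \(|\sin z|\leq 2\exp(\omega_{G^{1}}(2|z|))\). The sketch below verifies this on sampled points, with the supremum defining \(\omega_{G^{1}}\) approximated over a finite index range.

```python
# Sanity check (numerical illustration only) of the Roumieu growth estimate for
# M = G^0, M* = G^1 and f = sin, A = h = 1.
import cmath
import math

P = 400
def omega_G1(t):
    return max(p * math.log(t) - math.lgamma(p + 1) for p in range(P)) if t > 0 else 0.0

for r in [0.5, 1.0, 3.0, 10.0, 25.0]:
    bound = 2.0 * math.exp(omega_G1(2.0 * r))
    for k in range(8):
        z = r * cmath.exp(1j * 2.0 * math.pi * k / 8.0)        # sample points with |z| = r
        assert abs(cmath.sin(z)) <= bound, (z, abs(cmath.sin(z)), bound)

print("Roumieu growth bound verified on the sampled points")
```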
### Comparison of \(\mathcal{H}^{\infty}_{\underline{\mathcal{M}^{*}}_{\mathfrak{c}}}\) and \(\mathcal{H}^{\infty}_{\underline{\mathcal{M}^{*}}^{\mathfrak{c}}}\) (resp. \(\mathcal{H}^{\infty}_{\overline{\mathcal{M}^{*}}_{\mathfrak{c}}}\) and \(\mathcal{H}^{\infty}_{\overline{\mathcal{M}^{*}}^{\mathfrak{c}}}\))
Let us quickly recall a recent result characterizing the equality of the two different types of weighted spaces of entire functions, see [20, Thm. 5.4]. To this end we need one more condition for \(M\):
\[\exists L\in\mathbb{N}_{>0}:\quad\liminf_{j\to+\infty}\frac{(M_{ Lj})^{1/(Lj)}}{(M_{j})^{1/j}}>1. \tag{14}\]
In [23, Thm. 3.1] it has been shown that \(M\in\mathcal{LC}\) has (14) if and only if
\[\omega_{M}(2t)=O(\omega_{M}(t))\text{ as }t\to+\infty. \tag{15}\]
**Lemma 3.6**.: _Let \(M\in\mathcal{LC}\). Then the following statements are equivalent:_
1. \(M\) _has_ (mg) _and satisfies (_14_),_
2. \(\mathcal{H}^{\infty}_{\underline{\mathcal{M}}_{\mathfrak{c}}}(\mathbb{C},H)\cong\mathcal{H}^{\infty}_{\underline{\mathcal{M}}^{\mathfrak{c}}}(\mathbb{C},H)\),
3. \(\mathcal{H}^{\infty}_{\overline{\mathcal{M}}_{\mathfrak{c}}}(\mathbb{C},H)\cong\mathcal{H}^{\infty}_{\overline{\mathcal{M}}^{\mathfrak{c}}}(\mathbb{C},H)\).
Proof.: In [20], the result is shown for \(H=\mathbb{C}\). In order to get that \((i)\) implies \((ii)\) and \((iii)\) the proof of [20] can be repeated and only the appearances of \(|\cdot|\) (the absolute value in \(\mathbb{C}\)) have to be substituted by \(\|\cdot\|\) (the norm in the Hilbert space \(H\)).
In order to get the other implications, i.e. that \((ii)\) resp. \((iii)\) implies \((i)\), note that the respective equality in the Hilbert space-valued case implies the equality for the \(\mathbb{C}\)-valued case by observing that \(f\in\mathcal{H}_{v}^{\infty}(\mathbb{C})(=\mathcal{H}_{v}^{\infty}(\mathbb{C},\mathbb{C}))\) if and only if for any \(0\neq x\in H\) we have \(z\mapsto f(z)x\in\mathcal{H}_{v}^{\infty}(\mathbb{C},H)\). Therefore we may apply the result from [20] and infer \((i)\).
Together with results from Section 2.6, we derive the following.
**Corollary 3.7**.: _Let \(M\in\mathbb{R}_{>0}^{\mathbb{N}}\) be given and assume the following:_
1. \(M\) _is log-convex with_ \(1=M_{0}=M_{1}\) _(i.e. both normalization and_ \(1=M_{0}\geq M_{1}\)_),_
2. \(\lim_{p\to+\infty}m_{p}^{1/p}=0\)_,_
3. \(m\) _is log-concave, and finally_
4. _for some_ \(Q\in\mathbb{N}_{\geq 2}\) _we have_ \(\liminf_{p\to+\infty}\frac{\mu_{p}}{\mu_{Qp}}>\frac{1}{Q}\)
_Then_
\[\mathcal{H}^{\infty}_{\underline{\mathcal{M}^{*}}_{\mathfrak{c}}}(\mathbb{C},H)\cong\mathcal{H}^{\infty}_{\underline{\mathcal{M}^{*}}^{\mathfrak{c}}}(\mathbb{C},H),\quad\mathcal{H}^{\infty}_{\overline{\mathcal{M}^{*}}_{\mathfrak{c}}}(\mathbb{C},H)\cong\mathcal{H}^{\infty}_{\overline{\mathcal{M}^{*}}^{\mathfrak{c}}}(\mathbb{C},H),\]
_and \(E\) is an isomorphism between \(\mathcal{E}_{\{M\}}(I,H)\) and \(\mathcal{H}^{\infty}_{\underline{\mathcal{M}^{*}}_{\mathfrak{c}}}(\mathbb{C},H)\) resp. between \(\mathcal{E}_{(M)}(I,H)\) and \(\mathcal{H}^{\infty}_{\overline{\mathcal{M}^{*}}_{\mathfrak{c}}}(\mathbb{C},H)\)._
Proof.: By \((v)\) in Section 2.6 it follows that \(M^{*}\) has (mg). By \((vii)\) from Section 2.6 we infer that \(M^{*}\) has (\(\beta_{3}\)) and thus [23, Prop. 3.4] gives that \(M^{*}\) has (14). Finally observe that \(M^{*}\in\mathcal{L}\mathcal{C}\): \(\lim_{p\to+\infty}m_{p}^{1/p}=0\) implies \((M^{*}_{p})^{1/p}\to+\infty\) (see \((iii)\) in Section 2.6), log-convexity of \(M^{*}\) follows from log-concavity of \(m\) (see \((iv)\) in Section 2.6) and normalization of \(M^{*}\) is immediate. Thus we may apply Lemma 3.6 to \(M^{*}\). The rest follows from Theorem 3.4.
**Remark 3.8**.: Observe that the conditions of Lemma 3.6 hold if and only if \(\mathcal{E}_{[M^{*}]}\cong\mathcal{E}_{[\omega_{M^{*}}]}\), cf. [2, Thm. 14], [17, Sect. 5] and [23, Prop. 3.4].
Note also that Corollary 3.7 applies, in particular, to all small Gevrey sequences \(G^{\alpha}\), \(0\leq\alpha<1\), see the next Section for its importance.
### A result by Markin as a Corollary of Theorem 3.4
One of Markin's core results in [15], Lemma 3.1, shows, in our setting, the following: For any \(\alpha\in[0,1)\) and \(M^{\alpha}_{j}:=j^{j\alpha}\), which is equivalent to \(G^{\alpha}_{j}=j!^{\alpha}\) (i.e. the small Gevrey sequence of order \(\alpha\)), and with \(v(t):=e^{-t^{1/(1-\alpha)}}\) we obtain that
\[E:\mathcal{E}_{\{G^{\alpha}\}}(I,H)\to\mathcal{H}^{\infty}_{\underline{ \mathcal{V}}}(\mathbb{C},H)\]
is an isomorphism of locally convex vector spaces; and mutatis mutandis the same holds in the respective Beurling case. With our preparation, this now is a corollary of Theorem 3.4 together with the following observations:
* Corollary 3.7 applies to \(M=G^{\alpha}\),
* \((G^{\alpha})^{*}=G^{1-\alpha}\),
* \(\omega_{G^{1-\alpha}}\cong t^{\frac{1}{1-\alpha}}\), i.e. \(\omega_{G^{1-\alpha}}(t)=O(t^{\frac{1}{1-\alpha}}),\ t^{\frac{1}{1-\alpha}}=O( \omega_{G^{1-\alpha}}(t))\) as \(t\to+\infty\).
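The last item can be observed numerically: for \(\alpha=1/2\) the quotient \(\omega_{G^{1-\alpha}}(t)/t^{1/(1-\alpha)}\) stabilizes near a positive constant as \(t\) grows. The following sketch (no sharp constants claimed; finite index range assumed) illustrates this.

```python
# Quick numerical illustration: omega_{G^{1-alpha}}(t) grows like t^{1/(1-alpha)}.
import math

P = 3000
alpha = 0.5
s = 1.0 - alpha                                                # index of G^{1-alpha}

def omega(t):
    return max(p * math.log(t) - s * math.lgamma(p + 1) for p in range(P)) if t > 0 else 0.0

for t in [2.0, 5.0, 10.0, 20.0, 40.0]:
    print(f"t = {t:5.1f}   omega(t) / t^(1/(1-alpha)) = {omega(t) / t ** (1.0 / s):.4f}")
```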
### Characterization of inclusion relations for small weight sequences
In the theory of ultradifferentiable functions one studies the characterization of the inclusion \(\mathcal{E}_{[M]}\subseteq\mathcal{E}_{[N]}\) in terms of a growth relation between \(M\) and \(N\). Summarizing, we get the following, see e.g. [17, Prop. 2.12] and the literature citations there; similar techniques have also been applied to the more general and recent approaches in [17, Prop. 4.6] and [6, Sect. 4]:
* If \(M,N\in\mathbb{R}^{\mathbb{N}}_{>0}\) with \(M{\preccurlyeq}N\), then \(\mathcal{E}_{\{M\}}\subseteq\mathcal{E}_{\{N\}}\) and \(\mathcal{E}_{(M)}\subseteq\mathcal{E}_{(N)}\) with continuous inclusion.
* If in addition \(M\) is normalized and log-convex, then \(\mathcal{E}_{\{M\}}(\mathbb{R})\subseteq\mathcal{E}_{\{N\}}(\mathbb{R})\) (as sets) yields \(M{\preccurlyeq}N\). If \(M,N\in\mathcal{L}\mathcal{C}\), then \(\mathcal{E}_{(M)}(\mathbb{R})\subseteq\mathcal{E}_{(N)}(\mathbb{R})\) (as sets and/or with continuous inclusion, see the proof of [17, Prop. 4.6] and [6, Prop. 4.5, Rem. 4.6]) yields \(M{\preccurlyeq}N\).
Thus for the necessity of \(M{\preccurlyeq}N\) standard regularity and growth assumptions for \(M\) are required, and so far it is not known what can be said for (small) sequences \(M\) "beyond" this setting. Via an application of Theorem 3.4 and the main results from [20], we may now prove an analogous statement as a corollary.
First let us recall [20, Thm. 3.14], where the following characterization is shown (even under formally slightly more general assumptions on the weight \(N\), see also [20, Rem. 2.6]).
**Theorem 3.9**.: _Let \(N\in\mathcal{LC}\) and \(M\in\mathbb{R}^{\mathbb{N}}_{>0}\) such that \(M\) is satisfying \(M_{0}=1\) and \(\lim_{p\to+\infty}(M_{p})^{1/p}=+\infty\). Then the following are equivalent:_
1. _We have_ \(N{\preccurlyeq}M\)_._
2. _We have_ \[\mathcal{H}^{\infty}_{\underline{\mathcal{M}}_{\mathfrak{c}}}(\mathbb{C})\subseteq\mathcal{H}^{\infty}_{\underline{\mathcal{N}}_{\mathfrak{c}}}(\mathbb{C}).\]
3. _We have_ \[\mathcal{H}^{\infty}_{\overline{\mathcal{M}}_{\mathfrak{c}}}(\mathbb{C})\subseteq\mathcal{H}^{\infty}_{\overline{\mathcal{N}}_{\mathfrak{c}}}(\mathbb{C}).\]
Thus, by combining Theorem 3.4 and Theorem 3.9, which we apply to \(N^{*}\) and \(M^{*}\), we get the following:
**Theorem 3.10**.: _Let \(M,N\in\mathbb{R}^{\mathbb{N}}_{>0}\) be given and assume that_
### Solutions for bounded operators
First let us recall quickly the situation for bounded operators \(A\). For those, the domain is all of \(H\). It is a classical result in this context that every solution \(y\) of (16) is of the form
\[y(t)=e^{tA}y_{0},\]
for some \(y_{0}\in H\), where \(e^{tA}:=\sum_{k=0}^{+\infty}\frac{t^{k}}{k!}A^{k}\) and which converges locally uniformly (with respect to \(t\)) in the norm topology on \(B(H)\) (the space of bounded operators on \(H\)). Moreover, \(y\) can be extended to an entire function such that
\[\|y(z)\|\leq Me^{C|z|}\]
for some constants \(M\) and \(C\) and all \(z\in\mathbb{C}\). Thus we may conclude the subsequent statement.
* If \(A\) is a bounded operator on \(H\), then _each_ solution \(y\) of (16) is an entire function of exponential type.
On the other hand we have the following:
* As outlined by M. Markin in [13], [14] and [15], there exists an unbounded normal operator \(A\) (i.e. \(A\) is not bounded on \(H\)) such that each (weak) solution of (16) is an entire function.
### Motivating question
So one may ask whether one can reverse the implication in \((i)\), and if this is possible to what extent one can weaken the assumption of exponential type. From \((ii)\) it is clear that one cannot get completely rid of any additional growth restriction!
Markin does exactly that in [15]. Let us first recall his approach and then considerably extend it.
### A generalization of Markin's results
The main result [15, Thm. 5.1] states that if _each_ weak solution of (16) is in some _small_ Gevrey class, i.e. admits a growth restriction expressed in terms of \(G^{\alpha}\) with \(\alpha<1\), then the operator \(A\) is necessarily bounded on \(H\). This is of special interest since, as outlined in Section 3.2, every small Gevrey class can be identified with a weighted class of entire functions.
Before we are able to generalize Markin's result we need some definitions: For a densely defined operator \(A\) on \(H\), we first set
\[C^{\infty}(A):=\bigcap_{n\in\mathbb{N}}D(A^{n}),\]
where \(D(A^{n})\) is the domain of \(A^{n}\), the \(n\)-fold iteration of \(A\). Then put
\[\mathcal{E}_{\{M\}}(A):=\{f\in C^{\infty}(A):\ \ \exists C,h>0\ \ \forall n\in \mathbb{N}\ \|A^{n}f\|\leq Ch^{n}M_{n}\},\]
and the Beurling class is again defined by interchanging the existential quantifier in front of \(h\) by the universal quantifier.
From [4, Sect. 1.3] a different description of \(\mathcal{E}_{\{M\}}(A)\) in terms of \(E_{A}\), the spectral measure associated to \(A\), can be deduced as follows:
\[\mathcal{E}_{\{M\}}(A)=\{f\in H:\ \ \exists t>0\ \int_{\mathbb{C}}e^{2\omega_{ M}(t|\lambda|)}\langle dE_{A}(\lambda)f,f\rangle<+\infty\},\]
and
\[\mathcal{E}_{(M)}(A)=\{f\in H:\ \ \forall t>0\ \int_{\mathbb{C}}e^{2\omega_{ M}(t|\lambda|)}\langle dE_{A}(\lambda)f,f\rangle<+\infty\}.\]
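To make these two descriptions concrete, the following sketch considers a toy example chosen only for illustration (and not taken from [4]): the diagonal operator \(Ae_{k}=ke_{k}\) on \(\ell^{2}\), whose spectral measure for \(f=(f_{k})_{k}\) is the discrete measure \(\sum_{k}|f_{k}|^{2}\delta_{k}\). For \(M=G^{1}\) and \(f_{k}=e^{-k}\) it checks, on finite truncations, both the derivative-type condition \(\|A^{n}f\|\leq Ch^{n}n!\) and the behaviour of the spectral series.

```python
# Toy example (illustration only): diagonal operator A e_k = k e_k on l^2,
# f_k = e^{-k}, M = G^1.  The two descriptions of E_{{M}}(A) become
#   (a)  ||A^n f|| <= C h^n n!                              for some C, h > 0,
#   (b)  sum_k e^{2 omega_{G^1}(t k)} |f_k|^2 < +infinity   for some t > 0.
import math

K = 400                                                        # truncation of l^2
log_f2 = [-2.0 * k for k in range(1, K + 1)]                   # log |f_k|^2 for f_k = e^{-k}

# (a):  ||A^n f||^2 = sum_k k^{2n} |f_k|^2 ;  the quotient ||A^n f||/n! stays bounded (h = 1)
for n in range(0, 41, 10):
    log_terms = [2 * n * math.log(k) + log_f2[k - 1] for k in range(1, K + 1)]
    log_norm = 0.5 * math.log(math.fsum(math.exp(x) for x in log_terms))
    print(f"n = {n:2d}   ||A^n f|| / n! = {math.exp(log_norm - math.lgamma(n + 1)):.4f}")

# (b):  the terms of the spectral series for t = 0.4 eventually decay geometrically
P = 800
def omega_G1(t):
    return max(p * math.log(t) - math.lgamma(p + 1) for p in range(P)) if t > 0 else 0.0

log_terms_b = [2.0 * omega_G1(0.4 * k) + log_f2[k - 1] for k in range(1, K + 1)]
assert all(log_terms_b[k] < log_terms_b[k - 1] for k in range(K // 2, K))
print("largest log-term of the spectral series (b):", round(max(log_terms_b), 3))
```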
Now we have the following result which generalizes [14, Thm. 3.1].
**Theorem 4.1**.: _Let \(M\in\mathbb{R}^{\mathbb{N}}_{>0}\) be given and \(I\subseteq\mathbb{R}\) a closed interval. Then a solution \(y\) of (16) belongs to \(\mathcal{E}_{[M]}(I,H)\) if and only if \(y(t)\in\mathcal{E}_{[M]}(A)\) for all \(t\in I\). In this case one has \(y^{(n)}(t)=A^{n}y(t)\) for all \(t\in I\)._
Proof.: Let \(y\) be a solution of (16) such that \(y\in\mathcal{E}_{[M]}(I,H)\). Since \(y\in C^{\infty}(I,H)\), we have by [13, Prop. 4.1] that \(y^{(n)}(t)=A^{n}y(t)\) for all \(t\in I\) and all \(n\in\mathbb{N}\). Therefore
\[\|A^{n}y(t)\|=\|y^{(n)}(t)\|\leq Ch^{n}M_{n},\]
where \(h\) is quantified existentially (Roumieu case) or universally (Beurling case). This immediately gives that \(y(t)\in\mathcal{E}_{[M]}(A)\) for all \(t\).
For the converse direction, we argue as in [14] where it is shown that in this case for any subinterval \([a,b]\subseteq I\)
\[\max_{t\in[a,b]}\|y^{(n)}(t)\|\leq\|y^{(n)}(a)\|+\|y^{(n)}(b)\|.\]
Since again we have \(y^{(n)}(t)=A^{n}y(t)\) this immediately yields \(y\in\mathcal{E}_{[M]}(I,H)\).
We need one more result generalizing [15, Lemma 4] which reads as follows.
**Lemma 4.2**.: _Let \(0<\beta<+\infty\). If_
\[\bigcup_{0<\beta^{\prime}<\beta}\mathcal{E}^{\{\beta^{\prime}\}}(A)=\mathcal{ E}^{(\beta)}(A),\]
_then the operator \(A\) is bounded._
Here \(\mathcal{E}^{[\beta]}(A)\) stands for \(\mathcal{E}_{[G^{\beta}]}(A)\); i.e. the respective Gevrey class of order \(\beta\). Since we have a generalization of [15, Thm. 5.1] as our goal, we only need a generalization of the above Lemma in the case \(\beta=1\). So we want to conclude that an operator \(A\) on a Hilbert space \(H\) is bounded if we can write the _entire_ functions corresponding to \(A\) (i.e. \(\beta=1\)) as a union of certain smaller Roumieu classes.
Summarizing, our generalization of Markin's result reads as follows.
**Lemma 4.3**.: _Let \(\mathfrak{F}\subseteq\mathcal{LC}\) be a family of sequences such that_
\[\forall\;N\in\mathfrak{F}\;\exists\;M\in\mathfrak{F}:\quad\omega_{M}(2t)=O( \omega_{N}(t))\text{ as }t\to+\infty, \tag{17}\]
_i.e. a mixed version of (15) (of Roumieu-type, see [8, Sect. 3])._
_Suppose there exists \(\mathbf{a}=(a_{j})\in\mathbb{R}^{\mathbb{N}}_{>0}\) with the following properties:_
* _we have_ \(\lim_{j\to+\infty}a_{j}^{1/j}=0\)_,_
* \(\mathbf{a}\) _is a uniform bound for_ \(\mathfrak{F}\) _which means that_ \[\forall\;N\in\mathfrak{F}\;\exists\;C>0\;\forall\;j\in\mathbb{N}:\quad(N_{j}/j!=)n_{j}\leq Ca_{j}.\]
_Then_
\[\bigcup_{N\in\mathfrak{F}}\mathcal{E}_{\{N\}}(A)=\mathcal{E}_{(G^{1})}(A) \text{ as sets}\]
_implies that \(A\) is bounded._
**Remark 4.4**.: We gather some comments concerning the previous result:
* By choosing \(a_{j}=\frac{1}{\log(j)^{j}}\), Lemma 4.3 includes Lemma 4.2 (with \(\beta=1\)) as a special case.
* Requirements \((i)\) and \((ii)\) in Lemma 4.3 imply that \(\lim_{j\to+\infty}n_{j}^{1/j}=0\) for all \(N\in\mathfrak{F}\).
* If each \(N\in\mathfrak{F}\) satisfies (14), then (17) follows with \(M=N\).
* In [8, Thm. 3.2] condition (17) has been characterized for one-parameter families (weight matrices, see [8, Sect. 2.5]) in terms of the following requirement: \[\exists\;r>1\;\forall\;N\in\mathfrak{F}\;\exists\;M\in\mathfrak{F}\;\exists\;L \in\mathbb{N}_{>0}:\quad\liminf_{j\to+\infty}\frac{(M_{Lj})^{1/(Lj)}}{(N_{j})^ {1/j}}>r,\] i.e. a mixed version of (14).
Actually we show now that, if \(\mathfrak{F}\) consists of a one-parameter family of sequences satisfying some rather mild regularity and growth properties, then it is already possible to find some sequence \(\mathbf{a}\) as required in Lemma 4.3.
**Proposition 4.5**.: _Let \(\mathfrak{F}:=\{N^{(\beta)}\in\mathbb{R}_{>0}^{\mathbb{N}}:\beta>0\}\) be a one-parameter family of sequences \(N^{(\beta)}\) satisfying the following properties:_
* \(N_{0}^{(\beta)}=1\) _for all_ \(\beta>0\) _(normalization),_
* \(N^{(\beta_{1})}\leq N^{(\beta_{2})}\Leftrightarrow n^{(\beta_{1})}\leq n^{( \beta_{2})}\) _for all_ \(0<\beta_{1}\leq\beta_{2}\) _(point-wise order),_
* \(\lim_{j\to+\infty}(n_{j}^{(\beta)})^{1/j}=0\) _for each_ \(\beta>0\)_,_
* \(j\mapsto(n_{j}^{(\beta)})^{1/j}\) _is non-increasing for every_ \(\beta>0\)_,_
* \(\lim_{j\to+\infty}\left(\frac{N_{j}^{(\beta_{2})}}{N_{j}^{(\beta_{1})}}\right) ^{1/j}=\lim_{j\to+\infty}\left(\frac{n_{j}^{(\beta_{2})}}{n_{j}^{(\beta_{1})}} \right)^{1/j}=+\infty\) _for all_ \(0<\beta_{1}<\beta_{2}\) _(large growth difference between the sequences)._
_Then there exists \(\mathbf{a}=(a_{j})_{j}\in\mathbb{R}_{>0}^{\mathbb{N}}\) such that_
* \(j\mapsto(a_{j})^{1/j}\) _is non-increasing,_
* \((a_{j})^{1/j}\to 0\) _as_ \(j\to+\infty\)_, and_
* \(\lim_{j\to+\infty}\left(\frac{a_{j}}{n_{j}^{(\beta)}}\right)^{1/j}=+\infty\) _for all_ \(\beta>0\)_._
_In particular, this implies that there exists a uniform sequence/bound \(\mathbf{a}\) for \(\mathfrak{F}\) as required in Lemma 4.3._
_In addition, the family \(\mathfrak{F}\) satisfies (17)._
_Note:_
* Requirement \((iv)\) is weaker than assuming log-concavity for each \(n^{(\beta)}\): Together with \((i)\), i.e. \(n_{0}^{(\beta)}=1\) (for each \(\beta\)), log-concavity implies \((iv)\); see \((iv)\) in Section 2.6.
* Moreover, if \((iv)\) is replaced by assuming that each \(n^{(\beta)}\) is log-concave and \((i)\) by the slightly stronger requirement \(n_{1}^{(\beta)}\leq n_{0}^{(\beta)}=1\) (for each \(\beta\)), then in view of Theorem 3.10 we see that \((iii)\) and \((v)\) together yield \[\forall\;0<\beta_{1}<\beta_{2}:\quad\mathcal{E}_{[N^{(\beta_{1})}]}\subsetneq\mathcal{E}_{[N^{(\beta_{2})}]}.\]
* In any case, \((v)\) implies that the sequences are pair-wise not equivalent.
* Finally, property \((v)\) alone is sufficient in order to have (17) for \(\mathfrak{F}\).
Proof.: Put \(j_{1}:=1\) and for \(k\in\mathbb{N}_{>0}\) set \(j_{k+1}\) to be the smallest integer \(j_{k+1}>j_{k}\) with
\[(n_{j_{k}}^{(k)})^{1/j_{k}}>k(n_{j_{k+1}}^{(k+1)})^{1/j_{k+1}},\]
see properties \((ii),(iii),\)\((iv),\) and such that for all \(j\geq j_{k+1}\) and all \(k\) we get (by property \((v)\))
\[\frac{(n_{j}^{(k+1)})^{1/j}}{(n_{j}^{(k)})^{1/j}}\geq k.\]
Now put \(a_{0}:=1\) and, for \(j_{k}\leq j<j_{k+1}\), we set
\[(a_{j})^{1/j}:=(n_{j_{k}}^{(k)})^{1/j_{k}}.\]
Thus we have by definition that \(j\mapsto(a_{j})^{1/j}\) is non-increasing and tending to \(0\).
Finally, let \(k_{0}\in\mathbb{N}_{>0}\) be given (and from now on fixed). For \(j\geq j_{k_{0}+1}\) we can find \(k\geq k_{0}\) such that \(j_{k+1}\leq j<j_{k+2}\). Thus, in this situation we can estimate as follows:
\[\frac{a_{j}^{1/j}}{(n_{j}^{(k_{0})})^{1/j}}=\frac{(n_{j_{k+1}}^{(k+1)})^{1/j_ {k+1}}}{(n_{j}^{(k_{0})})^{1/j}}\geq\frac{(n_{j_{k+1}}^{(k+1)})^{1/j_{k+1}}}{( n_{j}^{(k)})^{1/j}}\geq\frac{(n_{j}^{(k+1)})^{1/j}}{(n_{j}^{(k)})^{1/j}}\geq k \rightarrow+\infty,\]
as \(j\rightarrow+\infty\). The second inequality follows from the fact that \(j\mapsto(n_{j}^{(k+1)})^{1/j}\) is non-increasing (property \((iv)\)). By the point-wise order for any \(\beta>0\) we can find some \(k_{0}\in\mathbb{N}_{>0}\) such that \(\frac{a_{j}^{1/j}}{(n_{j}^{(\beta)})^{1/j}}\geq\frac{a_{j}^{1/j}}{(n_{j}^{(k_ {0})})^{1/j}}\) for all \(j\geq 1\) and hence the last desired property for \(\mathbf{a}\) is verified.
Concerning (17), we note that by \((v)\) we get \(2^{j}N_{j}^{(\beta_{1})}\leq N_{j}^{(\beta_{2})}\) for all \(0<\beta_{1}<\beta_{2}\) and all \(j\) sufficiently large. Consequently,
\[\forall\;0<\beta_{1}<\beta_{2}\;\exists\;C\geq 1\;\forall\;j\in\mathbb{N}: \quad 2^{j}N_{j}^{(\beta_{1})}\leq CN_{j}^{(\beta_{2})},\]
which yields by definition of associated weights \(\omega_{N^{(\beta_{2})}}(2t)\leq\omega_{N^{(\beta_{1})}}(t)+\log(C)\) for all \(t\geq 0\). This verifies (17) for \(\mathfrak{F}\).
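The diagonal construction in the proof can be carried out explicitly for a simple one-parameter family. The sketch below uses the toy family \(n^{(\beta)}_{j}:=e^{-j^{2}/\beta}\), \(\beta\in\mathbb{N}_{>0}\), chosen only for this illustration (it satisfies \((i)\)-\((v)\) and keeps all quantities computable), determines the indices \(j_{k}\) and the sequence \(\mathbf{a}\), and prints \((a_{j})^{1/j}\) together with its ratio against \((n^{(1)}_{j})^{1/j}\).

```python
# Sketch of the diagonal construction for the toy family n^{(beta)}_j = exp(-j^2/beta).
import math

def root_n(beta, j):
    # (n^{(beta)}_j)^{1/j} = exp(-j/beta), with the convention 1 for j = 0
    return math.exp(-j / beta) if j > 0 else 1.0

K = 8                                                          # number of construction steps
j_idx = [1]                                                    # j_1 = 1
for k in range(1, K):
    j = j_idx[-1] + 1
    # smallest j > j_k with  (n^{(k)}_{j_k})^{1/j_k} > k (n^{(k+1)}_j)^{1/j}
    # and (n^{(k+1)}_i / n^{(k)}_i)^{1/i} >= k for all i >= j  (here: j >= k(k+1) log k)
    while not (root_n(k, j_idx[-1]) > k * root_n(k + 1, j) and j >= k * (k + 1) * math.log(k)):
        j += 1
    j_idx.append(j)

def root_a(j):
    # (a_j)^{1/j} := (n^{(k)}_{j_k})^{1/j_k}  for  j_k <= j < j_{k+1}
    k = max(i for i in range(len(j_idx)) if j_idx[i] <= j) + 1
    return root_n(k, j_idx[k - 1])

print("j_k:", j_idx)
for j in range(1, j_idx[-1] + 1, max(1, j_idx[-1] // 10)):
    print(f"j = {j:4d}   a_j^(1/j) = {root_a(j):.4f}   "
          f"a_j^(1/j) / (n^(1)_j)^(1/j) = {root_a(j) / root_n(1, j):.3e}")
```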
**Remark 4.6**.: The previous result shows that any family \(\mathfrak{F}\subseteq\mathcal{LC}\) that can be parametrized to satisfy \((ii)-(v)\) from Proposition 4.5 is already uniformly bounded by some sequence \(\mathbf{a}\).
Consequently, in this case the assumptions \((i)\) and \((ii)\) from Lemma 4.3 on the existence of \(\mathbf{a}\) are superfluous and also assumption (17) for \(\mathfrak{F}\) holds true automatically.
Before we can give the proof of Lemma 4.3, we need one more technical lemma as preparation.
**Lemma 4.7**.: _Let \(\mathbf{a}=(a_{j})_{j}\in\mathbb{R}_{>0}^{\mathbb{N}}\) with \(a_{j}^{1/j}\to 0\) be given. Then there exists a function \(g=g_{\mathbf{a}}:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}\) with the following properties:_
* \(g_{\mathbf{a}}(t)\rightarrow+\infty\) _as_ \(t\rightarrow+\infty\)_._
* _For all_ \(N\in\mathcal{LC}\) _such that_ \(n_{j}\leq Da_{j}\) _(for some_ \(D=D(N)>0\) _and all_ \(j\in\mathbb{N}\)_), and all_ \(d,s>0\) _we have that_ \[s\omega_{N}(t/2)-dg_{\mathbf{a}}(t)t\rightarrow+\infty,\;\;\;t\rightarrow+\infty.\]
Proof.: Observe that
\[\omega_{N}(t)\geq\sup_{k\in\mathbb{N}}\log\frac{t^{k}}{Da_{k}k!}\geq\log\left( \frac{1}{2D}\sum_{k=0}^{+\infty}\frac{(t/2)^{k}}{a_{k}k!}\right)=:h_{\mathbf{a} }(t)-\log(2D).\]
It is clear from the definition that \(h_{\mathbf{a}}\) is non-decreasing. From the assumption \(a_{j}^{1/j}\to 0\), it follows that for every \(R>0\) there exists \(C\in\mathbb{R}\) such that for all \(t>0\) we have
\[h_{\mathbf{a}}(t)\geq C+Rt. \tag{18}\]
This estimate follows since for every (small) \(\varepsilon>0\) there exists \(B>0\) such that \(a_{k}\leq B\varepsilon^{k}\) for all \(k\in\mathbb{N}\); and therefore
\[\log\left(\sum_{k=0}^{+\infty}\frac{(t/2)^{k}}{a_{k}k!}\right)\geq\frac{t}{2 \varepsilon}-\log(B),\]
which gives (18).
Let us set \(f_{\mathbf{a}}(t):=\frac{h_{\mathbf{a}}(t/2)}{t}\), then, by (18), \(f_{\mathbf{a}}(t)\to+\infty\) as \(t\to+\infty\). Finally set \(g_{\mathbf{a}}:=\sqrt{f_{\mathbf{a}}}\) and so \(g_{\mathbf{a}}(t)\to+\infty\) as \(t\to+\infty\). Moreover, we have \(\varepsilon f_{\mathbf{a}}(t)-g_{\mathbf{a}}(t)\to+\infty\) for every \(\varepsilon>0\). Thus for any arbitrary fixed \(s>0\), we get
\[s\omega_{N}(t/2)-g_{\mathbf{a}}(t)t\geq sh_{\mathbf{a}}(t/2)-s\log(2D)-g_{ \mathbf{a}}(t)t=t(sf_{\mathbf{a}}(t)-g_{\mathbf{a}}(t))-s\log(2D)\to+\infty, \tag{19}\]
as \(t\to+\infty\). This shows the statement for \(d=1\). For \(d\neq 1\), the result simply follows by choosing \(s/d\) in (19).
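The divergence established in Lemma 4.7 can also be observed numerically. The sketch below (illustration only) uses the admissible choice \(a_{j}:=\log(j+2)^{-j}\), the sequence \(N=G^{1/2}\), and the parameters \(s=1/4\), \(d=2\); all suprema and series are evaluated over finite ranges, and the hypothesis \(n_{j}\leq Da_{j}\) is only checked on the tested range.

```python
# Numerical sketch of Lemma 4.7 with a_j = log(j+2)^{-j} and N = G^{1/2}.
import math

def log_a(j):                                                  # log a_j
    return -j * math.log(math.log(j + 2))

# finite-range check of the hypothesis n_j <= D a_j
D = max(math.exp(-0.5 * math.lgamma(j + 1) - log_a(j)) for j in range(0, 200))
print("finite-range D:", round(D, 3))

def omega_N(x, P=45000):                                       # omega_{G^{1/2}}(x), sup over p < P
    return max(p * math.log(x) - 0.5 * math.lgamma(p + 1) for p in range(P)) if x > 0 else 0.0

def h_a(t, K=6000):                                            # h_a(t) = log sum_k (t/2)^k/(a_k k!)
    lt = [k * math.log(t / 2.0) - log_a(k) - math.lgamma(k + 1) for k in range(1, K)]
    m = max(lt + [0.0])                                        # the k = 0 term equals 1
    return m + math.log(math.fsum(math.exp(x - m) for x in lt) + math.exp(-m))

def g_a(t):
    return math.sqrt(h_a(t / 2.0) / t)                         # g_a = sqrt(f_a), f_a(t) = h_a(t/2)/t

s, d = 0.25, 2.0
for t in [50.0, 100.0, 200.0, 400.0]:
    print(f"t = {t:6.1f}   s*omega_N(t/2) - d*g_a(t)*t = {s * omega_N(t / 2.0) - d * g_a(t) * t:10.1f}")
```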
Proof of Lemma 4.3.: We adapt the proof of [15, Lemma 4]. So assume that the operator \(A\) is actually unbounded. Then the spectrum \(\sigma(A)\) is unbounded as well and so there exists a strictly increasing sequence of natural numbers \(k(n)\) such that
* \(n\leq g_{\mathbf{a}}(k(n))\) (and \(n\leq k(n)\)) for all \(n\in\mathbb{N}_{>0}\),
* in each ring \(\{\lambda\in\mathbb{C}:\ k(n)<|\lambda|<k(n)+1\}\) there is a point \(\lambda_{n}\in\sigma(A)\),
and we can actually find a \(0\)-sequence \(\varepsilon_{n}\) with \(0<\varepsilon_{n}<\min(1/n,\varepsilon_{n-1})\) such that \(\lambda_{n}\) belongs to the ring
\[r_{n}:=\{\lambda\in\mathbb{C}:\ k(n)-\varepsilon_{n}<|\lambda|<k(n)+1- \varepsilon_{n}\}.\]
As in Markin's proof, the subspaces \(E_{A}(r_{n})H\) are non-trivial and pairwise orthogonal. Thus in each of those spaces we may choose a non-trivial element \(e_{n}\) such that
\[e_{n}=E_{A}(r_{n})e_{n},\quad\langle e_{i},e_{j}\rangle=\delta_{i,j}.\]
Now we define
\[f:=\sum_{n=1}^{+\infty}g_{\mathbf{a}}(k(n))^{-(k(n)+1-\varepsilon_{n})}e_{n}.\]
As in [15], the sequence of coefficients belongs to \(\ell^{2}\), and
\[E_{A}(r_{n})f=g_{\mathbf{a}}(k(n))^{-(k(n)+1-\varepsilon_{n})}e_{n},\quad E_{ A}(\bigcup_{n\in\mathbb{N}_{>0}}r_{n})f=f.\]
Moreover, for every \(t>0\), we have
\[\int_{\mathbb{C}}e^{2t|\lambda|}d\langle E_{A}(\lambda)f,f\rangle=\int_{\mathbb{C}}e^{2t|\lambda|}d\langle E_{A}(\lambda)E_{A}(\bigcup_{n\in\mathbb{N}_{>0}}r_{n})f,E_{A}(\bigcup_{n\in\mathbb{N}_{>0}}r_{n})f\rangle=\sum_{n=1}^{\infty}\int_{r_{n}}e^{2t|\lambda|}d\langle E_{A}(\lambda)f,f\rangle\] \[=\sum_{n=1}^{\infty}\int_{r_{n}}e^{2t|\lambda|}d\langle E_{A}(\lambda)E_{A}(r_{n})f,E_{A}(r_{n})f\rangle=\sum_{n=1}^{\infty}g_{\mathbf{a}}(k(n))^{-2(k(n)+1-\varepsilon_{n})}\int_{r_{n}}e^{2t|\lambda|}d\langle E_{A}(\lambda)e_{n},e_{n}\rangle\] \[\leq\sum_{n=1}^{\infty}e^{-2\log(g_{\mathbf{a}}(k(n)))(k(n)+1-\varepsilon_{n})}e^{2t(k(n)+1-\varepsilon_{n})}\underbrace{\|E_{A}(r_{n})e_{n}\|^{2}}_{=1}=\sum_{n=1}^{+\infty}e^{-2(\log(g_{\mathbf{a}}(k(n)))-t)(k(n)+1-\varepsilon_{n})}<+\infty,\]
where we used in the first inequality that for \(\lambda\in r_{n}\) we have \(|\lambda|\leq k(n)+1-\varepsilon_{n}\), and in the final inequality that \(g_{\mathbf{a}}\) tends to infinity and that \(k(n)\geq n\). Thus we have shown that \(f\in\mathcal{E}_{(G^{1})}(A)\).
Moreover, in analogy to [15], and by a similar reasoning as above, we get for all \(N\in\mathfrak{F}\) and \(t>0\)
\[\int_{\mathbb{C}}e^{2\omega_{N}(t|\lambda|)}d\langle E_{A}(\lambda)f,f\rangle =\sum_{n=1}^{\infty}g_{\mathbf{a}}(k(n))^{-2(k(n)+1-\varepsilon_{n})}\int_{ r_{n}}e^{2\omega_{N}(t|\lambda|)}d\langle E_{A}(\lambda)e_{n},e_{n}\rangle. \tag{20}\]
Next we observe that for \(\lambda\in r_{n}\) we have \(\omega_{N}(t|\lambda|)\geq\omega_{N}(t(k(n)-\varepsilon_{n}))\geq\omega_{N}(t (k(n)-1))\). We continue to estimate the right hand side of (20) and infer
\[\int_{\mathbb{C}}e^{2\omega_{N}(t|\lambda|)}d\langle E_{A}(\lambda)f,f\rangle \geq\sum_{n=1}^{\infty}g_{\mathbf{a}}(k(n))^{-2(k(n)+1-\varepsilon _{n})}e^{2\omega_{N}(t(k(n)-1))}\underbrace{\int_{r_{n}}d\langle E_{A}( \lambda)e_{n},e_{n}\rangle}_{=1}\] \[\geq\sum_{n=1}^{\infty}e^{2(\omega_{N}(t(k(n)-1))-\log(g_{ \mathbf{a}}(k(n)))(k(n)+1))}.\]
By iterating (17) there exist \(M\in\mathfrak{F}\), \(s>0\) (small) and \(C>0\) (large) such that for all \(\lambda\in\mathbb{C}\)
\[\omega_{N}(t|\lambda|)\geq s\omega_{M}(|\lambda|)-C,\]
which allows us to continue the estimate and get
\[\int_{\mathbb{C}}e^{2\omega_{N}(t|\lambda|)}d\langle E_{A}(\lambda)f,f\rangle \geq\sum_{n=1}^{\infty}e^{2(s\omega_{M}((k(n)-1))-C-\log(g_{\mathbf{a}}(k(n))) (k(n)+1))}=+\infty, \tag{21}\]
where the last equality follows from Lemma 4.7 (applied to the sequence \(M\) and \(d=2\)). Thus we infer that \(f\notin\mathcal{E}_{\{N\}}(A)\). Since \(N\in\mathfrak{F}\) has been arbitrary we are done.
Finally we are now in the position to prove our main theorem, a generalization of [15, Thm. 5.1] which reads as follows.
**Theorem 4.8**.: _Suppose there exists \(\mathbf{a}=(a_{j})_{j}\) such that \(a_{j}^{1/j}\to 0\) and a family \(\mathfrak{F}\) of weight sequences as in Lemma 4.3. Assume that for any weak solution \(y\) of (16) on \([0,+\infty)\), there is \(N\in\mathfrak{F}\) such that \(y\in\mathcal{E}_{\{N\}}([0,+\infty),H)\). Then the operator \(A\) is bounded._
Proof.: Let \(y\) be a weak solution of (16). By assumption, there exists \(N\in\mathfrak{F}\) such that \(y\in\mathcal{E}_{\{N\}}([0,+\infty),H)\). By Theorem 4.1, we get that for every \(t\geq 0\), we have
\[y(t)\in\mathcal{E}_{\{N\}}(A),\]
in particular \(y(0)\in\mathcal{E}_{\{N\}}(A)\). Via an application of [13, Thm. 3.1], we infer
\[\bigcap_{t>0}D(e^{tA})\subseteq\bigcup_{N\in\mathfrak{F}}\mathcal{E}_{\{N\}} (A). \tag{22}\]
On the other hand, since
\[\bigcap_{t>0}D(e^{tA})=\bigcap_{t>0}\{f\in H:\ \int_{\mathbb{C}}e^{2t\operatorname{Re}(\lambda)}\langle dE_{A}(\lambda)f,f\rangle<+\infty\},\]
it is clear that
\[\bigcap_{t>0}D(e^{tA})\supseteq\bigcap_{t>0}\{f\in H:\ \int_{\mathbb{C}}e^{2t| \lambda|}\langle dE_{A}(\lambda)f,f\rangle<+\infty\}=\mathcal{E}_{(G^{1})}(A).\]
Together with (22) this yields
\[\bigcup_{N\in\mathfrak{F}}\mathcal{E}_{\{N\}}(A)=\mathcal{E}_{(G^{1})}(A).\]
Thus, by using Lemma 4.3 we conclude that \(A\) is bounded.
When taking \(\mathfrak{F}\) to be the family of all small Gevrey sequences, i.e. \(\mathfrak{F}=\mathfrak{G}:=\{G^{\alpha}:\ \alpha<1\}\), we infer [15, Thm. 5.1] (see also Remark 4.4).
### An answer to the motivating question from Section 4.2
The final goal is now to combine the information from Theorem 3.4 and Theorem 4.8.
So, suppose \(\mathfrak{F}\) is a family of weight sequences satisfying:
1. \(N\in\mathcal{LC}\) for all \(N\in\mathfrak{F}\) and \(1=N_{0}=N_{1}\),
2. \(\mathfrak{F}\) has (17),
3. \(\mathfrak{F}\) is uniformly bounded by some \(\mathbf{a}=(a_{j})_{j}\) with \(a_{j}^{1/j}\to 0\), and
4. for all \(N\in\mathfrak{F}\) we have that \(n\) is log-concave.
Note that \((iii)\) gives \((n_{j})^{1/j}\to 0\) for all \(N\in\mathfrak{F}\). So \(\mathfrak{F}\) is a family as required in Lemma 4.3 and by \((i)\), \((iii)\) and \((iv)\) Theorem 3.4 can be applied to each \(N\in\mathfrak{F}\), hence
\[\forall\ N\in\mathfrak{F}:\ \ \ \mathcal{E}_{\{N\}}(I,H)\cong\mathcal{H}^{\infty}_{\underline{\mathcal{N}^{*}}_{\mathfrak{c}}}(\mathbb{C},H).\]
Summarizing, we can reformulate Theorem 4.8 as follows.
**Theorem 4.9**.: _Let \(\mathfrak{F}\) be a family of weight sequences as considered before. Suppose that for every weak solution \(y\) of (16) there exist \(N\in\mathfrak{F}\) and \(C,k>0\) such that \(y\) can be extended to an entire function with_
\[\|y(z)\|\leq Ce^{\omega_{N^{*}}(k|z|)}.\]
_Then \(A\) is already a bounded operator._
Theorem 4.9 applies to the family \(\mathfrak{G}:=\{G^{\alpha}:0\leq\alpha<1\}\) of all small Gevrey sequences.
## Appendix A Dual weight sequences
The growth and regularity assumptions for weight sequences \(M\) in Theorem 3.4 or for \(N\in\mathfrak{F}\) in Lemma 4.3, in the technical Proposition 4.5 and in Theorems 4.8, 4.9 are by far not standard in the theory of ultradifferentiable (and ultraholomorphic) functions. More precisely the sequences under consideration are required to grow very slowly or to be even non-increasing. This is due to the fact that in Theorem 3.4 resp. in Theorem 4.9 the _conjugate sequence_\(M^{*}\) resp. \(N^{*}\) plays the crucial role in order to restrict the growth. Therefore the conjugate sequence(s) is (are) required to satisfy the frequently used conditions in the weight sequence setting; e.g. in order to work with the associated function \(\omega_{M^{*}}\).
We are interested in studying and constructing such "exotic/non-standard" sequences and may ask how they are "naturally" related to standard sequences. On the one hand, as already stated in Section 2.5, formally we can start with a standard/regular sequence \(R=M^{*}\) and then get \(M\) by the formula (5) which relates \(M\) and \(M^{*}\) by a one-to-one correspondence; i.e. take \(M=R^{*}\). However, in this Section the aim is to give a completely different approach and to show how such "exotic" small sequences \(M\) are appearing and can be introduced in a natural way. The main idea is to start with \(N\in\mathcal{LC}\) (and satisfying some more standard requirements) and then consider the so-called _dual sequence_\(D\) from [5, Sect. 2.1.5].
### Preliminaries
We recall some facts and definitions from [5, Sect. 2.1.2], see also the literature citations therein and especially [1]. Moreover we refer to [7, Sect. 3]. Recall that in [5] and in [7] a sequence \(M\in\mathbb{R}^{\mathbb{N}}_{>0}\) is called a weight sequence if it satisfies all requirements from the class \(\mathcal{LC}\) except necessarily \(M_{0}\leq M_{1}\), see [5, Sect. 1.1.1, p. 29; Def. 1.1.8, p.32] and [7, Sect. 3.1].
For any given sequence \(\mathbf{a}=(a_{p})_{p}\in\mathbb{R}^{\mathbb{N}}_{>0}\) the _upper Matuszewska index_\(\alpha(\mathbf{a})\) is defined by
\[\alpha(\mathbf{a}):= \inf\{\alpha\in\mathbb{R}:\frac{a_{p}}{p^{\alpha}}\text{ is almost decreasing}\}\] \[= \inf\{\alpha\in\mathbb{R}:\exists\;H\geq 1\;\forall\;1\leq p\leq q :\quad\frac{a_{q}}{q^{\alpha}}\leq H\frac{a_{p}}{p^{\alpha}}\},\]
and the _lower Matuszewska index_\(\beta(\mathbf{a})\) by
\[\beta(\mathbf{a}):= \sup\{\beta\in\mathbb{R}:\frac{a_{p}}{p^{\beta}}\text{ is almost increasing}\}\] \[= \sup\{\beta\in\mathbb{R}:\exists\;H\geq 1\;\forall\;1\leq p\leq q :\quad\frac{a_{p}}{p^{\beta}}\leq H\frac{a_{q}}{q^{\beta}}\}.\]
Let \(N\in\mathcal{LC}\) be given. We define a new sequence \(D\), called its _dual sequence_, in terms of its quotients \(\delta=(\delta_{p})_{p\in\mathbb{N}}\) as follows, see [5, Def. 2.1.40, p. 81]:
\[\forall\;p\geq\nu_{1}(\geq 1):\quad\delta_{p+1}:=\Sigma_{N}(p),\qquad\delta_{ p+1}:=1\quad\forall\;p\in\mathbb{Z},\;-1\leq p<\nu_{1}, \tag{23}\]
and set \(D_{p}:=\prod_{i=0}^{p}\delta_{i}\). Hence \(D\in\mathcal{LC}\) with \(1=D_{0}=D_{1}\) follows by definition.
Recall that by [5, Def. 2.1.27] the function \(\nu_{\mathbf{n}}\) in [5] precisely denotes the counting function \(\Sigma_{N}\) (see (12)) and note that in the sequence of quotients there exists an index-shift: more precisely we have \(n_{p}\equiv\nu_{p+1}\) for all \(p\in\mathbb{N}\) with \(\mathbf{n}=(n_{p})_{p}\) used as in [5] and [7].
In [5, Thm. 2.1.43, p. 82] the following result has been shown:
**Theorem A.1**.: _Let \(N\in\mathcal{LC}\) be given and satisfying_
\[\exists\;A\geq 1\;\forall\;p\in\mathbb{N}:\quad\nu_{p+1}\leq A\nu_{p}. \tag{24}\]
_Then we get \(\alpha(\nu)=\frac{1}{\beta(\delta)}\) and \(\beta(\nu)=\frac{1}{\alpha(\delta)}\)._
_Note:_
1. As pointed out in [5, Sect. 2.1.3, p. 63-64] and [7, Remark 3.8], the aforementioned index shift in the sequences of quotients is not effecting the value of the Matuszewska indices \(\alpha(\cdot)\) and \(\beta(\cdot)\).
2. (24), see [5, (2.11), p. 76] and which has also appeared due to technical reasons in [18], is connected to the growth behaviors _moderate growth_ and _derivation closedness_. More precisely, in [5, Remark 2.1.36, p. 78] it has been shown that for log-convex sequences we have (25) \[(\mathrm{mg})\Longrightarrow(24)\Longrightarrow(\mathrm{dc}),\] and each implication cannot be reversed in general.
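For orientation, the following sketch computes the dual sequence for an example chosen only for illustration (and not treated in [5]): the Gevrey sequence \(N=G^{2}\), for which \(\nu_{p}=p^{2}\) and \(\Sigma_{N}(t)=\lfloor\sqrt{t}\rfloor\). Numerically one observes \(\delta_{p}/p\to 0\) and that \(D\) behaves like \(G^{1/2}\), in accordance with the index duality \(\alpha(\nu)=2\), \(\beta(\delta)=1/2\) from Theorem A.1.

```python
# Illustrative computation of the dual sequence of N = G^2 (nu_p = p^2).
import math

P = 2000
def Sigma_N(t):
    return math.isqrt(int(t))                                  # #{p >= 1 : p^2 <= t}

# (23):  delta_0 = delta_1 = 1  and  delta_{p+1} = Sigma_N(p) for p >= nu_1 = 1
delta = [1.0, 1.0] + [float(Sigma_N(p)) for p in range(1, P - 1)]

logD = [0.0]                                                   # log D_p = sum_{i <= p} log delta_i
for p in range(1, P):
    logD.append(logD[-1] + math.log(delta[p]))

for p in [10, 100, 500, 1000, 1999]:
    root_ratio = math.exp((logD[p] - 0.5 * math.lgamma(p + 1)) / p)   # (D_p / p!^{1/2})^{1/p}
    print(f"p = {p:5d}   delta_p/p = {delta[p] / p:.4f}   (D_p / p!^(1/2))^(1/p) = {root_ratio:.4f}")
```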
### Main statements
First, by applying Theorem A.1 we immediately get the following statement.
**Lemma A.2**.: _Let \(N\in\mathcal{LC}\) be given with (24). Assume that \(N\) satisfies_
\[\exists\;H\geq 1\;\exists\;\beta>1\;\forall\;1\leq p\leq q:\quad\frac{\nu_{p} }{p^{\beta}}\leq H\frac{\nu_{q}}{q^{\beta}}, \tag{26}\]
_i.e. the sequence \((\nu_{p}/p^{\beta})_{p}\) is almost increasing for some \(\beta>1\)._
_Then the dual sequence \(D\) is equivalent to a sequence \(L\) such that \(L^{*}\) is normalized and log-convex (and \(D^{*}\) is equivalent to \(L^{*}\), too)._
Proof.: By assumption we have \(\beta(\nu)\geq\beta>1\) and so \(\alpha(\delta)<1\) follows by Theorem A.1. Consequently, we have that
\[\exists\;H\geq 1\;\forall\;1\leq p\leq q:\quad\frac{\delta_{q}}{q}\leq H\frac{ \delta_{p}}{p},\]
i.e. \(p\mapsto\frac{\delta_{p}}{p}\) is almost decreasing. If we can choose \(H=1\), then we are done with \(L\equiv D\) since \(d:=(D_{p}/p!)_{p\in\mathbb{N}}\) directly is log-concave and so \(D^{*}\) is log-convex, see \((iv)\) in Section 2.6 and \((a)\) in Lemma 2.3. Note that \(D_{0}=D_{1}=1\) by definition and so \(D^{*}\) is normalized, too.
If \(H>1\), then we are applying \((a)\) in Remark 2.4 to \(M\equiv D\) in order to switch from \(D\) to the equivalent sequence \(L\) defined via (9). Thus \(p\mapsto\frac{\lambda_{p}}{p}\) is non-increasing and hence \(l:=(L_{p}/p!)_{p\in\mathbb{N}}\) is log-concave which is equivalent to the log-convexity for \(L^{*}\). Normalization for \(L^{*}\) follows since \(D_{0}=D_{1}=1\) and finally \(D^{*}\) is equivalent to \(L^{*}\) which holds by \((ii)\) in Section 2.6.
**Lemma A.3**.: _Let \(N\in\mathcal{LC}\) be given with \((n_{p})^{1/p}\to+\infty\) as \(p\to+\infty\). Then we get \(\delta_{p}/p\to 0\) and \((d_{p})^{1/p}\to 0\) as \(p\to+\infty\)._
Consequently, when combining Lemmas A.2 and A.3 we have that the sequence \(L\) defined via (9) and being equivalent to \(D\) has \(\lambda_{p}/p\to 0\) and \((l_{p})^{1/p}\to 0\) as \(p\to+\infty\), too.
Proof.: First, by (2) and Stirling's formula we see that \((n_{p})^{1/p}\to+\infty\) as \(p\to+\infty\) implies \(\nu_{p}/p\to+\infty\) as well.
Let \(C\geq 1\) be given, arbitrary but from now on fixed. Then we can find some \(p_{C}\in\mathbb{N}_{>0}\) such that \(\nu_{p}>pC\) for all \(p\geq p_{C}\) holds true. Since \(\lfloor\frac{p}{C}\rfloor\geq\frac{p}{C}-1\geq p_{C}\) is valid for all \(p\in\mathbb{N}\) with \(p\geq Cp_{C}+C(>p_{C})\) we have for all such (large) integers \(p\) that
\[\nu_{\lfloor p/C\rfloor}>\lfloor p/C\rfloor C\geq\Big{(}\frac{p}{C}-1\Big{)} \,C=p-C\geq\frac{p}{2},\]
where the last estimate is equivalent to having \(p\geq 2C\) which holds true since \(p\geq Cp_{C}+C\geq C+C=2C\). Consequently, by the definition of the counting function \(\Sigma_{N}\) and the dual sequence we have shown \(\Sigma_{N}(p/2)<\lfloor\frac{p}{C}\rfloor\leq\frac{p}{C}\) and so \(\delta_{p+1}=\Sigma_{N}(p)<\frac{2p}{C}\) for all sufficiently large integers \(p\). Now, when \(C\to+\infty\) it follows that \(\delta_{p}/p\to 0\) as \(p\to+\infty\).
Finally, since \(D\in\mathcal{LC}\) by (3) and Stirling's formula we see that \(\delta_{p}/p\to 0\) does imply \((d_{p})^{1/p}\to 0\) as \(p\to+\infty\).
Concerning these Lemmas we comment:
**Remark A.4**.: Let \(N\) satisfy the assumptions from Lemmas A.2 and A.3. Then we get for the technical sequence \(L\) constructed via the dual sequence \(D\) the following (see again \((a)\) in Remark 2.4 applied to \(D\)):
1. \(L^{*}\in\mathcal{LC}\) is valid.
2. Since \(D\) is log-convex and equivalence between sequences preserves (mg), by \((v)\) in Section 2.6 we have that both \(D^{*}\) and \(L^{*}\) have (mg).
3. Moreover, log-convexity for \(D\) implies this property for \(L\) and, indeed, \(L\) satisfies all requirements of sequences belonging to the class \(\mathcal{LC}\) except \(L_{0}\leq L_{1}\) because only \(\lambda_{1}\leq\delta_{1}=1\) is known (see (10)).
4. However, by modifying \(L\) at the beginning with the following trick one can achieve w.l.o.g. that even \(L\in\mathcal{LC}\) holds (a toy illustration is given after this remark): When \(\lambda_{1}=1\), then no modification is required. So let now \(\lambda_{1}<1\). Since \(L\) is log-convex the mapping \(p\mapsto\lambda_{p}\) is non-decreasing and \(\lambda_{p}\to+\infty\) as \(p\to+\infty\) because \(L\) is equivalent to \(D\). Thus there exists \(p_{0}\in\mathbb{N}_{>0}\) (chosen minimal) such that for all \(p>p_{0}\) we have \(\lambda_{p}\geq 1\). Then replace \(L\) by \(\widetilde{L}\) defined in terms of its quotients \(\widetilde{\lambda}_{p}\), i.e. putting \(\widetilde{L}_{p}=\prod_{i=0}^{p}\widetilde{\lambda}_{i}\), where we set \[\widetilde{\lambda}_{p}:=1,\ \ \text{for}\ 0\leq p\leq p_{0},\ \ \ \ \widetilde{\lambda}_{p}:=\lambda_{p},\ \ \text{for}\ p>p_{0}.\] Consequently we get: \(1=\widetilde{L}_{0}=\widetilde{L}_{1}\), \(\widetilde{L}\) is log-convex since \(p\mapsto\widetilde{\lambda}_{p}\) is non-decreasing and \(L\leq\widetilde{L}\leq cL\) for some \(c\geq 1\), which yields that \(\widetilde{L}\) and \(L\) are equivalent. Finally, \(\widetilde{l}\) is log-concave since \(p\mapsto\frac{\widetilde{\lambda}_{p}}{p}\) is non-increasing, which can be seen as follows: Clearly, \(\frac{\widetilde{\lambda}_{p}}{p}\geq\frac{\widetilde{\lambda}_{p+1}}{p+1}\) for all \(1\leq p\leq p_{0}-1\) and also for all \(p>p_{0}\) since \(l\) is log-concave. Then note that \(\frac{1}{p}\geq\frac{\lambda_{p}}{p}\) for all \(1\leq p\leq p_{0}\) and so \(\frac{\widetilde{\lambda}_{p_{0}}}{p_{0}}=\frac{1}{p_{0}}\geq\frac{\lambda_{p_{0}}}{p_{0}}\geq\frac{\lambda_{p_{0}+1}}{p_{0}+1}=\frac{\widetilde{\lambda}_{p_{0}+1}}{p_{0}+1}\).
Summarizing (see \((a)\) in Remark 2.4) we have that \(\widetilde{L},\widetilde{L}^{*}\in\mathcal{LC}\), \(\widetilde{L}\) is equivalent to \(D\) and \(\widetilde{L}^{*}\) is equivalent to \(D^{*}\).
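_Note:_ To illustrate the modification in \((iv)\) take, as a purely artificial example (the quotients below do not arise from a concrete dual sequence \(D\)), \(\lambda_{p}:=p/2\) for \(p\geq 1\). Then \(p\mapsto\lambda_{p}\) is non-decreasing, \(p\mapsto\lambda_{p}/p\equiv 1/2\) is non-increasing and \(\lambda_{1}=1/2<1\), so \(p_{0}=1\) and the modified quotients are \(\widetilde{\lambda}_{0}=\widetilde{\lambda}_{1}=1\) and \(\widetilde{\lambda}_{p}=p/2\) for \(p\geq 2\). Consequently

\[L_{p}=\frac{p!}{2^{p}},\qquad\widetilde{L}_{p}=\frac{p!}{2^{p-1}}=2L_{p}\quad\text{for all }p\geq 1,\qquad\widetilde{L}_{0}=\widetilde{L}_{1}=1,\]

hence \(L\leq\widetilde{L}\leq 2L\) and indeed \(\widetilde{L}\in\mathcal{LC}\) with \(\widetilde{l}\) log-concave.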
**Remark A.5**.: By the characterization given in [7, Thm. 3.11] and [7, Thm. 3.10], see also [5, Prop. 2.1.22, p. 68] and the discussion after the proof of [7, Thm. 3.11], we have the following:
(26), i.e. \(\beta(\nu)>1\), is equivalent to the fact that \(N\in\mathcal{LC}\) has \((\gamma_{1})\) or equivalently \((\beta_{1})\).
Thus \(\beta(\nu)>1\) if and only if \(N\) is _strongly non-quasianalytic_.
Recall that \((\gamma_{1})\) for \(N\) implies, in particular, that \(\lim_{p\to+\infty}(n_{p})^{1/p}=+\infty\).
Summarizing everything, in particular the information from Lemmas A.2 and A.3 and Remark A.4, we get the following main result.
**Theorem A.6**.: _Let \(N\in\mathcal{LC}\) be given and let \(D\in\mathcal{LC}\) denote the corresponding dual sequence. We assume that:_
\((*)\)_\(\beta(\nu)>1\) holds true, i.e. \(N\) is strongly non-quasianalytic and hence \((n_{p})^{1/p}\to+\infty\) as \(p\to+\infty\), and_
\((*)\)_\(N\) satisfies (24)._
_Then there exists \(L\in\mathbb{R}^{\mathbb{N}}_{>0}\) (given by (9) w.r.t. the sequence \(D\)) which is equivalent to \(D\) and such that \(L\) satisfies all requirements in order to apply Theorem 3.4 to \(L\). Moreover, the corresponding isomorphisms are valid for the class defined by \(D\) as well (see Remark 3.5) and we also have \(\alpha(\delta)=\alpha(\lambda)<1\). Finally, \(L\) is log-convex, \(D^{*}\) and \(L^{*}\) are equivalent and both satisfy \((\mathrm{mg})\)._
Proof.: This follows directly by involving Lemmas A.2 and A.3, Remark A.4 and the comments listed in Section 2.6.
**Corollary A.7**.: _Let \(N\in\mathbb{R}^{\mathbb{N}}_{>0}\) satisfy the following conditions:_
\((*)\)_\(n\in\mathcal{LC}\),_
\((*)\)_\((\gamma_{1})\), and_
\((*)\)_\((\mathrm{mg})\)._
_Then Theorem A.6 can be applied to \(N\)._
Proof.: By (25) we get that \((\mathrm{mg})\) implies (24); the other assertions follow immediately.
_Note:_
* A sequence \(N\) satisfies the assertions listed in Corollary A.7 if and only if \(n\) is formally a so-called _strongly regular sequence_ in the sense of [24, Sect. 1.1]. The sequence \(M\) in [24] precisely denotes \(m\) in the notation used in this work.
* Corollary A.7 applies to \(N\equiv G^{s}\) for any \(s>1\). On the other hand Theorem A.6 also applies to the so-called \(q\)-Gevrey sequences given by \(M^{q}:=(q^{p^{2}})_{p\in\mathbb{N}}\) with \(q>1\). Each \(M^{q}\) violates \((\mathrm{mg})\) but (24) is satisfied.
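* For the reader's convenience we indicate the underlying elementary computations; here we assume the usual normalization \(G^{s}_{p}=(p!)^{s}\) and we read \((\mathrm{mg})\) in its standard moderate-growth form \(M_{p+q}\leq C^{p+q}M_{p}M_{q}\). For \(N\equiv G^{s}\) one has \(\nu_{p}=p^{s}\), hence \(\nu_{p}/p^{s}\equiv 1\) and (26) holds with \(\beta=s>1\) and \(H=1\). For \(M^{q}=(q^{p^{2}})_{p\in\mathbb{N}}\) one has \(\mu_{p}=q^{2p-1}\) and \[\frac{M^{q}_{2p}}{(M^{q}_{p})^{2}}=q^{2p^{2}};\] since \((q^{2p^{2}})^{1/(2p)}=q^{p}\to+\infty\), no constant \(C\geq 1\) can satisfy \(M^{q}_{2p}\leq C^{2p}(M^{q}_{p})^{2}\) for all \(p\), so \((\mathrm{mg})\) indeed fails for \(M^{q}\).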
We also have the following result which shows how (15) can be obtained for the dual sequence \(D\) (and for \(L\)). This is crucial when \(D\) (resp. \(L\)) is required to belong to a family \(\mathfrak{F}\) as considered in Section 4.
**Proposition A.8**.: _Let \(N\in\mathcal{LC}\) be given, let \(D\in\mathcal{LC}\) denote the corresponding dual sequence and let \(L\) be given by (9) w.r.t. \(D\). We assume that \(N\) also satisfies_
\((*)\)_\(\alpha(\nu)<+\infty\)._
_Then \(\beta(\delta)=\beta(\lambda)>0\) and both \(\omega_{D}\) and \(\omega_{L}\) satisfy (15)._
Proof.: First, by [7, Thm. 3.16, Cor. 3.17] we know that \(\alpha(\nu)<+\infty\) implies (in fact it is even equivalent to) (mg). Consequently, also (24) holds true, see (25). Second, using these facts Theorem A.1 implies that \(\beta(\delta)>0\). Then, by [7, Thm. 3.11 \((vii)\Leftrightarrow(viii)\)] (applied to \(\beta=0\)) we get \(\gamma(D)>0\) as well (for the definition and the study of this growth index \(\gamma(\cdot)\) for weight sequences we refer to [7, Sect. 3.1]). By combining [7, Cor. 4.6 \((i)\)] and [7, Cor. 2.14] (applied to \(\sigma:=\omega_{D}\)) we have that \(\omega_{D}\) satisfies (15) and this condition is abbreviated by \((\omega_{1})\) in [7]. Finally, the equivalence between \(D\) and \(L\) clearly preserves (15) for \(\omega_{L}\) by definition of the associated weight functions and the equivalence [23, Thm. 3.1 \((ii)\Leftrightarrow(iii)\)] applied to the sequence \(D\).
Let us combine now Theorem A.6 and Proposition A.8:
**Theorem A.9**.: _Let \(N\in\mathcal{LC}\) be given, let \(D\in\mathcal{LC}\) denote the corresponding dual sequence and let \(L\) be given by (9) w.r.t. \(D\). We assume that \(N\) also satisfies_
\((*)\;\;1<\beta(\nu)\leq\alpha(\nu)<+\infty\)_._
_Then \(L\) is a sequence satisfying \((l_{p})^{1/p}\to 0\), \((ii)\) and \((iv)\) in Section 4.4 and all requirements from \((i)\) there except \(L_{0}\leq L_{1}\). However, in view of \((iv)\) in Remark A.4 also \((i)\) from Section 4.4 can be obtained when passing to \(\widetilde{L}\)._
_Note:_ By applying the technical Proposition 4.5 it is possible, when given a one-parameter family of sequences \(N^{(\beta)}\), \(\beta>0\), satisfying the requirements from Theorem A.9, to construct from the corresponding family \(\mathcal{L}:=\{L^{(\beta)}:\beta>0\}\) (resp. \(\widetilde{\mathcal{L}}:=\{\widetilde{L}^{(\beta)}:\beta>0\}\)) a technical uniform bound \(\mathbf{a}\) as required in Section 4 and hence to apply Theorem 4.9 to \(\mathcal{L}\) (resp. to \(\widetilde{\mathcal{L}}\)).
### The bidual sequence
The goal of this final section is to show how the procedure from Section A.2 can be reversed in a canonical way. Let us first recall: for any \(N\in\mathcal{LC}\) the corresponding dual sequence satisfies \(D\in\mathcal{LC}\), and so in [5, Definition 2.1.41, p. 81] the following natural definition has been given:
\[\forall\;p\geq\delta_{1}=1:\quad\epsilon_{p+1}:=\Sigma_{D}(p),\qquad\epsilon_ {0}=\epsilon_{1}:=1, \tag{27}\]
and set \(E_{p}:=\prod_{i=1}^{p}\epsilon_{i}\). Finally we put \(E_{0}:=1\) and so \(E\in\mathcal{LC}\) with \(1=E_{0}=E_{1}\) follows by definition. This sequence \(E=(E_{p})_{p\in\mathbb{N}}\) is called the _bidual sequence_ of \(N\) and in [5, Theorem 2.1.42, p. 81] it has been proven that \(N\) and \(E\) are equivalent. (In fact, even a slightly stronger equivalence on the level of the corresponding quotient sequences has been established there.)
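_Note:_ As an illustration we continue the example \(N\equiv G^{2}\) from the note after Lemma A.3 (where we assumed \(G^{2}_{p}=(p!)^{2}\), so \(\nu_{p}=p^{2}\) and \(\delta_{p+1}=\lfloor\sqrt{p}\rfloor\) for \(p\geq 1\)). For \(p\geq 1\) we have \(\delta_{j}\leq p\) precisely for \(1\leq j\leq(p+1)^{2}\) and hence

\[\epsilon_{p+1}=\Sigma_{D}(p)=(p+1)^{2},\qquad\text{i.e.}\qquad\epsilon_{p}=p^{2}=\nu_{p}\quad\text{for all }p\geq 2,\]

and also \(\epsilon_{1}=1=\nu_{1}\); so in this particular case the bidual sequence even satisfies \(E=N\).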
We prove now converse versions of Lemmas A.2 and A.3.
**Lemma A.10**.: _Let \(D\in\mathcal{LC}\) be given with \(\alpha(\delta)<1\)._
_Then the (bi)-dual sequence \(E\) defined via (27) has (26) for some \(\beta>1\) (and so \(E\) is strongly non-quasianalytic)._
Proof.: Since \(\alpha(\delta)<1\) we have that \(D\) satisfies (24), see the proof of Proposition A.8. Thus \(\beta(\epsilon)>1\) follows by Theorem A.1 and so, for some \(\beta>1\), we have
\[\exists\;H\geq 1\;\forall\;1\leq p\leq q:\quad\frac{\epsilon_{p}}{p^{\beta}} \leq H\frac{\epsilon_{q}}{q^{\beta}},\]
i.e. \(p\mapsto\frac{\epsilon_{p}}{p^{\beta}}\) is almost increasing.
**Lemma A.11**.: _Let \(D\in\mathcal{LC}\) be given with \(\delta_{p}/p\to 0\) (resp. equivalently \((d_{p})^{1/p}\to 0\)) as \(p\to+\infty\). Then the dual sequence \(E\) satisfies \(\epsilon_{p}/p\to+\infty\) and \((e_{p})^{1/p}\to+\infty\) as \(p\to+\infty\)._
Proof.: First, \(\delta_{p}/p\to 0\) as \(p\to+\infty\) if and only if \((d_{p})^{1/p}\to 0\) as \(p\to+\infty\) holds by (3).
Let \(C\geq 1\) be given, arbitrary but from now on fixed and w.l.o.g. we can take \(C\in\mathbb{N}_{>0}\). Then we find some \(p_{C}\in\mathbb{N}_{>0}\) such that \(\delta_{p}\leq pC^{-1}\) for all \(p\geq p_{C}\) holds true. For all such (large) integers \(p\) we also have \(pC\geq p_{C}\) and so \(\delta_{pC}\leq(pC)C^{-1}=p\) for all \(p\geq p_{C}\). By definition, since \(\epsilon_{p+1}=\Sigma_{D}(p)=|\{j\in\mathbb{N}_{>0}:\delta_{j}\leq p\}|\) and \(j\mapsto\delta_{j}\) is non-decreasing, we get now \(\epsilon_{p+1}\geq pC\Leftrightarrow\frac{\epsilon_{p+1}}{p}\geq C\) for all \(p\geq p_{C}\). Thus we are done because \(C\) is arbitrary (large).
Finally we get the following main result.
**Theorem A.12**.: _Let \(D\in\mathcal{LC}\) be given with \(1=D_{0}=D_{1}\) and assume that_
* \((d_{p})^{1/p}\to 0\) _as_ \(p\to+\infty\) _and_
* \(\alpha(\delta)<1\)_._
_Then one can apply Theorem 3.4 to the sequence \(L\) given by (9) and, in addition, the isomorphisms from Theorem 3.4 hold for the classes defined via \(D\) too (by Remark 3.5). The corresponding dual sequence \(E\in\mathcal{LC}\) (see (27)) is strongly non-quasianalytic._
Proof.: Note that \(\alpha(\delta)<1\) implies that \(p\mapsto\frac{\delta_{p}}{p}\) is almost decreasing. By switching to the equivalent sequence \(L\) defined via (9) applied to \(D\) we see that we can apply Theorem 3.4 to \(L\). The rest follows from Lemma A.10.